name | title | abstract | fulltext | keywords
---|---|---|---|---
train_C-41 | Evaluating Adaptive Resource Management for Distributed Real-Time Embedded Systems | A challenging problem faced by researchers and developers of distributed real-time and embedded (DRE) systems is devising and implementing effective adaptive resource management strategies that can meet end-to-end quality of service (QoS) requirements in varying operational conditions. This paper presents two contributions to research in adaptive resource management for DRE systems. First, we describe the structure and functionality of the Hybrid Adaptive Resource-management Middleware (HyARM), which provides adaptive resource management using hybrid control techniques for adapting to workload fluctuations and resource availability. Second, we evaluate the adaptive behavior of HyARM via experiments on a DRE multimedia system that distributes video in real-time. Our results indicate that HyARM yields predictable, stable, and high system performance, even in the face of fluctuating workload and resource availability. | 1. INTRODUCTION
Achieving end-to-end real-time quality of service (QoS)
is particularly important for open distributed real-time and
embedded (DRE) systems that face resource constraints, such
as limited computing power and network bandwidth.
Overutilization of these system resources can yield unpredictable
and unstable behavior, whereas under-utilization can yield
excessive system cost. A promising approach to meeting
these end-to-end QoS requirements effectively, therefore, is
to develop and apply adaptive middleware [10, 15], which is
software whose functional and QoS-related properties can be
modified either statically or dynamically. Static
modifications are carried out to reduce footprint, leverage
capabilities that exist in specific platforms, enable functional
subsetting, and/or minimize hardware/software infrastructure
dependencies. Objectives of dynamic modifications include
optimizing system responses to changing environments or
requirements, such as changing component interconnections,
power-levels, CPU and network bandwidth availability,
latency/jitter, and workload.
In open DRE systems, adaptive middleware must make
such modifications dependably, i.e., while meeting
stringent end-to-end QoS requirements, which requires the
specification and enforcement of upper and lower bounds on
system resource utilization to ensure effective use of
system resources. To meet these requirements, we have
developed the Hybrid Adaptive Resource-management
Middleware (HyARM), which is an open-source1
distributed
resource management middleware.
HyARM is based on hybrid control theoretic techniques [8],
which provide a theoretical framework for designing control of complex systems with both continuous and discrete
dynamics. In our case study, which involves a distributed
real-time video distribution system, the task of adaptive
resource management is to control the utilization of the
different resources, whose utilizations are described by
continuous variables. We achieve this by adapting the resolution
of the transmitted video, which is modeled as a continuous
variable, and by changing the frame-rate and the
compression, which are modeled by discrete actions. We have
implemented HyARM atop The ACE ORB (TAO) [13], which
is an implementation of the Real-time CORBA
specification [12]. Our results show that (1) HyARM ensures
effective system resource utilization and (2) end-to-end QoS
requirements of higher priority applications are met, even in
the face of fluctuations in workload.
The remainder of the paper is organized as follows:
Section 2 describes the architecture, functionality, and resource
utilization model of our DRE multimedia system case study;
Section 3 explains the structure and functionality of HyARM;
Section 4 evaluates the adaptive behavior of HyARM via
experiments on our multimedia system case study; Section 5
compares our research on HyARM with related work; and
Section 6 presents concluding remarks.
1 The code and examples for HyARM are available at www.dre.vanderbilt.edu/∼nshankar/HyARM/.
2. CASE STUDY: DRE MULTIMEDIA
SYSTEM
This section describes the architecture and QoS
requirements of our DRE multimedia system.
2.1 Multimedia System Architecture
Figure 1: DRE Multimedia System Architecture
The architecture for our DRE multimedia system is shown
in Figure 1 and consists of the following entities: (1) Data source (video capture by UAV), where video is captured (related to the subject of interest) by camera(s) on each UAV, followed by encoding of the raw video using a specific encoding scheme and transmission of the video to the next stage in the pipeline. (2) Data distributor (base station), where the video is processed to remove noise, followed by retransmission of the processed video to the next stage in the pipeline. (3) Sinks (command and control center), where the received video is again processed to remove noise, then decoded and finally rendered to the end user via graphical displays.
Significant improvements in video encoding/decoding and (de)compression have been made as a result of recent advances in these techniques [14]. Common video compression schemes are MPEG-1, MPEG-2, Real Video, and MPEG-4. Each compression
scheme is characterized by its resource requirement, e.g., the
computational power to (de)compress the video signal and
the network bandwidth required to transmit the compressed
video signal. Properties of the compressed video, such as
resolution and frame-rate determine both the quality and the
resource requirements of the video.
Our multimedia system case study has the following end-to-end real-time QoS requirements: (1) latency, (2) inter-frame delay (also known as jitter), (3) frame rate, and (4)
picture resolution. These QoS requirements can be
classified as being either hard or soft. Hard QoS requirements
should be met by the underlying system at all times, whereas
soft QoS requirements can be missed occasionally.2
For our
case study, we treat QoS requirements such as latency and
jitter as harder QoS requirements and strive to meet these
requirements at all times. In contrast, we treat QoS
requirements such as video frame rate and picture resolution as
softer QoS requirements and modify these video properties
adaptively to handle dynamic changes in resource availability effectively.
2 Although hard and soft are often portrayed as two discrete requirement sets, in practice they are usually two ends of a continuum ranging from softer to harder rather than two disjoint points.
2.2 DRE Multimedia System Resources
There are two primary types of resources in our DRE
multimedia system: (1) processors that provide
computational power available at the UAVs, base stations, and end
receivers and (2) network links that provide communication
bandwidth between UAVs, base stations, and end receivers.
The computing power required by the video capture and
encoding tasks depends on dynamic factors, such as speed
of the UAV, speed of the subject (if the subject is mobile),
and distance between UAV and the subject. The wireless
network bandwidth available to transmit video captured by
UAVs to base stations also depends on the wireless connectivity between the UAVs and the base station, which in turn depends on dynamic factors such as the speed of the UAVs and the relative distance between UAVs and base stations.
The bandwidth of the link between the base station and
the end receiver is limited, but more stable than the
bandwidth of the wireless network. Resource requirements and availability of resources are subject to dynamic change.
Two classes of applications - QoS-enabled and best-effort - use the multimedia system infrastructure described above to transmit video to their respective receivers. The QoS-enabled class of applications has higher priority than the best-effort class. In our study, emergency response applications belong to the QoS-enabled class and surveillance applications to the best-effort class. For example, since a stream from
an emergency response application is of higher importance
than a video stream from a surveillance application, it
receives more resources end-to-end.
Since resource availability significantly affects QoS, we use
current resource utilization as the primary indicator of
system performance. We refer to the current level of system
resource utilization as the system condition. Based on this
definition, we can classify system conditions as being either
under, over, or effectively utilized.
Under-utilization of system resources occurs when the
current resource utilization is lower than the desired lower bound
on resource utilization. In this system condition, residual
system resources (i.e., network bandwidth and
computational power) are available in large amounts after meeting
end-to-end QoS requirements of applications. These
residual resources can be used to increase the QoS of the
applications. For example, residual CPU and network bandwidth
can be used to deliver better quality video (e.g., with greater
resolution and higher frame rate) to end receivers.
Over-utilization of system resources occurs when the
current resource utilization is higher than the desired upper
bound on resource utilization. This condition can arise
from loss of resources - network bandwidth and/or
computing power at base station, end receiver or at UAV - or
may be due to an increase in resource demands by
applications. Over-utilization is generally undesirable since the
quality of the received video (such as resolution and frame
rate) and timeliness properties (such as latency and jitter)
are degraded and may result in an unstable (and thus
ineffective) system.
Effective resource utilization is the desired system
condition since it ensures that end-to-end QoS requirements of
the UAV-based multimedia system are met and utilization of
both system resources, i.e., network bandwidth and
computational power, are within their desired utilization bounds.
Section 3 describes techniques we applied to achieve effective
utilization, even in the face of fluctuating resource
availability and/or demand.
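To make these three system conditions concrete, here is a minimal, illustrative sketch (not HyARM code; the bound values below are placeholders rather than HyARM's actual settings) of how a measured utilization maps to one of the conditions:

```python
# Illustrative sketch: classify the system condition of one resource
# from its measured utilization and the desired utilization bounds.

def system_condition(utilization, lower_bound=0.5, upper_bound=0.7):
    """Return 'under', 'over', or 'effective' for a single resource."""
    if utilization < lower_bound:
        return "under"      # residual resources can be used to raise application QoS
    if utilization > upper_bound:
        return "over"       # QoS of lower-priority applications must be reduced
    return "effective"      # desired operating region

# Example: network utilization of 0.9 during a workload spike
print(system_condition(0.9))   # -> 'over'
```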
3. OVERVIEW OF HYARM
This section describes the architecture of the Hybrid
Adaptive Resource-management Middleware (HyARM). HyARM
ensures efficient and predictable system performance by
providing adaptive resource management, including monitoring
of system resources and enforcing bounds on application
resource utilization.
3.1 HyARM Structure and Functionality
Figure 2: HyARM Architecture
HyARM is composed of three types of entities shown in
Figure 2 and described below:
Resource monitors observe the overall resource
utilization for each type of resource and resource utilization per
application. In our multimedia system, there are resource
monitors for CPU utilization and network bandwidth. CPU
monitors observe the CPU resource utilization of UAVs, base
station, and end receivers. Network bandwidth monitors
observe the network resource utilization of (1) wireless network
link between UAVs and the base station and (2) wired
network link between the base station and end receivers.
The central controller maintains the system resource
utilization below a desired bound by (1) processing periodic
updates it receives from resource monitors and (2)
modifying the execution of applications accordingly, e.g., by
using different execution algorithms or operating the
application with increased/decreased QoS. This adaptation
process ensures that system resources are utilized efficiently and
end-to-end application QoS requirements are met. In our
multimedia system, the HyARM controller determines the value of application parameters such as (1) the video compression scheme (e.g., Real Video or MPEG-4), (2) the frame rate, and (3) the picture resolution. From the perspective of hybrid control theoretic techniques [8], the different video compression schemes and frame rates form the discrete variables of application execution, and the picture resolution forms the continuous variable.
Application adapters modify application execution according to parameters recommended by the controller and ensure that the operation of the application is in accordance with the recommended parameters. In the current implementation of HyARM, the application adapter modifies the input parameters of the application that affect application QoS and resource utilization - compression scheme, frame rate, and picture resolution. In our future implementations,
we plan to use resource reservation mechanisms such as
Differentiated Service [7, 3] and Class-based Kernel Resource
Management [4] to provision/reserve network and CPU
resources. In our multimedia system, the application adapter
ensures that the video is encoded at the recommended frame
rate and resolution using the specified compression scheme.
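The sketch below illustrates, in simplified form, how a controller of this kind might combine a continuous adjustment (picture resolution) with a discrete switch (frame rate). The adjustment policy, class names, and constants are hypothetical; HyARM's actual hybrid control law is not reproduced here.

```python
# Hypothetical sketch of one controller adaptation step: scale the
# continuous variable (resolution) with the utilization error and switch
# the discrete variable (frame rate) only when the error is large.

FRAME_RATES = [15, 20, 25]                    # discrete control variable

class VideoParams:
    def __init__(self, scheme, frame_rate, resolution):
        self.scheme = scheme                  # discrete (e.g., "MPEG-4", "Real Video")
        self.frame_rate = frame_rate          # discrete (frames per second)
        self.resolution = resolution          # continuous (total pixels per frame)

def control_step(params, utilization, set_point=0.7):
    error = utilization - set_point
    # Continuous adaptation: shrink (or grow) the resolution with the error.
    params.resolution = max(320 * 240, params.resolution * (1.0 - error))
    # Discrete adaptation: step the frame rate down (or up) on large errors.
    if abs(error) > 0.15:
        idx = FRAME_RATES.index(params.frame_rate)
        idx = idx - 1 if error > 0 else idx + 1
        params.frame_rate = FRAME_RATES[max(0, min(idx, len(FRAME_RATES) - 1))]
    return params

p = control_step(VideoParams("MPEG-4", 25, 1024 * 768), utilization=0.9)
print(p.resolution, p.frame_rate)
```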
3.2 Applying HyARM to the Multimedia
System Case Study
HyARM is built atop TAO [13], a widely used open-source
implementation of Real-time CORBA [12]. HyARM can be
applied to ensure efficient, predictable and adaptive resource
management of any DRE system where resource availability
and requirements are subject to dynamic change.
Figure 3 shows the interaction of various parts of the
DRE multimedia system developed with HyARM, TAO,
and TAO's A/V Streaming Service. TAO's A/V Streaming Service is an implementation of the CORBA A/V Streaming Service specification. TAO's A/V Streaming Service is
a QoS-enabled video distribution service that can transfer
video in real-time to one or more receivers. We use the A/V
Streaming Service to transmit the video from the UAVs to
the end receivers via the base station.

Figure 3: Developing the DRE Multimedia System with HyARM

Three entities of
HyARM, namely the resource monitors, central controller,
and application adapters are built as CORBA servants, so
they can be distributed throughout a DRE system.
Resource monitors are remote CORBA objects that update
the central controller periodically with the current resource
utilization. Application adapters are collocated with
applications since the two interact closely.
As shown in Figure 3, UAVs compress the data using various compression schemes, such as MPEG-1, MPEG-4, and Real Video, and use TAO's A/V Streaming Service to transmit the video to end receivers. HyARM's resource monitors continuously observe the system resource utilization and notify the central controller of the current utilization.3
The interaction between the controller and the resource
monitors uses the Observer pattern [5]. When the controller
receives resource utilization updates from monitors, it
computes the necessary modifications to application(s)
parameters and notifies application adapter(s) via a remote
operation call. Application adapter(s), which are collocated with the application, modify the input parameters of the application - in our case, the video encoder - to modify the application's resource utilization and QoS.
3 The base station is not included in the figure since it only retransmits the video received from UAVs to end receivers.
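The monitor-to-controller interaction described above follows the Observer pattern [5]. The skeleton below illustrates that flow in plain Python; class and method names are illustrative only and do not correspond to HyARM's CORBA interfaces, where the notifications are remote operation calls.

```python
# Observer-pattern skeleton of the monitor -> controller -> adapter flow.
# Names are illustrative; in HyARM these entities are CORBA servants.

class ResourceMonitor:                      # subject: publishes utilization updates
    def __init__(self, name):
        self.name, self._observers = name, []

    def attach(self, observer):
        self._observers.append(observer)

    def report(self, utilization):          # called periodically by the monitor
        for obs in self._observers:
            obs.update(self.name, utilization)

class CentralController:                    # observer of every resource monitor
    def __init__(self, adapters):
        self.adapters = adapters

    def update(self, resource, utilization):
        recommendation = {"resource": resource, "utilization": utilization}
        for adapter in self.adapters:       # would be a remote call in HyARM
            adapter.apply(recommendation)

class ApplicationAdapter:                   # adjusts the video encoder's inputs
    def apply(self, recommendation):
        print("re-encoding with parameters derived from", recommendation)

controller = CentralController([ApplicationAdapter()])
monitor = ResourceMonitor("uav1-network")
monitor.attach(controller)
monitor.report(0.82)
```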
4. PERFORMANCE RESULTS AND
ANALYSIS
This section first describes the testbed that provides the
infrastructure for our DRE multimedia system, which was
used to evaluate the performance of HyARM. We then
describe our experiments and analyze the results obtained to
empirically evaluate how HyARM behaves during under- and over-utilization of system resources.
4.1 Overview of the Hardware and Software
Testbed
Our experiments were performed on the Emulab testbed
at University of Utah. The hardware configuration consists
of two nodes acting as UAVs, one acting as base station,
and one as end receiver. Video from the two UAVs was transmitted to the base station via a LAN configured with
the following properties: average packet loss ratio of 0.3 and
bandwidth 1 Mbps. The network bandwidth was chosen to
be 1 Mbps since each UAV in the DRE multimedia system
is allocated 250 Kbps. These parameters were chosen to
emulate an unreliable wireless network with limited bandwidth
between the UAVs and the base station. From the base
station, the video was retransmitted to the end receiver via a
reliable wireline link of 10 Mbps bandwidth with no packet
loss.
The hardware configuration of all the nodes was chosen as
follows: 600 MHz Intel Pentium III processor, 256 MB
physical memory, 4 Intel EtherExpress Pro 10/100 Mbps Ethernet
ports, and 13 GB hard drive. A real-time version of Linux - TimeSys Linux/NET 3.1.214, based on RedHat Linux 9 - was used as the operating system for all nodes. The
following software packages were also used for our experiments: (1)
Ffmpeg 0.4.9-pre1, which is an open-source library (http://www.ffmpeg.sourceforge.net/download.php) that compresses video into MPEG-2, MPEG-4, Real Video, and many other video formats. (2) Iftop 0.16, which is an open-source library (http://www.ex-parrot.com/∼pdw/iftop/) we used for monitoring network activity and bandwidth utilization. (3) ACE 5.4.3 + TAO 1.4.3, which is an open-source (http://www.dre.vanderbilt.edu/TAO) implementation of the Real-time CORBA [12] specification upon which
HyARM is built. TAO provides the CORBA Audio/Video
(A/V) Streaming Service that we use to transmit the video
from the UAVs to end receivers via the base station.
4.2 Experiment Configuration
Our experiment consisted of two (emulated) UAVs that simultaneously sent video to the base station using the
experimentation setup described in Section 4.1. At the base
station, video was retransmitted to the end receivers (without
any modifications), where it was stored to a file. Each UAV
hosted two applications, one QoS-enabled application
(emergency response), and one best-effort application
(surveillance). Within each UAV, computational power is shared
between the applications, while the network bandwidth is
shared among all applications.
To evaluate the QoS provided by HyARM, we monitored
CPU utilization at the two UAVs, and network bandwidth
utilization between the UAV and the base station. CPU
resource utilization was not monitored at the base station and
the end receiver since they performed no computationally intensive operations. The resource utilization of the 10 Mbps physical link between the base station and the end receiver does not affect the QoS of applications and is not monitored by HyARM since it is nearly 10 times the 1 Mbps bandwidth of the LAN between the UAVs and the base station. The
experiment also monitors properties of the video that affect
the QoS of the applications, such as latency, jitter, frame
rate, and resolution.
The set point on resource utilization for each resource was specified at 0.69, which is the upper bound typically recommended by scheduling techniques such as the rate monotonic algorithm [9]. Since studies [6] have shown that human eyes can perceive delays of more than 200 ms, we use this as the upper bound on the jitter of the received video. QoS requirements for each class of application are specified during system initialization and are shown in Table 1.
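For reference, this experiment configuration can be summarized as a plain data structure. The values are taken from this section and Table 1; the dictionary layout itself is ours, not part of the original setup.

```python
# Experiment configuration from Section 4.2, collected in one place.
EXPERIMENT_CONFIG = {
    "utilization_set_point": 0.69,       # per resource, from the RMA bound [9]
    "jitter_upper_bound_ms": 200,        # human-perceptible delay threshold [6]
    "qos_requirements": {                # per-class requirements (Table 1)
        "qos_enabled": {"resolution": (1024, 768), "frame_rate": 25,
                        "latency_ms": 200, "jitter_ms": 200},
        "best_effort": {"resolution": (320, 240), "frame_rate": 15,
                        "latency_ms": 300, "jitter_ms": 250},
    },
}
```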
4.3 Empirical Results and Analysis
This section presents the results obtained from running
the experiment described in Section 4.2 on our DRE
multimedia system testbed. We used system resource utilization
as a metric to evaluate the adaptive resource management
capabilities of HyARM under varying input workloads. We also used application QoS as a metric to evaluate HyARM's capabilities to support the end-to-end QoS requirements of the
various classes of applications in the DRE multimedia
system. We analyze these results to explain the significant
differences in system performance and application QoS.
Comparison of system performance is decomposed into
comparison of resource utilization and application QoS. For
system resource utilization, we compare (1) network
bandwidth utilization of the local area network and (2) CPU
utilization at the two UAV nodes. For application QoS, we
compare mean values of video parameters, including (1)
picture resolution, (2) frame rate, (3) latency, and (4) jitter.
Comparison of resource utilization. Over-utilization
of system resources in DRE systems can yield an unstable
system. In contrast, under-utilization of system resources
increases system cost. Figure 4 and Figure 5 compare the
system resource utilization with and without HyARM.
Figure 4 shows that HyARM maintains system utilization close to the desired utilization set point during fluctuations in input workload by transmitting video of higher (or lower) QoS for the QoS-enabled (or best-effort) class of applications during over- (or under-) utilization of system resources.
Figure 5 shows that without HyARM, network utilization was as high as 0.9 under increased workload conditions, which exceeds the utilization set point of 0.7 by 0.2. As a result of over-utilization of resources, the QoS of
the received video, such as average latency and jitter, was
affected significantly. Without HyARM, system resources
were either under-utilized or over-utilized, both of which
are undesirable. In contrast, with HyARM, system resource
utilization is always close to the desired set point, even
during fluctuations in application workload. During
sudden fluctuation in application workload, system conditions
may be temporarily undesirable, but are restored to the
desired condition within several sampling periods. Temporary
over-utilization of resources is permissible in our multimedia
system since the quality of the video may be degraded for
a short period of time, though application QoS will be
degraded significantly if poor quality video is transmitted for
a longer period of time.
| Class | Resolution | Frame Rate | Latency (msec) | Jitter (msec) |
|---|---|---|---|---|
| QoS-enabled | 1024 x 768 | 25 | 200 | 200 |
| Best-effort | 320 x 240 | 15 | 300 | 250 |

Table 1: Application QoS Requirements

Figure 4: Resource utilization with HyARM
Figure 5: Resource utilization without HyARM

Comparison of application QoS. Figure 6, Figure 7, and Table 2 compare the latency, jitter, resolution, and frame rate of the received video, respectively. Table 2 shows that HyARM increases the resolution and frame rate of QoS-enabled applications, but decreases the resolution and frame rate of best-effort applications. During over-utilization of
system resources, resolution and frame rate of lower priority
applications are reduced to adapt to fluctuations in
application workload and to maintain the utilization of resources
at the specified set point.
It can be seen from Figure 6 and Figure 7 that HyARM
reduces the latency and jitter of the received video
significantly. These figures show that the QoS of QoS-enabled
applications is greatly improved by HyARM. Although application parameters such as frame rate and resolution, which affect the soft QoS requirements of best-effort applications, may be compromised, the hard QoS requirements, such as latency and jitter, of all applications are met.
HyARM responds to fluctuation in resource availability
and/or demand by constant monitoring of resource
utilization. As shown in Figure 4, when resource utilization increases above the desired set point, HyARM lowers the
utilization by reducing the QoS of best-effort applications. This
adaptation ensures that enough resources are available for
QoS-enabled applications to meet their QoS needs.
Figures 6 and 7 show that the values of latency and jitter of
the received video of the system with HyARM are nearly half
of the corresponding value of the system without HyARM.
With HyARM, the values of these parameters are well below the specified bounds, whereas without HyARM, these values are significantly above the specified bounds due to over-utilization of the network bandwidth, which leads to network
congestion and results in packet loss. HyARM avoids this
by reducing video parameters such as resolution, frame-rate,
and/or modifying the compression scheme used to compress
the video.
Our conclusions from analyzing the results described above are that applying adaptive middleware via hybrid control to DRE systems helps to (1) improve application QoS, (2) increase system resource utilization, and (3) provide better predictability (lower latency and inter-frame delay) to QoS-enabled applications. These improvements are achieved largely due to monitoring of system resource utilization, efficient system workload management, and adaptive resource provisioning by means of HyARM's network/CPU resource monitors, application adapters, and central controller, respectively.
5. RELATED WORK
A number of control theoretic approaches have been
applied to DRE systems recently. These techniques aid in
overcoming limitations with traditional scheduling approaches
that handle dynamic changes in resource availability poorly
and result in a rigidly scheduled system that adapts poorly
to change. A survey of these techniques is presented in [1].
One such approach is feedback control scheduling (FCS) [2,
11]. FCS algorithms dynamically adjust resource allocation
by means of software feedback control loops. FCS
algorithms are modeled and designed using rigorous control-theoretic methodologies. These algorithms provide robust
and analytical performance assurances despite uncertainties
in resource availability and/or demand. Although existing
FCS algorithms have shown promise, these algorithms often
assume that the system has continuous control variable(s)
that can continuously be adjusted. While this assumption
holds for certain classes of systems, there are many classes of DRE systems, such as avionics and total-ship computing environments, that only support a finite, a priori set of discrete configurations. The control variables in such systems are therefore intrinsically discrete.
HyARM handles both continuous control variables, such as picture resolution, and discrete control variables, such as a discrete set of frame rates. HyARM can therefore be applied to systems that support continuous and/or discrete sets of control variables. The DRE multimedia system described
in Section 2 is an example DRE system that offers both
continuous (picture resolution) and discrete set (frame-rate) of
control variables. These variables are modified by HyARM
to achieve efficient resource utilization and improved
application QoS.
6. CONCLUDING REMARKS
Figure 6: Comparison of Video Latency
Figure 7: Comparison of Video Jitter

| Source | With HyARM (Picture Size / Frame Rate) | Without HyARM (Picture Size / Frame Rate) |
|---|---|---|
| UAV1 QoS-Enabled Application | 1122 x 1496 / 25 | 960 x 720 / 20 |
| UAV1 Best-effort Application | 288 x 384 / 15 | 640 x 480 / 20 |
| UAV2 QoS-Enabled Application | 1126 x 1496 / 25 | 960 x 720 / 20 |
| UAV2 Best-effort Application | 288 x 384 / 15 | 640 x 480 / 20 |

Table 2: Comparison of Video Quality
Many distributed real-time and embedded (DRE) systems
demand end-to-end quality of service (QoS) enforcement
from their underlying platforms to operate correctly. These
systems increasingly run in open environments, where
resource availability is subject to dynamic change. To meet
end-to-end QoS in dynamic environments, DRE systems can
benefit from an adaptive middleware that monitors system
resources, performs efficient application workload
management, and enables efficient resource provisioning for
executing applications.
This paper described HyARM, an adaptive middleware,
that provides effective resource management to DRE
systems. HyARM employs hybrid control techniques to
provide the adaptive middleware capabilities, such as resource
monitoring and application adaptation that are key to
providing the dynamic resource management capabilities for
open DRE systems. We applied HyARM to a representative DRE multimedia system that is implemented using Real-time CORBA and the CORBA A/V Streaming Service.
We evaluated the performance of HyARM in a system
composed of three distributed resources and two classes of
applications with two applications each. Our empirical results indicate that HyARM ensures (1) efficient resource utilization, by maintaining the utilization of system resources within the specified utilization bounds, and (2) that the QoS requirements of QoS-enabled applications are met at all times.
Overall, HyARM ensures efficient, predictable, and adaptive
resource management for DRE systems.
7. REFERENCES
[1] T. F. Abdelzaher, J. Stankovic, C. Lu, R. Zhang, and Y. Lu.
Feedback Performance Control in Software Services. IEEE:
Control Systems, 23(3), June 2003.
[2] L. Abeni, L. Palopoli, G. Lipari, and J. Walpole. Analysis of a
reservation-based feedback scheduler. In IEEE Real-Time
Systems Symposium, Dec. 2002.
[3] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and
W. Weiss. An architecture for differentiated services. Network
Information Center RFC 2475, Dec. 1998.
[4] H. Franke, S. Nagar, C. Seetharaman, and V. Kashyap.
Enabling Autonomic Workload Management in Linux. In
Proceedings of the International Conference on Autonomic
Computing (ICAC), New York, New York, May 2004. IEEE.
[5] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design
Patterns: Elements of Reusable Object-Oriented Software.
Addison-Wesley, Reading, MA, 1995.
[6] G. Ghinea and J. P. Thomas. Qos impact on user perception
and understanding of multimedia video clips. In
MULTIMEDIA '98: Proceedings of the sixth ACM
international conference on Multimedia, pages 49-54, Bristol,
United Kingdom, 1998. ACM Press.
[7] Internet Engineering Task Force. Differentiated Services
Working Group (diffserv) Charter.
www.ietf.org/html.charters/diffserv-charter.html, 2000.
[8] X. Koutsoukos, R. Tekumalla, B. Natarajan, and C. Lu. Hybrid
Supervisory Control of Real-Time Systems. In 11th IEEE
Real-Time and Embedded Technology and Applications
Symposium, San Francisco, California, Mar. 2005.
[9] J. Lehoczky, L. Sha, and Y. Ding. The Rate Monotonic
Scheduling Algorithm: Exact Characterization and Average
Case Behavior. In Proceedings of the 10th IEEE Real-Time
Systems Symposium (RTSS 1989), pages 166-171. IEEE
Computer Society Press, 1989.
[10] J. Loyall, J. Gossett, C. Gill, R. Schantz, J. Zinky, P. Pal,
R. Shapiro, C. Rodrigues, M. Atighetchi, and D. Karr.
Comparing and Contrasting Adaptive Middleware Support in
Wide-Area and Embedded Distributed Object Applications. In
Proceedings of the 21st International Conference on
Distributed Computing Systems (ICDCS-21), pages 625-634.
IEEE, Apr. 2001.
[11] C. Lu, J. A. Stankovic, G. Tao, and S. H. Son. Feedback
Control Real-Time Scheduling: Framework, Modeling, and
Algorithms. Real-Time Systems Journal, 23(1/2):85-126, July
2002.
[12] Object Management Group. Real-time CORBA Specification,
OMG Document formal/02-08-02 edition, Aug. 2002.
[13] D. C. Schmidt, D. L. Levine, and S. Mungee. The Design and
Performance of Real-Time Object Request Brokers. Computer
Communications, 21(4):294-324, Apr. 1998.
[14] Thomas Sikora. Trends and Perspectives in Image and Video
Coding. In Proceedings of the IEEE, Jan. 2005.
[15] X. Wang, H.-M. Huang, V. Subramonian, C. Lu, and C. Gill.
CAMRIT: Control-based Adaptive Middleware for Real-time
Image Transmission. In Proc. of the 10th IEEE Real-Time and
Embedded Tech. and Applications Symp. (RTAS), Toronto,
Canada, May 2004.
| real-time video distribution system;dynamic environment;hybrid system;video encoding/decoding;quality of service;streaming service;hybrid adaptive resourcemanagement middleware;distributed real-time embedded system;distribute real-time embed system;adaptive resource management;service end-to-end quality;hybrid control technique;service quality;real-time corba specification;end-to-end quality of service;resource reservation mechanism |
train_C-42 | Demonstration of Grid-Enabled Ensemble Kalman Filter Data Assimilation Methodology for Reservoir Characterization | Ensemble Kalman filter data assimilation methodology is a popular approach for hydrocarbon reservoir simulations in energy exploration. In this approach, an ensemble of geological models and production data of oil fields is used to forecast the dynamic response of oil wells. The Schlumberger ECLIPSE software is used for these simulations. Since models in the ensemble do not communicate, message-passing implementation is a good choice. Each model checks out an ECLIPSE license and therefore, parallelizability of reservoir simulations depends on the number of licenses available. We have Grid-enabled the ensemble Kalman filter data assimilation methodology for the TIGRE Grid computing environment. By pooling the licenses and computing resources across the collaborating institutions using GridWay metascheduler and TIGRE environment, the computational accuracy can be increased while reducing the simulation runtime. In this paper, we provide an account of our efforts in Grid-enabling the ensemble Kalman Filter data assimilation methodology. Potential benefits of this approach, observations and lessons learned will be discussed. | 1. INTRODUCTION
Grid computing [1] is an emerging collaborative
computing paradigm to extend institution/organization
specific high performance computing (HPC) capabilities
greatly beyond local resources. Its importance stems from
the fact that ground breaking research in strategic
application areas such as bioscience and medicine, energy
exploration and environmental modeling involve strong
interdisciplinary components and often require intercampus
collaborations and computational capabilities beyond
institutional limitations.
The Texas Internet Grid for Research and Education
(TIGRE) [2,3] is a state funded cyberinfrastructure
development project carried out by five (Rice, A&M, TTU,
UH and UT Austin) major university systems - collectively
called TIGRE Institutions. The purpose of TIGRE is to
create a higher education Grid to sustain and extend
research and educational opportunities across Texas.
TIGRE is a project of the High Performance Computing
across Texas (HiPCAT) [4] consortium. The goal of
HiPCAT is to support advanced computational technologies
to enhance research, development, and educational
activities.
The primary goal of TIGRE is to design and deploy
state-of-the-art Grid middleware that enables integration of
computing systems, storage systems and databases,
visualization laboratories and displays, and even
instruments and sensors across Texas. The secondary goal
is to demonstrate the TIGRE capabilities to enhance
research and educational opportunities in strategic
application areas of interest to the State of Texas. These are
bioscience and medicine, energy exploration and air quality
modeling. Vision of the TIGRE project is to foster
interdisciplinary and intercampus collaborations, identify
novel approaches to extend academic-government-private
partnerships, and become a competitive model for external
funding opportunities. The overall goal of TIGRE is to
support local, campus and regional user interests and offer
avenues to connect with national Grid projects such as
Open Science Grid [5], and TeraGrid [6].
Within the energy exploration strategic application area,
we have Grid-enabled the ensemble Kalman Filter (EnKF)
[7] approach for data assimilation in reservoir modeling and
demonstrated the extensibility of the application using the
TIGRE environment and the GridWay [8] metascheduler.
Section 2 provides an overview of the TIGRE environment
and capabilities. Application description and the need for
Grid-enabling EnKF methodology is provided in Section 3.
The implementation details and merits of our approach are
discussed in Section 4. Conclusions are provided in Section
5. Finally, observations and lessons learned are documented
in Section 6.
2. TIGRE ENVIRONMENT
The TIGRE Grid middleware consists of minimal set of
components derived from a subset of the Virtual Data
Toolkit (VDT) [9] which supports a variety of operating
systems. The purpose of choosing a minimal software stack
is to support applications at hand, and to simplify
installation and distribution of client/server stacks across
TIGRE sites. Additional components will be added as they
become necessary. The PacMan [10] packaging and
distribution mechanism is employed for TIGRE
client/server installation and management. The PacMan
distribution mechanism involves retrieval, installation, and
often configuration of the packaged software. This
approach allows the clients to keep current, consistent
versions of TIGRE software. It also helps TIGRE sites to
install the needed components on resources distributed
throughout the participating sites. The TIGRE client/server
stack consists of an authentication and authorization layer,
Globus GRAM4-based job submission via web services
(pre-web services installations are available up on request).
The tools for handling Grid proxy generation, Grid-enabled
file transfer and Grid-enabled remote login are supported.
The pertinent details of TIGRE services and tools for job
scheduling and management are provided below.
2.1. Certificate Authority
The TIGRE security infrastructure includes a certificate
authority (CA) accredited by the International Grid Trust
Federation (IGTF) for issuing X.509 user and resource
Grid certificates [11]. The Texas Advanced Computing
Center (TACC), University of Texas at Austin is the
TIGRE's shared CA. The TIGRE Institutions serve as
Registration Authorities (RA) for their respective local user
base. For up-to-date information on securing user and
resource certificates and their installation instructions see
ref [2]. The users and hosts on TIGRE are identified by
their distinguished name (DN) in their X.509 certificate
provided by the CA. A native Grid-mapfile that contains a
list of authorized DNs is used to authenticate and authorize
user job scheduling and management on TIGRE site
resources. At Texas Tech University, the users are
dynamically allocated one of the many generic pool
accounts. This is accomplished through the Grid User
Management System (GUMS) [12].
2.2. Job Scheduling and Management
The TIGRE environment supports GRAM4-based job
submission via web services. The job submission scripts are
generated using XML. The web services GRAM translates
the XML scripts into target cluster specific batch schedulers
such as LSF, PBS, or SGE. The high bandwidth file transfer
protocols such as GridFTP are utilized for staging files in
and out of the target machine. The login to remote hosts for
compilation and debugging is only through GSISSH service
which requires resource authentication through X.509
certificates. The authentication and authorization of Grid
jobs are managed by issuing Grid certificates to both users
and hosts. The certificate revocation lists (CRL) are
updated on a daily basis to maintain high security standards
of the TIGRE Grid services. The TIGRE portal [2]
documentation area provides a quick start tutorial on
running jobs on TIGRE.
2.3. Metascheduler
The metascheduler interoperates with the cluster level
batch schedulers (such as LSF, PBS) in the overall Grid
workflow management. In the present work, we have
employed GridWay [8] metascheduler - a Globus incubator
project - to schedule and manage jobs across TIGRE.
The GridWay is a light-weight metascheduler that fully
utilizes Globus functionalities. It is designed to provide
efficient use of dynamic Grid resources by multiple users
for Grid infrastructures built on top of Globus services. The
TIGRE site administrator can control the resource sharing
through a powerful built-in scheduler provided by GridWay or by extending GridWay's external scheduling module to provide their own scheduling policies. Application users can write job descriptions using GridWay's simple and direct job template format (see Section 4 for details) or
standard Job Submission Description Language (JSDL).
See section 4 for implementation details.
2.4. Customer Service Management System
A TIGRE portal [2] was designed and deployed to interface
users and resource providers. It was designed using
GridPort [13] and is maintained by TACC. The TIGRE
environment is supported by open source tools such as the
Open Ticket Request System (OTRS) [14] for servicing
trouble tickets, and MoinMoin [15] Wiki for TIGRE
content and knowledge management for education, outreach
and training. The links for OTRS and Wiki are consumed
by the TIGRE portal [2] - the gateway for users and
resource providers. The TIGRE resource status and loads
are monitored by the Grid Port Information Repository
(GPIR) service of the GridPort toolkit [13] which interfaces
with local cluster load monitoring service such as Ganglia.
The GPIR utilizes cron jobs on each resource to gather
site specific resource characteristics such as jobs that are
running, queued and waiting for resource allocation.
3. ENSEMBLE KALMAN FILTER
APPLICATION
The main goal of hydrocarbon reservoir simulations is to
forecast the production behavior of oil and gas field
(denoted as field hereafter) for its development and optimal
management. In reservoir modeling, the field is divided into
several geological models as shown in Figure 1. For
accurate performance forecasting of the field, it is necessary
to reconcile several geological models to the dynamic
response of the field through history matching [16-20].
Figure 1. Cross-sectional view of the Field. Vertical
layers correspond to different geological models and the
nails are oil wells whose historical information will be
used for forecasting the production behavior.
(Figure Ref:http://faculty.smu.edu/zchen/research.html).
The EnKF is a Monte Carlo method that works with an
ensemble of reservoir models. This method utilizes cross-covariances [21] between the field measurements and the reservoir model parameters (derived from several models)
to estimate prediction uncertainties. The geological model
parameters in the ensemble are sequentially updated with a
goal to minimize the prediction uncertainties. Historical
production response of the field for over 50 years is used in
these simulations. The main advantage of EnKF is that it
can be readily linked to any reservoir simulator, and can
assimilate latest production data without the need to re-run
the simulator from initial conditions. Researchers in Texas
are large subscribers of the Schlumberger ECLIPSE [22]
package for reservoir simulations. In the reservoir
modeling, each geological model checks out an ECLIPSE
license. The simulation runtime of the EnKF methodology
depends on the number of geological models used, number
of ECLIPSE licenses available, production history of the
field, and propagated uncertainties in history matching.
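For reference, the update that these cross-covariances feed into is the standard EnKF analysis step. The paper does not write it out; the equations below use the conventional EnKF notation rather than symbols taken from this text:

$$x_j^{a} = x_j^{f} + K\left(d_j - H x_j^{f}\right), \qquad K = C^{f} H^{\mathsf{T}}\left(H C^{f} H^{\mathsf{T}} + R\right)^{-1}, \qquad j = 1,\dots,N,$$

where $x_j^{f}$ and $x_j^{a}$ are the forecast and updated (analyzed) states of ensemble member $j$, $d_j$ are the (perturbed) production measurements, $H$ is the observation operator, $C^{f}$ is the forecast covariance estimated from the ensemble (whose measurement-parameter blocks are the cross-covariances mentioned above), and $R$ is the measurement error covariance.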
The overall EnKF workflow is shown in Figure 2.
Figure 2. Ensemble Kalman Filter Data Assimilation Workflow. Each site has L licenses.
At START, the master/control process (EnKF main program) reads the simulation configuration file for the number (N) of models and the model-specific input files. Then, N working directories are created to store the output files. At the end of each iteration, the master/control process collects the output files from the N models and post-processes cross-covariances [21] to estimate the prediction uncertainties.
This information will be used to update models (or input
files) for the next iteration. The simulation continues until
the production histories are exhausted.
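The schematic below sketches this master/control loop in Python. The forward run and the covariance-based update are replaced by toy stubs, since in the real workflow they are delegated to ECLIPSE and to the EnKF analysis code; only the loop structure mirrors the description above.

```python
# Schematic of the EnKF master/control loop (toy stubs, not project code).
import os, random

def forward_run(workdir, model_param, step):
    """Stub for one ECLIPSE forward simulation (one license checkout per call)."""
    return model_param + random.gauss(0, 0.1)

def update_models(params, predictions, observation):
    """Toy stand-in for the cross-covariance-based EnKF update."""
    mean_pred = sum(predictions) / len(predictions)
    return [p + 0.5 * (observation - mean_pred) for p in params]

N = 5                                    # ensemble size (the paper uses N around 50)
params = [random.gauss(1.0, 0.2) for _ in range(N)]
production_history = [1.0, 1.1, 1.2]     # one field observation per assimilation step

for step, obs in enumerate(production_history):
    predictions = []
    for m in range(N):
        workdir = f"model_{m:03d}"
        os.makedirs(workdir, exist_ok=True)           # N working directories
        predictions.append(forward_run(workdir, params[m], step))
    params = update_models(params, predictions, obs)  # inputs for the next iteration
```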
A typical EnKF simulation with N=50 and field histories of 50-60 years, in time steps ranging from three months to a year, takes about three weeks in a serial computing environment.
In a parallel computing environment, there is no inter-process communication between the geological models in the ensemble. However, at the end of each simulation time-step, model-specific output files must be collected for analyzing cross-covariances [21] and for preparing the next set of input files. Therefore, a master-slave model in a message-passing (MPI) environment is a suitable paradigm. In this approach, the geological models are treated as slaves and are distributed across the available processors. The master
process collects the model-specific output files, analyzes them, and prepares the next set of input files for the simulation. Since
each geological model checks out an ECLIPSE license,
parallelizability of the simulation depends on the number of
licenses available. When the available number of licenses is
less than the number of models in the ensemble, one or
more of the nodes in the MPI group have to handle more
than one model in a serial fashion and therefore, it takes
longer to complete the simulation.
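A quick back-of-the-envelope illustration of this license constraint: with N ensemble members and L licenses, the members must be processed in ceil(N/L) serial rounds per simulation step.

```python
from math import ceil

def serial_rounds(num_models, num_licenses):
    """Number of serial rounds needed when licenses are the bottleneck."""
    return ceil(num_models / num_licenses)

print(serial_rounds(50, 10))   # N=50 models, L=10 licenses -> 5 rounds per step
print(serial_rounds(50, 50))   # one license per model -> fully parallel, 1 round
```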
A Petroleum Engineering Department usually procures 10-15 ECLIPSE licenses, while at least a ten-fold increase in the number of licenses would be necessary for industry-standard simulations. The number of licenses can be
increased by involving several Petroleum Engineering
Departments that support ECLIPSE package.
Since MPI does not scale very well for applications that
involve remote compute clusters, and to get around the
firewall issues with license servers across administrative
domains, Grid-enabling the EnKF workflow seems to be
necessary. With this motivation, we have implemented
Grid-enabled EnKF workflow for the TIGRE environment
and demonstrated parallelizability of the application across
TIGRE using GridWay metascheduler. Further details are
provided in the next section.
4. IMPLEMENTATION DETAILS
To Grid-enable the EnKF approach, we have eliminated the MPI code for parallel processing and replaced it with N single-processor jobs (or sub-jobs), where N is the number of geological models in the ensemble. These model-specific
sub-jobs were distributed across TIGRE sites that support
ECLIPSE package using the GridWay [8] metascheduler.
For each sub-job, we have constructed a GridWay job
template that specifies the executable, input and output
files, and resource requirements. Since the TIGRE compute
resources are not expected to change frequently, we have used a static resource discovery policy for GridWay, and the sub-jobs were scheduled dynamically across the TIGRE
resources using GridWay. Figure 3 represents the sub-job
template file for the GridWay metascheduler.
EXECUTABLE=runFORWARD
REQUIREMENTS=HOSTNAME=cosmos.tamu.edu |
HOSTNAME=antaeus.hpcc.ttu.edu |
HOSTNAME=minigar.hpcc.ttu.edu
ARGUMENTS=001
INPUT_FILES=001.in.tar
OUTPUT_FILES=001.out.tar

Figure 3. GridWay Sub-Job Template
In Figure 3, the REQUIREMENTS flag is set to choose the resources that satisfy the application requirements. In the case of the EnKF application, for example, we need resources that support the ECLIPSE package. The ARGUMENTS flag specifies the model in the ensemble that will invoke ECLIPSE at a remote site. INPUT_FILES is prepared by the EnKF main program (or master/control process) and is transferred by GridWay to the remote site, where it is untarred and prepared for execution. Finally, OUTPUT_FILES specifies the name and location where the output files are to be written.
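Since one such template is needed per model, the templates can be generated mechanically. The sketch below does this for 20 sub-jobs; the flag names and host list follow Figure 3, while the file-naming scheme and the helper function are our own illustration rather than the project's tooling.

```python
# Generate one GridWay job template per ensemble member (illustrative).
HOSTS = ["cosmos.tamu.edu", "antaeus.hpcc.ttu.edu", "minigar.hpcc.ttu.edu"]

def write_job_template(model_id, path):
    requirements = " | ".join(f"HOSTNAME={h}" for h in HOSTS)
    template = (
        "EXECUTABLE=runFORWARD\n"
        f"REQUIREMENTS={requirements}\n"
        f"ARGUMENTS={model_id:03d}\n"
        f"INPUT_FILES={model_id:03d}.in.tar\n"
        f"OUTPUT_FILES={model_id:03d}.out.tar\n"
    )
    with open(path, "w") as f:
        f.write(template)

for m in range(1, 21):                        # 20 sub-jobs, as in Section 4
    write_job_template(m, f"enkf_{m:03d}.jt")
```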
The command-line features of GridWay were used to collect and process the model-specific outputs to prepare a new set of input files. This step mimics MPI process synchronization in the master-slave model. At the end of each iteration, the compute resources and licenses are committed back to the pool. Table 1 shows the sub-jobs in the TIGRE Grid via GridWay using the gwps command; for clarity, only selected columns are shown.
| USER | JID | DM | EM | NAME | HOST |
|---|---|---|---|---|---|
| pingluo | 88 | wrap | pend | enkf.jt | antaeus.hpcc.ttu.edu/LSF |
| pingluo | 89 | wrap | pend | enkf.jt | antaeus.hpcc.ttu.edu/LSF |
| pingluo | 90 | wrap | actv | enkf.jt | minigar.hpcc.ttu.edu/LSF |
| pingluo | 91 | wrap | pend | enkf.jt | minigar.hpcc.ttu.edu/LSF |
| pingluo | 92 | wrap | done | enkf.jt | cosmos.tamu.edu/PBS |
| pingluo | 93 | wrap | epil | enkf.jt | cosmos.tamu.edu/PBS |
Table 1. Job scheduling across TIGRE using the GridWay metascheduler. DM: dispatch state, EM: execution state; JID is the job id, and HOST corresponds to the site-specific cluster and its local batch scheduler.
When a job is submitted to GridWay, it will go through a
series of dispatch (DM) and execution (EM) states. For
DM, the states include pend(ing), prol(og), wrap(per),
epil(og), and done. DM=prol means the job has been
scheduled to a resource and the remote working directory is
in preparation. DM=wrap implies that GridWay is executing the wrapper, which in turn executes the application. DM=epil implies the job has finished running at the remote site and results are being transferred back to the GridWay server. Similarly, EM=pend implies the job is waiting in the queue for a resource, and the job is running when EM=actv. For a complete list of
message flags and their descriptions, see the documentation
in ref [8].
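To mimic the MPI-style synchronization point at the end of each assimilation step, the master process can simply poll gwps until no sub-job remains in flight. The sketch below assumes the column layout shown in Table 1 (USER JID DM EM NAME HOST); the exact gwps output format may differ across GridWay versions, so the parsing here is illustrative only.

```python
# Poll GridWay's gwps until all EnKF sub-jobs have reached DM=done.
import subprocess, time

def unfinished_jobs(job_name="enkf.jt"):
    out = subprocess.run(["gwps"], capture_output=True, text=True).stdout
    count = 0
    for line in out.splitlines()[1:]:         # skip header: USER JID DM EM NAME HOST
        fields = line.split()
        if len(fields) >= 5 and fields[4] == job_name and fields[2] != "done":
            count += 1                        # any DM other than 'done' is still in flight
    return count

while unfinished_jobs() > 0:
    time.sleep(30)                            # re-check every 30 seconds
print("all sub-jobs finished; collecting and post-processing outputs")
```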
We have demonstrated Grid-enabled EnKF runs using GridWay for the TIGRE environment. The jobs were chosen so that the runtime does not exceed half an hour. The simulation runs involved up to 20 jobs between the A&M and TTU sites, with TTU serving 10 licenses. For resource information, see Table 1.
One of the main advantages of Grid-enabled EnKF simulation is that both the resources and licenses are released back to the pool at the end of each simulation time step, unlike the MPI implementation, where licenses and nodes are locked until the completion of the entire simulation. However, the fact that each sub-job gets scheduled independently via GridWay could incur an additional time delay caused by waiting in the queue for execution in each simulation time step. Such delays are not expected
in the MPI implementation, where the node is blocked for processing sub-jobs (model-specific calculations) until the end of the simulation. There are two main scenarios for
comparing Grid and cluster computing approaches.
Scenario I: The cluster is heavily loaded. The average waiting time of a job requesting a large number of CPUs is usually longer than the waiting time of jobs requesting a single CPU. Therefore, the overall waiting time could be shorter in the Grid approach, which requests a single CPU for each sub-job many times, compared to the MPI implementation, which requests a large number of CPUs at a single time. It is apparent that Grid scheduling is beneficial especially when the cluster is heavily loaded and the requested number of CPUs for the MPI job is not readily available.
Scenario II: The cluster is relatively less loaded or largely available. Here, the MPI implementation appears favorable compared to Grid scheduling. However, the parallelizability of the EnKF application depends on the number of ECLIPSE licenses, and ideally, the number of licenses should be equal to the number of models in the ensemble. Therefore, if a single institution does not have a sufficient number of licenses, the cluster availability does not help as much as expected.
Since a collaborative environment such as TIGRE can address both the compute and software resource requirements of the EnKF application, the Grid-enabled approach is still advantageous over the conventional MPI implementation in either of the above scenarios.
5. CONCLUSIONS AND FUTURE WORK
TIGRE is a higher education Grid development project
and its purpose is to sustain and extend research and
educational opportunities across Texas. Within the energy
exploration application area, we have Grid-enabled the MPI
implementation of the ensemble Kalman filter data
assimilation methodology for reservoir characterization.
This task was accomplished by removing the MPI code for parallel processing and replacing it with single-processor jobs, one for each geological model in the ensemble. These single-processor jobs were scheduled across TIGRE via
the GridWay metascheduler. We have demonstrated that by pooling licenses across TIGRE sites, more geological models can be handled in parallel, and therefore conceivably better simulation accuracy can be achieved. This approach has several advantages over the MPI implementation, especially when a site-specific cluster is heavily loaded and/or the number of licenses required for the simulation exceeds those available at a single site.
As future work, it would be interesting to compare the runtime between the MPI and Grid implementations for the EnKF application. This effort could
shed light on quality of service (QoS) of Grid environments
in comparison with cluster computing.
Another aspect of interest in the near future would be
managing both compute and license resources to address
the job (or processor)-to-license ratio management.
6. OBSERVATIONS AND LESSONS LEARNED
The Grid-enabling efforts for the EnKF application have provided ample opportunities to gather insights on the visibility and promise of Grid computing environments for application development and support. The main issues are
industry standard data security and QoS comparable to
cluster computing.
Since the reservoir modeling research involves
proprietary data of the field, we had to invest substantial
efforts initially in educating the application researchers on
the ability of Grid services to support industry-standard data security through role- and privilege-based access using the X.509 standard.
With respect to QoS, application researchers expect cluster-level QoS from Grid environments. Also, there is a steep learning curve in Grid computing compared to conventional cluster computing. Since Grid computing is still an emerging technology that spans several administrative domains, it is still premature, especially in terms of the level of QoS, although it offers better data security standards compared to commodity clusters.
It is our observation that training and outreach programs
that compare and contrast the Grid and cluster computing
environments would be a suitable approach for enhancing
user participation in Grid computing. This approach also
helps users to match their applications with the capabilities Grids can offer.
In summary, our efforts through TIGRE in Grid-enabling
the EnKF data assimilation methodology showed
substantial promise in engaging Petroleum Engineering
researchers through intercampus collaborations. Efforts are
under way to involve more schools in this effort. These
efforts may result in increased collaborative research,
educational opportunities, and workforce development
through graduate/faculty research programs across TIGRE
Institutions.
7. ACKNOWLEDGMENTS
The authors acknowledge the State of Texas for supporting
the TIGRE project through the Texas Enterprise Fund, and
TIGRE Institutions for providing the mechanism, in which
the authors (Ravi Vadapalli, Taesung Kim, and Ping Luo)
are also participating. The authors thank the application
researchers Prof. Akhil Datta-Gupta of Texas A&M
University and Prof. Lloyd Heinze of Texas Tech
University for their discussions and interest to exploit the
TIGRE environment to extend opportunities in research and
development.
| pooling license;grid-enabling;ensemble kalman filter;and gridway;cyberinfrastructure development project;tigre grid computing environment;grid computing;hydrocarbon reservoir simulation;gridway metascheduler;enkf;datum assimilation methodology;high performance computing;tigre;energy exploration;tigre grid middleware;strategic application area;reservoir model |
train_C-44 | MSP: Multi-Sequence Positioning of Wireless Sensor Nodes∗ | Wireless Sensor Networks have been proposed for use in many location-dependent applications. Most of these need to identify the locations of wireless sensor nodes, a challenging task because of the severe constraints on cost, energy and effective range of sensor devices. To overcome limitations in existing solutions, we present a Multi-Sequence Positioning (MSP) method for large-scale stationary sensor node localization in outdoor environments. The novel idea behind MSP is to reconstruct and estimate two-dimensional location information for each sensor node by processing multiple one-dimensional node sequences, easily obtained through loosely guided event distribution. Starting from a basic MSP design, we propose four optimizations, which work together to increase the localization accuracy. We address several interesting issues, such as incomplete (partial) node sequences and sequence flip, found in the Mirage test-bed we built. We have evaluated the MSP system through theoretical analysis, extensive simulation as well as two physical systems (an indoor version with 46 MICAz motes and an outdoor version with 20 MICAz motes). This evaluation demonstrates that MSP can achieve an accuracy within one foot, requiring neither additional costly hardware on sensor nodes nor precise event distribution. It also provides a nice tradeoff between physical cost (anchors) and soft cost (events), while maintaining localization accuracy. | 1 Introduction
Although Wireless Sensor Networks (WSN) have shown
promising prospects in various applications [5], researchers
still face several challenges for massive deployment of such
networks. One of these is to identify the location of
individual sensor nodes in outdoor environments. Because of
unpredictable flow dynamics in airborne scenarios, it is not currently
feasible to localize sensor nodes during massive UAV-based
deployment. On the other hand, geometric information is
indispensable in these networks, since users need to know where
events of interest occur (e.g., the location of intruders or of a
bomb explosion).
Previous research on node localization falls into two
categories: range-based approaches and range-free approaches.
Range-based approaches [13, 17, 19, 24] compute per-node
location information iteratively or recursively based on
measured distances among target nodes and a few anchors which
precisely know their locations. These approaches generally
require costly hardware (e.g., GPS) and have limited
effective range due to energy constraints (e.g., ultrasound-based
TDOA [3, 17]). Although range-based solutions can be
suitably used in small-scale indoor environments, they are
considered less cost-effective for large-scale deployments. On the
other hand, range-free approaches [4, 8, 10, 13, 14, 15] do not
require accurate distance measurements, but localize the node
based on network connectivity (proximity) information.
Unfortunately, since wireless connectivity is highly influenced by the
environment and hardware calibration, existing solutions fail
to deliver encouraging empirical results, or require substantial
survey [2] and calibration [24] on a case-by-case basis.
Realizing the impracticality of existing solutions for the
large-scale outdoor environment, researchers have recently
proposed solutions (e.g., Spotlight [20] and Lighthouse [18])
for sensor node localization using the spatiotemporal
correlation of controlled events (i.e., inferring nodes' locations based
on the detection time of controlled events). These solutions
demonstrate that long range and high accuracy localization can
be achieved simultaneously with little additional cost at
sensor nodes. These benefits, however, come along with an
implicit assumption that the controlled events can be precisely
distributed to a specified location at a specified time. We argue
that precise event distribution is difficult to achieve, especially
at large scale when terrain is uneven, the event distribution
device is not well calibrated and its position is difficult to
maintain (e.g., the helicopter-mounted scenario in [20]).
To address these limitations in current approaches, in this
paper we present a multi-sequence positioning (MSP) method
for large-scale stationary sensor node localization, in
deployments where an event source has line-of-sight to all sensors.
The novel idea behind MSP is to estimate each sensor node's
two-dimensional location by processing multiple easy-to-get
one-dimensional node sequences (e.g., event detection order)
obtained through loosely-guided event distribution.
This design offers several benefits. First, compared to a
range-based approach, MSP does not require additional costly
hardware. It works using sensors typically used by sensor
network applications, such as light and acoustic sensors, both of
which we specifically consider in this work. Second, compared
to a range-free approach, MSP needs only a small number of
anchors (theoretically, as few as two), so high accuracy can be
achieved economically by introducing more events instead of
more anchors. And third, compared to Spotlight, MSP does not
require precise and sophisticated event distribution, an
advantage that significantly simplifies the system design and reduces
calibration cost.
This paper offers the following additional intellectual
contributions:
• We are the first to localize sensor nodes using the concept
of node sequence, an ordered list of sensor nodes, sorted
by the detection time of a disseminated event. We
demonstrate that making full use of the information embedded
in one-dimensional node sequences can significantly
improve localization accuracy. Interestingly, we discover
that repeated reprocessing of one-dimensional node
sequences can further increase localization accuracy.
• We propose a distribution-based location estimation
strategy that obtains the final location of sensor nodes using
the marginal probability of joint distribution among
adjacent nodes within the sequence. This new algorithm
outperforms the widely adopted Centroid estimation [4, 8].
• To the best of our knowledge, this is the first work to
improve the localization accuracy of nodes by adaptive
events. The generation of later events is guided by
localization results from previous events.
• We evaluate line-based MSP on our new Mirage test-bed,
and wave-based MSP in outdoor environments. Through
system implementation, we discover and address several
interesting issues such as partial sequence and sequence
flips. To reveal MSP performance at scale, we provide
analytic results as well as a complete simulation study.
All the simulation and implementation code is available
online at http://www.cs.umn.edu/∼zhong/MSP.
The rest of the paper is organized as follows. Section 2
briefly surveys the related work. Section 3 presents an
overview of the MSP localization system. In sections 4 and 5,
basic MSP and four advanced processing methods are
introduced. Section 6 describes how MSP can be applied in a wave
propagation scenario. Section 7 discusses several
implementation issues. Section 8 presents simulation results, and Section 9
reports an evaluation of MSP on the Mirage test-bed and an
outdoor test-bed. Section 10 concludes the paper.
2 Related Work
Many methods have been proposed to localize wireless
sensor devices in the open air. Most of these can be
classified into two categories: range-based and range-free
localization. Range-based localization systems, such as GPS [23],
Cricket [17], AHLoS [19], AOA [16], Robust
Quadrilaterals [13] and Sweeps [7], are based on fine-grained
point-topoint distance estimation or angle estimation to identify
pernode location. Constraints on the cost, energy and hardware
footprint of each sensor node make these range-based
methods undesirable for massive outdoor deployment. In addition,
ranging signals generated by sensor nodes have a very limited
effective range because of energy and form factor concerns.
For example, ultrasound signals usually effectively propagate
20-30 feet using an on-board transmitter [17]. Consequently,
these range-based solutions require an undesirably high
deployment density. Although methods based on the received signal strength indicator (RSSI) [2, 24] were once considered
an ideal low-cost solution, the irregularity of radio
propagation [26] seriously limits the accuracy of such systems. The
recently proposed RIPS localization system [11] superimposes
two RF waves together, creating a low-frequency envelope that
can be accurately measured. This ranging technique performs
very well as long as antennas are well oriented and
environmental factors such as multi-path effects and background noise
are sufficiently addressed.
Range-free methods don't need to estimate or measure
accurate distances or angles. Instead, anchors or controlled-event
distributions are used for node localization. Range-free
methods can be generally classified into two types: anchor-based
and anchor-free solutions.
• For anchor-based solutions such as Centroid [4], APIT
[8], SeRLoc [10], Gradient [13] , and APS [15], the main
idea is that the location of each node is estimated based on
the known locations of the anchor nodes. Different anchor
combinations narrow the areas in which the target nodes
can possibly be located. Anchor-based solutions normally
require a high density of anchor nodes so as to achieve
good accuracy. In practice, it is desirable to have as few
anchor nodes as possible so as to lower the system cost.
• Anchor-free solutions require no anchor nodes. Instead,
external event generators and data processing platforms
are used. The main idea is to correlate the event detection
time at a sensor node with the known space-time
relationship of controlled events at the generator so that detection
time-stamps can be mapped into the locations of sensors.
Spotlight [20] and Lighthouse [18] work in this fashion.
In Spotlight [20], the event distribution needs to be
precise in both time and space. Precise event distribution
is difficult to achieve without careful calibration,
especially when the event-generating devices require certain
mechanical maneuvers (e.g., the telescope mount used in
Spotlight). All these increase system cost and reduce
localization speed. StarDust [21], which works much faster,
uses label relaxation algorithms to match light spots
reflected by corner-cube retro-reflectors (CCR) with sensor
nodes using various constraints. Label relaxation
algorithms converge only when a sufficient number of robust
constraints are obtained. Due to the environmental impact
on RF connectivity constraints, however, StarDust is less
accurate than Spotlight.
In this paper, we propose a balanced solution that avoids
the limitations of both anchor-based and anchor-free solutions.
Unlike anchor-based solutions [4, 8], MSP allows a flexible
tradeoff between the physical cost (anchor nodes) and the soft
Figure 1. The MSP System Overview
cost (localization events). MSP uses only a small number of
anchors (theoretically, as few as two). Unlike anchor-free
solutions, MSP doesn't need to maintain rigid time-space
relationships while distributing events, which makes system design
simpler, more flexible and more robust to calibration errors.
3 System Overview
MSP works by extracting relative location information from
multiple simple one-dimensional orderings of nodes.
Figure 1(a) shows a layout of a sensor network with anchor nodes
and target nodes. Target nodes are defined as the nodes to be
localized. Briefly, the MSP system works as follows. First,
events are generated one at a time in the network area (e.g.,
ultrasound propagations from different locations, laser scans
with diverse angles). As each event propagates, as shown in
Figure 1(a), each node detects it at some particular time
instance. For a single event, we call the ordering of nodes, which
is based on the sequential detection of the event, a node
sequence. Each node sequence includes both the targets and the
anchors as shown in Figure 1(b). Second, a multi-sequence
processing algorithm helps to narrow the possible location of
each node to a small area (Figure 1(c)). Finally, a
distribution-based estimation method estimates the exact location of each
sensor node, as shown in Figure 1(d).
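As a concrete illustration of the first two steps, the following sketch (in Python; the function name and the toy timestamps are ours, not from the paper) shows how one event's node sequence is obtained once every node has reported its detection time.

# A minimal sketch: a node sequence is simply the list of node ids sorted by
# the time at which each node detected the event (toy values, for illustration).
def build_node_sequence(detections):
    # detections: dict mapping node id -> detection timestamp of this event
    return [node for node, _ in sorted(detections.items(), key=lambda kv: kv[1])]

# For an event like event 1 in Figure 1, anchors 'A', 'B' and targets '1'..'5'
# might yield a sequence such as ['1', 'A', '5', '3', 'B', '2', '4'].
sequence = build_node_sequence({'1': 0.12, 'A': 0.20, '5': 0.31, '3': 0.35,
                                'B': 0.40, '2': 0.46, '4': 0.51})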
Figure 1 shows that the node sequences can be obtained
much more economically than accurate pair-wise distance
measurements between target nodes and anchor nodes via
ranging methods. In addition, this system does not require a rigid
time-space relationship for the localization events, which is
critical but hard to achieve in controlled event distribution
scenarios (e.g., Spotlight [20]).
For the sake of clarity in presentation, we present our system
in two cases:
• Ideal Case, in which all the node sequences obtained
from the network are complete and correct, and nodes are
time-synchronized [12, 9].
• Realistic Deployment, in which (i) node sequences can
be partial (incomplete), (ii) elements in sequences could
flip (i.e., the order obtained is reversed from reality), and
(iii) nodes are not time-synchronized.
To introduce the MSP algorithm, we first consider a simple
straight-line scan scenario. Then, we describe how to
implement straight-line scans as well as other event types, such as
sound wave propagation.
Figure 2. Obtaining Multiple Node Sequences
4 Basic MSP
Let us consider a sensor network with N target nodes and
M anchor nodes randomly deployed in an area of size S. The
top-level idea for basic MSP is to split the whole sensor
network area into small pieces by processing node sequences.
Because the exact locations of all the anchors in a node sequence
are known, all the nodes in this sequence can be divided into
O(M +1) parts in the area.
In Figure 2, we use numbered circles to denote target nodes
and numbered hexagons to denote anchor nodes. Basic MSP
uses two straight lines to scan the area from different directions,
treating each scan as an event. All the nodes react to the event
sequentially generating two node sequences. For vertical scan
1, the node sequence is (8,1,5,A,6,C,4,3,7,2,B,9), as shown
outside the right boundary of the area in Figure 2; for
horizontal scan 2, the node sequence is (3,1,C,5,9,2,A,4,6,B,7,8),
as shown under the bottom boundary of the area in Figure 2.
Since the locations of the anchor nodes are available, the
anchor nodes in the two node sequences actually split the area
vertically and horizontally into 16 parts, as shown in Figure 2.
To extend this process, suppose we have M anchor nodes and
perform d scans from different angles, obtaining d node
sequences and dividing the area into many small parts.
Obviously, the number of parts is a function of the number of
anchors M, the number of scans d, and the anchors' locations, as well as the slope k of each scan line. According to the pie-cutting theorem [22], the area can be divided into O(M^2 d^2) parts. When
M and d are appropriately large, the polygon for each target
node may become sufficiently small so that accurate
estimation can be achieved. We emphasize that accuracy is affected
not only by the number of anchors M, but also by the number
of events d. In other words, MSP provides a tradeoff between
the physical cost of anchors and the soft cost of events.
Algorithm 1 depicts the computing architecture of basic MSP. Each node sequence is processed within lines 1 to 8. For each node, GetBoundaries() in line 5 searches for the predecessor and successor anchors in the sequence so as to determine the boundaries of this node. Then, in line 6, UpdateMap() shrinks the location area of this node according to the newly obtained boundaries. After processing all sequences, CentroidEstimation() (line 11) sets the center of gravity of the final polygon as the estimated location of the target node.
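To make GetBoundaries() and UpdateMap() concrete, the sketch below (our own simplified one-dimensional view of a single scan; names are illustrative) computes, for one node sequence, the interval between the neighboring anchors that bounds each target along the scan direction.

def basic_bounds(sequence, anchor_proj, field_min, field_max):
    # sequence: node ids ordered by detection time for one scan
    # anchor_proj: anchor id -> its known projection onto the scan direction
    bounds = {}
    last_anchor = field_min     # lower boundary seen so far
    pending = []                # targets still waiting for their upper boundary
    for node in sequence:
        if node in anchor_proj:                 # an anchor closes the interval
            for target in pending:
                bounds[target] = (last_anchor, anchor_proj[node])
            pending = []
            last_anchor = anchor_proj[node]
        else:
            pending.append(node)
    for target in pending:                      # targets after the last anchor
        bounds[target] = (last_anchor, field_max)
    return bounds

Intersecting these per-scan intervals over all d scans is what UpdateMap() accumulates, and the centroid of the resulting polygon is what CentroidEstimation() reports.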
Basic MSP only makes use of the order information
between a target node and the anchor nodes in each sequence.
Actually, we can extract much more location information from
Algorithm 1 Basic MSP Process
Output: The estimated location of each node.
1: repeat
2: GetOneUnprocessedSequence();
3: repeat
4: GetOneNodeFromSequenceInOrder();
5: GetBoundaries();
6: UpdateMap();
7: until All the target nodes are updated;
8: until All the node sequences are processed;
9: repeat
10: GetOneUnestimatedNode();
11: CentroidEstimation();
12: until All the target nodes are estimated;
each sequence. Section 5 will introduce advanced MSP, in
which four novel optimizations are proposed to improve the
performance of MSP significantly.
5 Advanced MSP
Four improvements to basic MSP are proposed in this
section. The first three improvements do not need additional
sensing and communication in the networks but require only
slightly more off-line computation. The objective of all these
improvements is to make full use of the information embedded
in the node sequences. The results we have obtained
empirically indicate that the implementation of the first two methods
can dramatically reduce the localization error, and that the third
and fourth methods are helpful for some system deployments.
5.1 Sequence-Based MSP
As shown in Figure 2, each scan line, together with the M anchors, splits the whole area into M + 1 parts. Each target node falls into one polygon shaped by the scan lines. We noted that in basic MSP, only the anchors are used to narrow down the polygon of each target node, but actually there is more information in the node sequence that we can make use of.
Let's first look at a simple example, shown in Figure 3. The previous scans narrow the locations of target node 1 and node 2 into the two dashed rectangles shown in the left part of Figure 3. Then a new scan generates a new sequence (1, 2). With knowledge of the scan's direction, it is easy to tell that node 1 is located to the left of node 2. Thus, we can further narrow the location area of node 2 by eliminating the shaded part of node 2's rectangle. This is because node 2 is located to the right of node 1, while the shaded area is outside the lower boundary of node 1. Similarly, the location area of node 1 can be narrowed by eliminating the shaded part beyond node 2's upper (right) boundary.
We call this procedure sequence-based MSP which means that
the whole node sequence needs to be processed node by node
in order. Specifically, sequence-based MSP follows this exact
processing rule:
Figure 3. Rule Illustration in Sequence Based MSP
Algorithm 2 Sequence-Based MSP Process
Output: The estimated location of each node.
1: repeat
2: GetOneUnprocessedSequence();
3: repeat
4: GetOneNodeByIncreasingOrder();
5: ComputeLowbound();
6: UpdateMap();
7: until The last target node in the sequence;
8: repeat
9: GetOneNodeByDecreasingOrder();
10: ComputeUpbound();
11: UpdateMap();
12: until The last target node in the sequence;
13: until All the node sequences are processed;
14: repeat
15: GetOneUnestimatedNode();
16: CentroidEstimation();
17: until All the target nodes are estimated;
Elimination Rule: Along a scanning direction, the lower
boundary of the successor"s area must be equal to or larger
than the lower boundary of the predecessor"s area, and the
upper boundary of the predecessor"s area must be equal to or
smaller than the upper boundary of the successor"s area.
In the case of Figure 3, node 2 is the successor of node 1,
and node 1 is the predecessor of node 2. According to the
elimination rule, node 2's lower boundary cannot be smaller than that of node 1, and node 1's upper boundary cannot exceed node 2's upper boundary.
Algorithm 2 illustrates the pseudocode of sequence-based MSP. Each node sequence is processed within lines 3 to 13. The sequence processing contains two steps:
Step 1 (lines 3 to 7): Compute and modify the lower boundary of each target node, in increasing order of the node sequence. Each node's lower boundary is determined by the lower boundary of its predecessor node in the sequence; thus the processing must start from the first node in the sequence and proceed in increasing order. Then update the map according to the new lower boundary.
Step 2 (lines 8 to 12): Compute and modify the upper boundary of each node, in decreasing order of the node sequence. Each node's upper boundary is determined by the upper boundary of its successor node in the sequence; thus the processing must start from the last node in the sequence and proceed in decreasing order. Then update the map according to the new upper boundary.
After processing all the sequences, for each node, a polygon
bounding its possible location has been found. Then,
center-of-gravity-based estimation is applied to compute the exact
location of each node (line 14 to 17).
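The two passes of Algorithm 2 reduce, along each scan direction, to the interval-tightening sketch below (again a simplified 1-D view with illustrative names; anchors simply carry degenerate intervals with low equal to high).

def tighten_bounds(sequence, low, high):
    # Step 1 (forward): a node's lower boundary may not be smaller than the
    # lower boundary of its predecessor in the sequence.
    for prev, cur in zip(sequence, sequence[1:]):
        low[cur] = max(low[cur], low[prev])
    # Step 2 (backward): a node's upper boundary may not exceed the upper
    # boundary of its successor in the sequence.
    for i in range(len(sequence) - 2, -1, -1):
        high[sequence[i]] = min(high[sequence[i]], high[sequence[i + 1]])

# Example with anchors A (x = 2) and B (x = 8) in a field of width 10:
low  = {'A': 2.0, 'B': 8.0, 'n1': 0.0, 'n2': 0.0}
high = {'A': 2.0, 'B': 8.0, 'n1': 10.0, 'n2': 10.0}
tighten_bounds(['n1', 'A', 'n2', 'B'], low, high)
# n1 now lies in [0, 2] and n2 in [2, 8]; unlike basic MSP, boundaries that
# were already narrowed for the targets themselves propagate as well.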
An example of this process is shown in Figure 4. The third
scan generates the node sequence (B,9,2,7,4,6,3,8,C,A,5,1). In
addition to the anchor split lines, because nodes 4 and 7 come
after node 2 in the sequence, node 4's and node 7's polygons can be narrowed according to node 2's lower boundary (the lower right-shaded area); similarly, the shaded area in node 2's rectangle can be eliminated, since this part is beyond node 7's upper boundary, indicated by the dotted line. A similar elimination can be performed for node 3, as shown in the figure.
Figure 4. Sequence-Based MSP Example
Figure 5. Iterative MSP: Reprocessing Scan 1
From above, we can see that the sequence-based MSP
makes use of the information embedded in every sequential
node pair in the node sequence. The polygon boundaries of
the target nodes obtained earlier can be used to further split other target nodes' areas. Our evaluation in Sections 8 and 9
shows that sequence-based MSP considerably enhances system
accuracy.
5.2 Iterative MSP
Sequence-based MSP is preferable to basic MSP because it
extracts more information from the node sequence. In fact,
further useful information still remains! In sequence-based MSP,
a sequence processed later benefits from information produced
by previously processed sequences (e.g., the third scan in
Figure 5). However, the first several sequences can hardly benefit
from other scans in this way. Inspired by this phenomenon,
we propose iterative MSP. The basic idea of iterative MSP is
to process all the sequences iteratively several times so that the
processing of each single sequence can benefit from the results
of other sequences.
To illustrate the idea more clearly, Figure 4 shows the results
of three scans that have provided three sequences. Now if we
process the sequence (8,1,5,A,6,C,4,3,7,2,B,9) obtained from
scan 1 again, we can make progress, as shown in Figure 5.
The reprocessing of the node sequence 1 provides information
in the way an additional vertical scan would. From
sequence-based MSP, we know that the upper boundaries of nodes 3 and
4 along the scan direction must not extend beyond the upper
boundary of node 7, therefore the grid parts can be eliminated
Figure 6. Example of Joint Distribution Estimation: (a) Center of Gravity; (b) Joint Distribution
Figure 7. Idea of DBE MSP for Each Node
for nodes 3 and 4, respectively, as shown in Figure 5.
From this example, we can see that iterative processing of the
sequence could help further shrink the polygon of each target
node, and thus enhance the accuracy of the system.
The implementation of iterative MSP is straightforward:
process all the sequences multiple times using sequence-based
MSP. Like sequence-based MSP, iterative MSP introduces no
additional event cost. In other words, reprocessing does not
actually repeat the scan physically. Evaluation results in
Section 8 will show that iterative MSP contributes noticeably to
a lower localization error. Empirical results show that after 5
iterations, improvements become less significant. In summary,
iterative processing can achieve better performance with only
a small computation overhead.
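The implementation is literally a loop over the per-sequence processing; a sketch (with process_sequence standing for, e.g., the interval-tightening sketch shown earlier in Section 5.1) is:

def iterative_msp(sequences, process_sequence, iterations=5):
    # Reprocess every node sequence several times so that earlier sequences can
    # exploit the boundaries produced by later ones; about 5 iterations suffice
    # empirically, and no new physical events are generated.
    for _ in range(iterations):
        for seq in sequences:
            process_sequence(seq)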
5.3 Distribution-Based Estimation
After determining the location area polygon for each node,
estimation is needed for a final decision. Previous research
mostly applied the Center of Gravity (COG) method [4, 8, 10], which minimizes average error. If every node is
independent of all others, COG is the statistically best solution. In
MSP, however, each node may not be independent. For
example, two neighboring nodes in a certain sequence could have
overlapping polygon areas. In this case, if the marginal
probability of joint distribution is used for estimation, better
statistical results are achieved.
Figure 6 shows an example in which node 1 and node 2 are
located in the same polygon. If COG is used, both nodes are
localized at the same position (Figure 6(a)). However, the node
sequences obtained from two scans indicate that node 1 should
be to the left of and above node 2, as shown in Figure 6(b).
The high-level idea of distribution-based estimation
proposed for MSP, which we call DBE MSP, is illustrated in
Figure 7. The distributions of each node under the ith scan (for the
ith node sequence) are estimated in node.vmap[i], which is a
data structure for remembering the marginal distribution over
scan i. Then all the vmaps are combined to get a single map
and weighted estimation is used to obtain the final location.
For each scan, all the nodes are sorted according to the gap,
which is the diameter of the polygon along the direction of the
scan, to produce a second, gap-based node sequence. Then,
the estimation starts from the node with the smallest gap. This
is because it is statistically more accurate to assume a uniform
distribution of the node with smaller gap. For each node
processed in order from the gap-based node sequence, either if
Figure 8. Four Cases in DBE Process (alone: uniformly distributed; predecessor exists: conditional distribution based on the predecessor's area; successor exists: conditional distribution based on the successor's area; both exist: conditional distribution based on both)
no neighbor node in the original event-based node sequence
shares an overlapping area, or if the neighbors have not been
processed due to bigger gaps, a uniform distribution Uniform()
is applied to this isolated node (the Alone case in Figure 8).
If the distribution of its neighbors sharing overlapped areas has
been processed, we calculate the joint distribution for the node.
As shown in Figure 8, there are three possible cases
depending on whether the distribution of the overlapping predecessor
and/or successor nodes have/has already been estimated.
The strategy of starting the estimation from the most accurate node (the one with the smallest gap) reduces estimation error propagation. The results in the evaluation section indicate
that applying distribution-based estimation could give
statistically better results.
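One way to realize the final combination and estimation step is sketched below; treating the per-scan vmaps as independent and multiplying them element-wise is an assumption of this sketch, not a detail given in the paper, and the helper names are ours.

import numpy as np

def combine_vmaps(vmaps):
    # vmaps: list of equally shaped 2-D float arrays, one marginal distribution
    # per scan, each restricted to the node's polygon (zero elsewhere).
    combined = np.ones_like(vmaps[0])
    for vm in vmaps:
        combined = combined * vm          # independence assumption
    total = combined.sum()
    return combined / total if total > 0 else combined

def weighted_estimate(vmap, xs, ys):
    # Probability-weighted location; xs, ys hold the coordinates of each cell.
    return float((vmap * xs).sum()), float((vmap * ys).sum())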
5.4 Adaptive MSP
So far, all the enhancements to basic MSP focus on
improving the multi-sequence processing algorithm given a fixed set
of scan directions. All these enhancements require only more
computing time without any overhead to the sensor nodes.
Obviously, it is possible to have some choice and optimization on
how events are generated. For example, in military situations,
artillery or rocket-launched mini-ultrasound bombs can be used
for event generation at some selected locations. In adaptive
MSP, we carefully generate each new localization event so as
to maximize the contribution of the new event to the refinement
of localization, based on feedback from previous events.
Figure 9 depicts the basic architecture of adaptive MSP.
Through previous localization events, the whole map has been
partitioned into many small location areas. The idea of
adaptive MSP is to generate the next localization event to achieve
best-effort elimination, which ideally could shrink the location
area of each individual node as much as possible.
We use a weighted voting mechanism to evaluate candidate
localization events. Every node wants the next event to split its
area evenly, which would shrink the area fast. Therefore, every
node votes for the parameters of the next event (e.g., the scan
angle k of the straight-line scan). Since the area map is
maintained centrally, the vote is virtually done and there is no need
for the real sensor nodes to participate in it. After gathering all
the voting results, the event parameters with the most votes win
the election. There are two factors that determine the weight of
each vote:
• The vote for each candidate event is weighted according
to the diameter D of the node's location area. Nodes with
bigger location areas speak louder in the voting, because
Figure 9. Basic Architecture of Adaptive MSP
Figure 10. Candidate Slopes for Node 3 at Anchor 1
overall system error is reduced mostly by splitting the
larger areas.
• The vote for each candidate event is also weighted
according to its elimination efficiency for a location area, which
is defined as how equally in size (or in diameter) an event
can cut an area. In other words, an optimal scan event
cuts an area in the middle, since this cut shrinks the area
quickly and thus reduces localization uncertainty quickly.
Combining the above two aspects, the weight for each vote
is computed according to the following equation (1):
$Weight(k_i^j) = f(D_i, \triangle(k_i^j, k_i^{opt}))$    (1)
k_i^j is node i's jth supporting parameter for the next event generation; D_i is the diameter of node i's location area; and △(k_i^j, k_i^{opt}) is the distance between k_i^j and the optimal parameter k_i^{opt} for node i, which should be defined to fit the specific application.
Figure 10 presents an example of node 3's voting for the slopes of the next straight-line scan. In the system, there are a fixed number of candidate slopes for each scan (e.g., k^1, k^2, k^3, k^4, ...). The location area of target node 3 is shown in the figure. The candidate events k_3^1, k_3^2, k_3^3, k_3^4, k_3^5, k_3^6 are evaluated according to their effectiveness compared to the optimal ideal event, which is shown as a dotted line, with appropriate
weights computed according to equation (1). For this
specific example, as is illustrated in the right part of Figure 10,
f(D_i, △(k_i^j, k_i^{opt})) is defined as the following equation (2):
$Weight(k_i^j) = f(D_i, \triangle(k_i^j, k_i^{opt})) = D_i \cdot \frac{S_{small}}{S_{large}}$    (2)
S_small and S_large are the sizes of the smaller and larger parts of the area cut by the candidate line, respectively. In this case, node 3 votes 0 for the candidate lines that do not cross its area, since S_small = 0.
We show later that adaptive MSP improves localization
accuracy in WSNs with irregularly shaped deployment areas.
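Putting equations (1) and (2) together, the event election can be sketched as follows; how a candidate line cuts a node's polygon into S_small and S_large is left to a geometry routine supplied by the caller, and the names below are ours.

def vote_weight(diameter, s_small, s_large):
    # Equation (2): a node's vote is D_i * S_small / S_large, and zero when the
    # candidate event does not cross its area (S_small == 0).
    return 0.0 if s_small == 0 else diameter * s_small / s_large

def elect_next_event(candidates, cut_geometry, nodes):
    # cut_geometry(node, candidate) -> (diameter, s_small, s_large) for that node.
    best, best_votes = None, -1.0
    for cand in candidates:
        votes = sum(vote_weight(*cut_geometry(n, cand)) for n in nodes)
        if votes > best_votes:
            best, best_votes = cand, votes
    return best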
5.5 Overhead and MSP Complexity Analysis
This section provides a complexity analysis of the MSP
design. We emphasize that MSP adopts an asymmetric design in
which sensor nodes need only to detect and report the events.
They are blissfully oblivious to the processing methods
proposed in previous sections. In this section, we analyze the
computational cost on the node sequence processing side, where
resources are plentiful.
According to Algorithm 1, the computational complexity of
Basic MSP is O(d · N · S), and the storage space required is
O(N · S), where d is the number of events, N is the number of
target nodes, and S is the area size.
According to Algorithm 2, the computational complexity of
both sequence-based MSP and iterative MSP is O(c · d · N · S), where c is the number of iterations (c = 1 for sequence-based MSP), and the storage space required is O(N · S). Both the
computational complexity and storage space are equal within a
constant factor to those of basic MSP.
The computational complexity of the distribution-based
estimation (DBE MSP) is greater. The major overhead comes
from the computation of joint distributions when both
predecessor and successor nodes exist. In order to compute the marginal probability, MSP needs to enumerate the locations of the predecessor node and the successor node. For example, if node A has predecessor node B and successor node C, then the marginal probability P_A(x,y) of node A's being at location (x,y) is:
$P_A(x,y) = \sum_i \sum_j \sum_m \sum_n \frac{1}{N_{B,A,C}} \cdot P_B(i,j) \cdot P_C(m,n)$    (3)
N_{B,A,C} is the number of valid locations for A satisfying the sequence (B, A, C) when B is at (i,j) and C is at (m,n); P_B(i,j) is the probability of node B's being located at (i,j); and P_C(m,n) is the probability of node C's being located at (m,n). A naive algorithm to compute equation (3) has complexity O(d · N · S^3). However, since the marginal probability comes from only one dimension along the scanning direction (e.g., a line), the complexity can be reduced to O(d · N · S^1.5) after algorithm optimization. In addition, the final location areas for every node are much smaller than the original field S; therefore, in practice, DBE MSP can be computed much faster than O(d · N · S^1.5).
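A direct, naive reading of equation (3) corresponds to the following sketch over discretized location cells (our own helper names); the optimized version exploits the fact that the ordering constraint acts only along the scan direction.

def dbe_marginal(cells_A, dist_B, dist_C, precedes):
    # cells_A: candidate cells for node A; dist_B, dist_C: cell -> probability
    # for predecessor B and successor C; precedes(p, q): True if p comes before
    # q along the scan direction.
    p_A = {a: 0.0 for a in cells_A}
    for b, pb in dist_B.items():
        for c, pc in dist_C.items():
            valid = [a for a in cells_A if precedes(b, a) and precedes(a, c)]
            if not valid:
                continue
            share = pb * pc / len(valid)          # the 1 / N_{B,A,C} factor
            for a in valid:
                p_A[a] += share
    total = sum(p_A.values())
    return {a: v / total for a, v in p_A.items()} if total > 0 else p_A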
6 Wave Propagation Example
So far, the description of MSP has been solely in the
context of straight-line scan. However, we note that MSP is
conceptually independent of how the event is propagated as long
as node sequences can be obtained. Clearly, we can also
support wave-propagation-based events (e.g., ultrasound
propagation, air blast propagation), which are polar coordinate
equivalences of the line scans in the Cartesian coordinate system.
This section illustrates the effects of MSP's implementation in
the wave propagation-based situation. For easy modelling, we
have made the following assumptions:
• The wave propagates uniformly in all directions,
therefore the propagation has a circular frontier surface. Since
MSP does not rely on an accurate space-time relationship,
a certain distortion in wave propagation is tolerable. If any
directional wave is used, the propagation frontier surface
can be modified accordingly.
Figure 11. Example of Wave Propagation Situation
• Under the situation of line-of-sight, we allow obstacles to
reflect or deflect the wave. Reflection and deflection are
not problems because each node reacts only to the first
detected event. Those reflected or deflected waves come
later than the line-of-sight waves. The only thing the
system needs to maintain is an appropriate time interval
between two successive localization events.
• We assume that background noise exists, and therefore we
run a band-pass filter to listen to a particular wave
frequency. This reduces the chances of false detection.
The parameter that affects the localization event generation
here is the source location of the event. The different
distances between each node and the event source determine the
rank of each node in the node sequence. Using the node
sequences, the MSP algorithm divides the whole area into many
non-rectangular areas as shown in Figure 11. In this figure,
the stars represent two previous event sources. The previous
two propagations split the whole map into many areas by those
dashed circles that pass one of the anchors. Each node is
located in one of the small areas. Since sequence-based MSP,
iterative MSP and DBE MSP make no assumptions about the
type of localization events and the shape of the area, all three
optimization algorithms can be applied for the wave
propagation scenario.
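On the simulation side, generating the node sequence for a wave event is straightforward, since detection order follows distance from the source (a sketch with illustrative names):

import math

def wave_sequence(node_positions, source):
    # node_positions: node id -> (x, y); source: (x, y) of the event origin.
    # Nodes detect the wavefront in order of their distance from the source.
    return sorted(node_positions,
                  key=lambda n: math.hypot(node_positions[n][0] - source[0],
                                           node_positions[n][1] - source[1]))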
However, adaptive MSP needs more explanation. Figure 11
illustrates an example of nodes' voting for the next event source
locations. Unlike the straight-line scan, the critical parameter
now is the location of the event source, because the distance
between each node and the event source determines the rank of
the node in the sequence. In Figure 11, if the next event breaks
out along/near the solid thick gray line, which perpendicularly
bisects the solid dark line between anchor C and the center of
gravity of node 9's area (the gray area), the wave would reach anchor C and the center of gravity of node 9's area at roughly the same time, which would relatively equally divide node 9's
area. Therefore, node 9 prefers to vote for the positions around
the thick gray line.
7 Practical Deployment Issues
For the sake of presentation, until now we have described
MSP in an ideal case where a complete node sequence can be
obtained with accurate time synchronization. In this section
we describe how to make MSP work well under more realistic
conditions.
7.1 Incomplete Node Sequence
For diverse reasons, such as sensor malfunction or natural
obstacles, the nodes in the network could fail to detect
localization events. In such cases, the node sequence will not be
complete. This problem has two versions:
• Anchor nodes are missing in the node sequence
If some anchor nodes fail to respond to the localization
events, then the system has fewer anchors. In this case,
the solution is to generate more events to compensate for
the loss of anchors so as to achieve the desired accuracy
requirements.
• Target nodes are missing in the node sequence
There are two consequences when target nodes are
missing. First, if these nodes are still useful to sensing
applications, they need to use other backup localization
approaches (e.g., Centroid) to localize themselves with help
from their neighbors who have already learned their own
locations from MSP. Secondly, since in advanced MSP
each node in the sequence may contribute to the overall
system accuracy, dropping of target nodes from sequences
could also reduce the accuracy of the localization. Thus,
proper compensation procedures such as adding more
localization events need to be launched.
7.2 Localization without Time Synchronization
In a sensor network without time synchronization support,
nodes cannot be ordered into a sequence using timestamps. For
such cases, we propose a listen-detect-assemble-report
protocol, which is able to function independently without time
synchronization.
listen-detect-assemble-report requires that every node
listens to the channel for the node sequence transmitted from its
neighbors. Then, when the node detects the localization event,
it assembles itself into the newest node sequence it has heard
and reports the updated sequence to other nodes. Figure 12
(a) illustrates an example for the listen-detect-assemble-report
protocol. For simplicity, in this figure we did not differentiate
the target nodes from anchor nodes. A solid line between two
nodes stands for a communication link. Suppose a straight line
scans from left to right. Node 1 detects the event, and then it
broadcasts the sequence (1) into the network. Node 2 and node
3 receive this sequence. When node 2 detects the event, node
2 adds itself into the sequence and broadcasts (1, 2). The
sequence propagates in the same direction as the scan, as shown
in Figure 12 (a). Finally, node 6 obtains a complete sequence
(1,2,3,5,7,4,6).
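The per-node behavior of listen-detect-assemble-report can be summarized by the sketch below; radio primitives and message formats are abstracted away, and the class is illustrative rather than the authors' implementation.

class SequenceAssembler:
    def __init__(self, node_id):
        self.node_id = node_id
        self.best_heard = []                 # newest (longest) sequence overheard

    def on_receive(self, sequence):
        # "listen": remember the most complete sequence heard so far.
        if len(sequence) > len(self.best_heard):
            self.best_heard = list(sequence)

    def on_event_detected(self, broadcast):
        # "detect, assemble, report": append ourselves and rebroadcast.
        assembled = self.best_heard + [self.node_id]
        broadcast(assembled)
        return assembled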
In the case of ultrasound propagation, because the event
propagation speed is much slower than that of radio, the
listen-detect-assemble-report protocol can work well in a situation
where the node density is not very high. For instance, if the
distance between two nodes along one direction is 10 meters,
the 340m/s sound needs 29.4ms to propagate from one node
to the other. Since the typical communication data rate in a WSN is 250 kbps (e.g., CC2420 [1]), it takes only about 2-3 ms to transmit an assembled packet over one hop.
One problem that may occur using the
listen-detect-assemble-report protocol is multiple partial sequences, as shown in Figure 12 (b). Two separate paths in the network may result in two sequences that cannot be further combined. In
this case, since the two sequences can only be processed as
separate sequences, some order information is lost. Therefore the
Figure 12. Node Sequence without Time Synchronization
accuracy of the system would decrease.
The other problem is the sequence flip problem. As shown
in Figure 12 (c), because node 2 and node 3 are too close to
each other along the scan direction, they detect the scan
almost simultaneously. Due to uncertainties such as media
access delay, two messages could be transmitted out of order.
For example, if node 3 sends out its report first, then the order
of node 2 and node 3 gets flipped in the final node sequence.
The sequence flip problem would appear even in an accurately
synchronized system due to random jitter in node detection if
an event arrives at multiple nodes almost simultaneously. A
method addressing the sequence flip is presented in the next
section.
7.3 Sequence Flip and Protection Band
Sequence flip problems can be solved with and without
time synchronization. We first consider the scenario with time synchronization. Existing solutions for time
synchronization [12, 6] can easily achieve sub-millisecond-level
accuracy. For example, FTSP [12] achieves 16.9µs (microsecond)
average error for a two-node single-hop case. Therefore, we
can comfortably assume that the network is synchronized with
maximum error of 1000µs. However, when multiple nodes are
located very near to each other along the event propagation
direction, even when time synchronization with less than 1ms
error is achieved in the network, sequence flip may still occur.
For example, in the sound wave propagation case, if two nodes
are less than 0.34 meters apart, the difference between their
detection timestamps would be smaller than 1 millisecond.
We find that sequence flip could not only damage system
accuracy, but also might cause a fatal error in the MSP algorithm.
Figure 13 illustrates both detrimental results. In the left side of
Figure 13(a), suppose node 1 and node 2 are so close to each
other that it takes less than 0.5ms for the localization event to
propagate from node 1 to node 2. Now unfortunately, the node
sequence is mistaken to be (2,1). So node 1 is expected to be
located to the right of node 2, such as at the position of the
dashed node 1. According to the elimination rule in
sequence-based MSP, the left part of node 1's area is cut off as shown in
the right part of Figure 13(a). This is a potentially fatal error,
because node 1 is actually located in the dashed area which has
been eliminated by mistake. During the subsequent
eliminations introduced by other events, node 1's area might be cut off
completely, thus node 1 could consequently be erased from the
map! Even in cases where node 1 still survives, its area actually
does not cover its real location.
Figure 13. Sequence Flip and Protection Band
Another problem is not fatal but lowers the localization
accuracy. If we get the right node sequence (1,2), node 1 has a
new upper boundary which can narrow the area of node 1 as in
Figure 3. Due to the sequence flip, node 1 loses this new upper
boundary.
In order to address the sequence flip problem, especially to
prevent nodes from being erased from the map, we propose
a protection band compensation approach. The basic idea of
protection band is to extend the boundary of the location area
a little bit so as to make sure that the node will never be erased
from the map. This solution is based on the fact that nodes
have a high probability of flipping in the sequence if they are
near to each other along the event propagation direction. If
two nodes are apart from each other more than some distance,
say, B, they rarely flip unless the nodes are faulty. The width of the protection band B is largely determined by the maximum
error in system time synchronization and the localization event
propagation speed.
Figure 13(b) presents the application of the protection band.
Instead of eliminating the dashed part in Figure 13(a) for node
1, the new lower boundary of node 1 is set by shifting the
original lower boundary of node 2 to the left by distance B. In this
case, the location area still covers node 1 and protects it from
being erased. In a practical implementation, supposing that the
ultrasound event is used, if the maximum error of system time
synchronization is 1ms, two nodes might flip with high
probability if the timestamp difference between the two nodes is
smaller than or equal to 1ms. Accordingly, we set the
protection band B as 0.34m (the distance sound can propagate within
1 millisecond). By adding the protection band, we reduce the
chances of fatal errors, although at the cost of localization
accuracy. Empirical results obtained from our physical test-bed
verified this conclusion.
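In the elimination pass, the protection band simply relaxes every boundary inherited from a neighbor by B; the sketch below applies it symmetrically to both boundaries, which is our extrapolation of the lower-boundary case described above (names are illustrative).

def tighten_with_protection(sequence, low, high, band):
    # Forward pass: inherit the predecessor's lower boundary, shifted back by B.
    for prev, cur in zip(sequence, sequence[1:]):
        low[cur] = max(low[cur], low[prev] - band)
    # Backward pass: inherit the successor's upper boundary, extended by B.
    for i in range(len(sequence) - 2, -1, -1):
        high[sequence[i]] = min(high[sequence[i]], high[sequence[i + 1]] + band)

# For ultrasound events and at most 1 ms synchronization error, band = 0.34 (meters).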
In the case of using the listen-detect-assemble-report
protocol, the only change we need to make is to select the protection
band according to the maximum delay uncertainty introduced
by the MAC operation and the event propagation speed. To
bound MAC delay at the node side, a node can drop its report
message if it experiences excessive MAC delay. This converts
the sequence flip problem to the incomplete sequence problem,
which can be more easily addressed by the method proposed in
Section 7.1.
8 Simulation Evaluation
Our evaluation of MSP was conducted on three platforms:
(i) an indoor system with 46 MICAz motes using straight-line
scan, (ii) an outdoor system with 20 MICAz motes using sound
wave propagation, and (iii) an extensive simulation under
various kinds of physical settings.
In order to understand the behavior of MSP under
numerous settings, we start our evaluation with simulations.
Then, we implemented basic MSP and all the advanced
MSP methods for the case where time synchronization is
available in the network. The simulation and
implementation details are omitted in this paper due to space
constraints, but related documents [25] are provided online at
http://www.cs.umn.edu/∼zhong/MSP. A full implementation and evaluation of the system without time synchronization remain to be completed in the near future.
In simulation, we assume all the node sequences are perfect
so as to reveal the performance of MSP achievable in the
absence of incomplete node sequences or sequence flips. In our
simulations, all the anchor nodes and target nodes are assumed
to be deployed uniformly. The mean and maximum errors are
averaged over 50 runs to obtain high confidence. For legibility
reasons, we do not plot the confidence intervals in this paper.
All the simulations are based on the straight-line scan example.
We implement three scan strategies:
• Random Scan: The slope of the scan line is randomly
chosen at each time.
• Regular Scan: The slope is predetermined to rotate
uniformly from 0 degree to 180 degrees. For example, if the
system scans 6 times, then the scan angles would be: 0,
30, 60, 90, 120, and 150 (see the sketch after this list).
• Adaptive Scan: The slope of each scan is determined
based on the localization results from previous scans.
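The first two strategies amount to the following slope generators (in degrees; the adaptive strategy instead runs the voting procedure of Section 5.4 and is omitted here):

import random

def scan_angles(d, strategy="regular", rng=random):
    if strategy == "regular":
        return [i * 180.0 / d for i in range(d)]         # d = 6 -> 0, 30, ..., 150
    return [rng.uniform(0.0, 180.0) for _ in range(d)]   # random scan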
We start with basic MSP and then demonstrate the
performance improvements one step at a time by adding (i)
sequence-based MSP, (ii) iterative MSP, (iii) DBE MSP, and (iv) adaptive
MSP.
8.1 Performance of Basic MSP
The evaluation starts with basic MSP, where we compare the
performance of random scan and regular scan under different
configurations. We intend to illustrate the impact of the number
of anchors M, the number of scans d, and target node density
(number of target nodes N in a fixed-size region) on the
localization error. Table 1 shows the default simulation parameters.
The error of each node is defined as the distance between the
estimated location and the real position. We note that by
default we only use three anchors, which is considerably fewer
than existing range-free solutions [8, 4].
Impact of the Number of Scans: In this experiment, we
compare regular scan with random scan under different numbers
of scans from 3 to 30 in steps of 3. The number of anchors
Table 1. Default Configuration Parameters
Parameter             Description
Field Area            200 × 200 (grid units)
Scan Type             Regular (default) / Random
Anchor Number         3 (default)
Scan Times            6 (default)
Target Node Number    100 (default)
Statistics            Error mean / max
Random Seeds          50 runs
Figure 14. Evaluation of Basic MSP under Random and Regular Scans: (a) Error vs. Number of Scans; (b) Error vs. Anchor Number; (c) Error vs. Number of Target Nodes
Figure 15. Improvements of Sequence-Based MSP over Basic MSP: (a) Error vs. Number of Scans; (b) Error vs. Anchor Number; (c) Error vs. Number of Target Nodes
is 3 by default. Figure 14(a) indicates the following: (i) as
the number of scans increases, the localization error decreases
significantly; for example, localization errors drop more than
60% from 3 scans to 30 scans; (ii) statistically, regular scan
achieves better performance than random scan under identical
number of scans. However, the performance gap reduces as
the number of scans increases. This is expected since a large
number of random numbers converges to a uniform
distribution. Figure 14(a) also demonstrates that MSP requires only
a small number of anchors to perform very well, compared to
existing range-free solutions [8, 4].
Impact of the Number of Anchors: In this experiment, we
compare regular scan with random scan under different
number of anchors from 3 to 30 in steps of 3. The results shown in
Figure 14(b) indicate that (i) as the number of anchor nodes
increases, the localization error decreases, and (ii)
statistically, regular scan obtains better results than random scan with
identical number of anchors. By combining Figures 14(a)
and 14(b), we can conclude that MSP allows a flexible tradeoff
between physical cost (anchor nodes) and soft cost
(localization events).
Impact of the Target Node Density: In this experiment, we
confirm that the density of target nodes has no impact on the
accuracy, which motivated the design of sequence-based MSP.
In this experiment, we compare regular scan with random scan
under different number of target nodes from 10 to 190 in steps
of 20. Results in Figure 14(c) show that mean localization
errors remain constant across different node densities. However,
when the number of target nodes increases, the average
maximum error increases.
Summary: From the above experiments, we can conclude that
in basic MSP, regular scans are better than random scans under
different numbers of anchors and scan events. This is because
regular scans uniformly eliminate the map from different
directions, while random scans would obtain sequences with
redundant overlapping information, if two scans choose two similar
scanning slopes.
8.2 Improvements of Sequence-Based MSP
This section evaluates the benefits of exploiting the order
information among target nodes by comparing sequence-based
MSP with basic MSP. In this and the following sections,
regular scan is used for straight-line scan event generation. The
purpose of using regular scan is to keep the scan events and
the node sequences identical for both sequence-based MSP and
basic MSP, so that the only difference between them is the
sequence processing procedure.
Impact of the Number of Scans: In this experiment, we compare sequence-based MSP with basic MSP under different numbers of scans, from 3 to 30 in steps of 3. Figure 15(a) indicates a significant performance improvement of sequence-based MSP over basic MSP across all scan settings, especially when the number of scans is large. For example, when the number of scans is 30, errors in sequence-based MSP are about 20% of those of basic MSP. We conclude that sequence-based MSP performs extremely well when there are many scan events.
Impact of the Number of Anchors: In this experiment, we use different numbers of anchors, from 3 to 30 in steps of 3. As seen in Figure 15(b), the mean error and maximum error of sequence-based MSP are much smaller than those of basic MSP. Especially when there is a limited number of anchors in the system, e.g., 3 anchors, the error is almost halved by using sequence-based MSP.
Figure 16. Improvements of Iterative MSP (mean and maximum error of iterative sequence-based MSP vs. the number of iterations, compared against basic MSP).
Figure 17. Improvements of DBE MSP (cumulative distribution functions of mean and maximum localization errors for DBE MSP and non-DBE MSP).
Figure 18. The Improvements of Adaptive MSP: (a) Adaptive MSP for a 500 by 80 field; (b) Impact of the Number of Candidate Events (mean error CDF at different angle steps in adaptive scan).
This phenomenon has an interesting explanation: the cutting lines created by anchor nodes are exploited by both basic MSP and sequence-based MSP, so as the number of anchor nodes increases, anchors tend to dominate the contribution and the performance gap lessens.
Impact of the Target Node Density: Figure 15(c) demonstrates the benefits of exploiting order information among target nodes. Since sequence-based MSP makes use of the information among the target nodes, having more target nodes contributes to the overall system accuracy: as the number of target nodes increases, the mean error and maximum error of sequence-based MSP decrease. In contrast, the mean error of basic MSP is not affected by the number of target nodes, as shown in Figure 15(c).
Summary: From the above experiments, we can conclude that exploiting order information among target nodes can improve accuracy significantly, especially when the number of events is large and only a few anchors are available.
8.3 Iterative MSP over Sequence-Based MSP
In this experiment, the same node sequences were processed iteratively multiple times. In Figure 16, the two single marks are the results from basic MSP, since basic MSP does not perform iterations. The two curves present the performance of iterative MSP under different numbers of iterations c. Note that when only a single iteration is used, this method degrades to sequence-based MSP. Therefore, Figure 16 compares the three methods to one another.
Figure 16 shows that the second iteration can reduce the
mean error and maximum error dramatically. After that, the
performance gain gradually reduces, especially when c > 5.
This is because the second iteration allows earlier scans to
exploit the new boundaries created by later scans in the first
iteration. Such exploitation decays quickly over iterations.
8.4 DBE MSP over Iterative MSP
Figure 17, in which we augment iterative MSP with
distribution-based estimation (DBE MSP), shows that DBE
MSP brings statistically better performance. Figure 17 presents the cumulative distributions of localization errors. In general, the two curves of DBE MSP lie slightly to the left of those of non-DBE MSP, which indicates that DBE MSP has a smaller statistical mean error and average maximum error
than non-DBE MSP. We note that because DBE is augmented
on top of the best solution so far, the performance
improvement is not significant. When we apply DBE on basic MSP
methods, the improvement is much more significant. We omit
these results because of space constraints.
8.5 Improvements of Adaptive MSP
This section illustrates the performance of adaptive MSP
over non-adaptive MSP. We note that feedback-based
adaptation can be applied to all MSP methods, since it affects only
the scanning angles but not the sequence processing. In this
experiment, we evaluated how adaptive MSP can improve the
best solution so far. The default angle granularity (step) for
adaptive searching is 5 degrees.
Impact of Area Shape: First, if system settings are regular,
the adaptive method hardly contributes to the results. For a
square area (regular), the performance of adaptive MSP and regular scan is very close. However, if the shape of the area
is not regular, adaptive MSP helps to choose the appropriate
localization events to compensate. Therefore, adaptive MSP
can achieve a better mean error and maximum error as shown
in Figure 18(a). For example, adaptive MSP improves
localization accuracy by 30% when the number of target nodes is
10.
Impact of the Target Node Density: Figure 18(a) shows that
when the node density is low, adaptive MSP brings more
benefit than when node density is high. This phenomenon makes
statistical sense, because the law of large numbers tells us that
node placement approaches a truly uniform distribution when
the number of nodes is increased. Adaptive MSP has an edge when the layout is not uniform.
Figure 19. The Mirage Test-bed (Line Scan)
Figure 20. The 20-node Outdoor Experiments (Wave)
Impact of Candidate Angle Density: Figure 18(b) shows that the smaller the candidate scan angle step, the better the statistical performance in terms of mean error. The rationale is clear: a denser set of candidate scan angles gives adaptive MSP more opportunities to choose an angle close to the optimal one.
8.6 Simulation Summary
Starting from basic MSP, we have demonstrated step by step how four optimizations can be applied on top of each other to improve localization performance. In other words, these optimizations are compatible with each other and can jointly improve the overall performance. We note that our simulations were done under the assumption that the complete node sequence can be obtained without sequence flips. In the next section, we present two real-system implementations that reveal and address these practical issues.
9 System Evaluation
In this section, we present a system implementation of MSP
on two physical test-beds. The first one is called Mirage, a
large indoor test-bed composed of six 4-foot by 8-foot boards,
illustrated in Figure 19. Each board in the system can be used
as an individual sub-system, which is powered, controlled and
metered separately. Three Hitachi CP-X1250 projectors,
connected through a Matrox TripleHead2Go graphics expansion
box, are used to create an ultra-wide integrated display on six
boards. Figure 19 shows that a long tilted line is generated by
the projectors. We have implemented all five versions of MSP
on the Mirage test-bed, running 46 MICAz motes. Unless
mentioned otherwise, the default setting is 3 anchors and 6 scans at
the scanning line speed of 8.6 feet/s. In all of our graphs, each
data point represents the average value of 50 trials. In the
outdoor system, a Dell A525 speaker is used to generate 4.7KHz
sound as shown in Figure 20. We place 20 MICAz motes in the
backyard of a house. Since the location is not completely open,
sound waves are reflected, scattered and absorbed by various
objects in the vicinity, causing a multi-path effect. In the
system evaluation, simple time synchronization mechanisms are
applied on each node.
9.1 Indoor System Evaluation
During indoor experiments, we encountered several real-world problems that are not revealed by simulation. First,
sequences obtained were partial due to misdetection and
message losses. Second, elements in the sequences could flip due
to detection delay, uncertainty in media access, or error in time
synchronization. We show that these issues can be addressed
by using the protection band method described in Section 7.3.
9.1.1 On Scanning Speed and Protection Band
In this experiment, we studied the impact of the scanning
speed and the length of protection band on the performance of
the system. In general, with increasing scanning speed, nodes
have less time to respond to the event and the time gap between
two adjacent nodes shrinks, leading to an increasing number of
partial sequences and sequence flips.
Figure 21 shows the node flip situations for six scans with distinct angles under different scan speeds. The x-axis shows the distance between flipped nodes in the correct node sequence, and the y-axis shows the total number of flips in the six scans. This figure tells us that a faster scan brings not only an increasing number of flips, but also longer-distance flips that require a wider protection band to prevent fatal errors.
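To make the flip metric in Figure 21 concrete, the following sketch (ours, not part of the MSP implementation; all names are hypothetical) counts, for one scan, the out-of-order node pairs in an observed sequence and the distance separating each flipped pair in the ideal node sequence:

from itertools import combinations

def flip_statistics(ideal, observed):
    """Count out-of-order node pairs ("flips") in an observed node sequence.

    ideal:    list of node IDs in the order the scan should have detected them
    observed: list of the same node IDs in the order they were actually reported
    Returns a dict mapping flip distance (separation of the two nodes in the
    ideal sequence) to the number of flipped pairs at that distance.
    """
    ideal_pos = {node: i for i, node in enumerate(ideal)}
    observed_pos = {node: i for i, node in enumerate(observed)}
    flips = {}
    for a, b in combinations(ideal, 2):
        # A pair is flipped if its relative order differs between the two sequences.
        if (ideal_pos[a] - ideal_pos[b]) * (observed_pos[a] - observed_pos[b]) < 0:
            distance = abs(ideal_pos[a] - ideal_pos[b])
            flips[distance] = flips.get(distance, 0) + 1
    return flips

# Example: node 3 was detected one position too early.
print(flip_statistics([1, 2, 3, 4, 5], [1, 3, 2, 4, 5]))  # {1: 1}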
Figure 22(a) shows the effectiveness of the protection band
in terms of reducing the number of unlocalized nodes. When
we use a moderate scan speed (4.3feet/s), the chance of flipping
is rare, therefore we can achieve 0.45 feet mean accuracy
(Figure 22(b)) with 1.6 feet maximum error (Figure 22(c)). With
increasing speeds, the protection band needs to be set to a larger
value to deal with flipping. Interesting phenomena can be observed in Figure 22: on one hand, the protection band can
sharply reduce the number of unlocalized nodes; on the other
hand, protection bands enlarge the area in which a target would
potentially reside, introducing more uncertainty. Thus there is
a concave curve for both mean and maximum error when the
scan speed is at 8.6 feet/s.
9.1.2 On MSP Methods and Protection Band
In this experiment, we show the improvements resulting
from three different methods. Figure 23(a) shows that a
protection band of 0.35 feet is sufficient for the scan speed of
8.57 feet/s. Figures 23(b) and 23(c) show clearly that iterative MSP (with adaptation) achieves the best performance. For example, Figure 23(b) shows that when we set the protection band at 0.05 feet, iterative MSP achieves 0.7 feet accuracy, which is 42% more accurate than the basic design. Similarly, Figures 23(b) and 23(c) show the double-edged effect of the protection band on localization accuracy.
Figure 21. Number of Flips for Different Scan Speeds (flips vs. node distance in the ideal node sequence, for 6 scans at line speeds of 4.3, 8.6, and 14.6 feet/s).
Figure 22. Impact of Protection Band and Scanning Speed: (a) Number of Unlocalized Nodes; (b) Mean Localization Error; (c) Max Localization Error (errors in feet vs. protection band length, for scan line speeds of 4.3, 8.6, and 14.6 feet/s).
Figure 23. Impact of Protection Band under Different MSP Methods: (a) Number of Unlocalized Nodes; (b) Mean Localization Error; (c) Max Localization Error (scan line speed 8.57 feet/s, for basic, sequence-based, and iterative MSP).
Figure 24. Impact of the Number of Anchors and Scans: (a) Number of Unlocalized Nodes; (b) Mean Localization Error; (c) Max Localization Error (protection band 0.35 feet, for 4, 6, and 8 scan events at speed 8.75 feet/s, vs. anchor number).
9.1.3 On Number of Anchors and Scans
In this experiment, we show a tradeoff between hardware
cost (anchors) with soft cost (events). Figure 24(a) shows that
with more cutting lines created by anchors, the chance of
unlocalized nodes increases slightly. We note that with a 0.35 feet
protection band, the percentage of unlocalized nodes is very
small, e.g., in the worst-case with 11 anchors, only 2 out of 46
nodes are not localized due to flipping. Figures 24(b) and 24(c)
show the tradeoff between number of anchors and the number
of scans. Obviously, as the number of anchors increases, the error drops significantly. With 11 anchors we can achieve a localization accuracy as low as 0.25 ∼ 0.35 feet, which is nearly a 60% improvement. Similarly, with an increasing number of scans, the error drops significantly as well; we observe an improvement of about 30% across all anchor settings when we increase the number of scans from 4 to 8. For example, with only 3 anchors, we can
achieve 0.6-foot accuracy with 8 scans.
9.2 Outdoor System Evaluation
The outdoor system evaluation contains two parts: (i)
effective detection distance evaluation, which shows that the
node sequence can be readily obtained, and (ii) sound
propagation based localization, which shows the results of wave-propagation-based localization.
9.2.1 Effective Detection Distance Evaluation
We first evaluate the sequence flip phenomenon in wave propagation. As shown in Figure 25, 20 motes were placed in five groups in front of the speaker, with four nodes in each group at roughly the same distance to the speaker. The gap between adjacent groups was set to 2, 3, 4 and 5 feet, respectively, in four experiments. Figure 26 shows the results. The x-axis in each subgraph indicates the group index; there are four nodes (4 bars) in each group. The y-axis shows the detection rank (order) of each node in the node sequence. As the distance between groups increases, the number of flips in the resulting node sequence decreases.
Figure 25. Wave Detection
Figure 26. Ranks vs. Distances (detection rank vs. group index for group distances of 2, 3, 4, and 5 feet).
Figure 27. Localization Error (Sound): node and anchor positions in the outdoor field (X and Y dimensions in feet).
For example, in the 2-foot distance subgraph, there are quite a few flips between nodes in adjacent and even non-adjacent groups, while in the 5-foot subgraph, flips between different groups disappeared in the test.
9.2.2 Sound Propagation Based Localization
As shown in Figure 20, 20 motes are placed in a grid of 5 rows and 4 columns, with 5 feet between rows and 4 feet between columns. Six 4KHz acoustic wave
propagation events are generated around the mote grid by a speaker.
Figure 27 shows the localization results using iterative MSP
(3 times iterative processing) with a protection band of 3 feet.
The average error of the localization results is 3 feet and the
maximum error is 5 feet with one un-localized node.
We found that sequence flip in wave propagation is more
severe than in the indoor, line-based test. This is expected, due to the high propagation speed of sound. Currently we use MICAz motes, which are equipped with low-quality microphones. We believe that with a better speaker and more events the system can yield better accuracy. Despite the hardware constraints, the MSP algorithm still successfully localized most of the nodes with good accuracy.
10 Conclusions
In this paper, we present the first work that exploits the
concept of node sequence processing to localize sensor nodes. We
demonstrated that we could significantly improve localization
accuracy by making full use of the information embedded in
multiple easy-to-get one-dimensional node sequences. We
proposed four novel optimization methods, exploiting order and
marginal distribution among non-anchor nodes as well as the
feedback information from early localization results.
Importantly, these optimization methods can be used together, and
improve accuracy additively. The practical issues of partial
node sequence and sequence flip were identified and addressed
in two physical system test-beds. We also evaluated
performance at scale through analysis as well as extensive
simulations. Results demonstrate that, requiring neither costly hardware on sensor nodes nor precise event distribution, MSP can achieve sub-foot accuracy with very few anchor nodes, provided that sufficient events are available.
11 References
[1] CC2420 Data Sheet. Available at http://www.chipcon.com/.
[2] P. Bahl and V. N. Padmanabhan. RADAR: An In-Building RF-Based User Location and Tracking System. In IEEE InfoCom '00.
[3] M. Broxton, J. Lifton, and J. Paradiso. Localizing A Sensor Network via Collaborative Processing of Global Stimuli. In EWSN '05.
[4] N. Bulusu, J. Heidemann, and D. Estrin. GPS-Less Low Cost Outdoor Localization for Very Small Devices. IEEE Personal Communications Magazine, 7(4), 2000.
[5] D. Culler, D. Estrin, and M. Srivastava. Overview of Sensor Networks. IEEE Computer Magazine, 2004.
[6] J. Elson, L. Girod, and D. Estrin. Fine-Grained Network Time Synchronization Using Reference Broadcasts. In OSDI '02.
[7] D. K. Goldenberg, P. Bihler, M. Gao, J. Fang, B. D. Anderson, A. Morse, and Y. Yang. Localization in Sparse Networks Using Sweeps. In MobiCom '06.
[8] T. He, C. Huang, B. M. Blum, J. A. Stankovic, and T. Abdelzaher. Range-Free Localization Schemes in Large-Scale Sensor Networks. In MobiCom '03.
[9] B. Kusy, P. Dutta, P. Levis, M. Mar, A. Ledeczi, and D. Culler. Elapsed Time on Arrival: A Simple and Versatile Primitive for Canonical Time Synchronization Services. International Journal of Ad-Hoc and Ubiquitous Computing, 2(1), 2006.
[10] L. Lazos and R. Poovendran. SeRLoc: Secure Range-Independent Localization for Wireless Sensor Networks. In WiSe '04.
[11] M. Maroti, B. Kusy, G. Balogh, P. Volgyesi, A. Nadas, K. Molnar, S. Dora, and A. Ledeczi. Radio Interferometric Geolocation. In SenSys '05.
[12] M. Maroti, B. Kusy, G. Simon, and A. Ledeczi. The Flooding Time Synchronization Protocol. In SenSys '04.
[13] D. Moore, J. Leonard, D. Rus, and S. Teller. Robust Distributed Network Localization with Noisy Range Measurements. In SenSys '04.
[14] R. Nagpal and D. Coore. An Algorithm for Group Formation in an Amorphous Computer. In PDCS '98.
[15] D. Niculescu and B. Nath. Ad-Hoc Positioning System. In GlobeCom '01.
[16] D. Niculescu and B. Nath. Ad-Hoc Positioning System (APS) Using AOA. In InfoCom '03.
[17] N. B. Priyantha, A. Chakraborty, and H. Balakrishnan. The Cricket Location-Support System. In MobiCom '00.
[18] K. Römer. The Lighthouse Location System for Smart Dust. In MobiSys '03.
[19] A. Savvides, C. C. Han, and M. B. Srivastava. Dynamic Fine-Grained Localization in Ad-Hoc Networks of Sensors. In MobiCom '01.
[20] R. Stoleru, T. He, J. A. Stankovic, and D. Luebke. A High-Accuracy, Low-Cost Localization System for Wireless Sensor Networks. In SenSys '05.
[21] R. Stoleru, P. Vicaire, T. He, and J. A. Stankovic. StarDust: A Flexible Architecture for Passive Localization in Wireless Sensor Networks. In SenSys '06.
[22] E. W. Weisstein. Plane Division by Lines. mathworld.wolfram.com.
[23] B. H. Wellenhoff, H. Lichtenegger, and J. Collins. Global Positioning System: Theory and Practice, Fourth Edition. Springer Verlag, 1997.
[24] K. Whitehouse. The Design of Calamari: An Ad-Hoc Localization System for Sensor Networks. University of California at Berkeley, 2002.
[25] Z. Zhong. MSP Evaluation and Implementation Report. Available at http://www.cs.umn.edu/~zhong/MSP.
[26] G. Zhou, T. He, and J. A. Stankovic. Impact of Radio Irregularity on Wireless Sensor Networks. In MobiSys '04.
StarDust: A Flexible Architecture for Passive Localization in Wireless Sensor Networks

Abstract
The problem of localization in wireless sensor networks where nodes do not use ranging hardware remains a challenging problem, when considering the required location accuracy, energy expenditure and the duration of the localization phase. In this paper we propose a framework, called StarDust, for wireless sensor network localization based on passive optical components. In the StarDust framework, sensor nodes are equipped with optical retro-reflectors. An aerial device projects light towards the deployed sensor network, and records an image of the reflected light. An image processing algorithm is developed for obtaining the locations of sensor nodes. For matching a node ID to a location we propose a constraint-based label relaxation algorithm. We propose and develop localization techniques based on four types of constraints: node color, neighbor information, deployment time for a node and deployment location for a node. We evaluate the performance of a localization system based on our framework by localizing a network of 26 sensor nodes deployed in a 120 × 60 ft2 area. The localization accuracy ranges from 2 ft to 5 ft while the localization time ranges from 10 milliseconds to 2 minutes.

1 Introduction
Wireless Sensor Networks (WSN) have been envisioned
to revolutionize the way humans perceive and interact with
the surrounding environment. One vision is to embed tiny
sensor devices in outdoor environments, by aerial
deployments from unmanned air vehicles. The sensor nodes form
a network and collaborate (to compensate for the extremely
scarce resources available to each of them: computational
power, memory size, communication capabilities) to
accomplish the mission. Through collaboration, redundancy and
fault tolerance, the WSN is then able to achieve
unprecedented sensing capabilities.
A major step forward has been accomplished by
developing systems for several domains: military surveillance [1]
[2] [3], habitat monitoring [4] and structural monitoring [5].
Even after these successes, several research problems remain
open. Among these open problems is sensor node
localization, i.e., how to find the physical position of each sensor
node. Despite the attention the localization problem in WSN
has received, no universally acceptable solution has been
developed. There are several reasons for this. On one hand,
localization schemes that use ranging are typically high end
solutions. GPS ranging hardware consumes energy, it is
relatively expensive (if high accuracy is required) and poses
form factor challenges that move us away from the vision
of dust size sensor nodes. Ultrasound has a short range and
is highly directional. Solutions that use the radio transceiver
for ranging either have not produced encouraging results (if
the received signal strength indicator is used) or are sensitive
to environment (e.g., multipath). On the other hand,
localization schemes that only use the connectivity information
for inferring location information are characterized by low
accuracies: ≈ 10 ft in controlled environments, 40−50 ft in
realistic ones.
To address these challenges, we propose a framework for
WSN localization, called StarDust, in which the
complexity associated with the node localization is completely
removed from the sensor node. The basic principle of the
framework is localization through passivity: each sensor
node is equipped with a corner-cube retro-reflector and
possibly an optical filter (a coloring device). An aerial
vehicle projects light onto the deployment area and records
images containing retro-reflected light beams (they appear as
luminous spots). Through image processing techniques, the
locations of the retro-reflectors (i.e., sensor nodes) are determined. For inferring the identity of the sensor node present
at a particular location, the StarDust framework develops a
constraint-based node ID relaxation algorithm.
The main contributions of our work are the following. We
propose a novel framework for node localization in WSNs
that is very promising and allows for many future extensions
and more accurate results. We propose a constraint-based
label relaxation algorithm for mapping node IDs to the
locations, and four constraints (node, connectivity, time and
space), which are building blocks for very accurate and very
fast localization systems. We develop a sensor node
hardware prototype, called a SensorBall. We evaluate the
performance of a localization system for which we obtain location
accuracies of 2 − 5 ft with a localization duration ranging
from 10 milliseconds to 2 minutes. We investigate the range
of a system built on our framework by considering the realities of physical phenomena that occur during light propagation through the atmosphere.
The rest of the paper is structured as follows. Section 2
is an overview of the state of art. The design of the
StarDust framework is presented in Section 3. One
implementation and its performance evaluation are in Sections 4 and
5, followed by a suite of system optimization techniques, in
Section 6. In Section 7 we present our conclusions.
2 Related Work
We present the prior work in localization in two major
categories: the range-based, and the range-free schemes.
The range-based localization techniques have been
designed to use either more expensive hardware (and hence
higher accuracy) or just the radio transceiver. Ranging
techniques dependent on hardware are the time-of-flight (ToF)
and the time-difference-of-arrival(TDoA). Solutions that use
the radio are based on the received signal strength indicator
(RSSI) and more recently on radio interferometry.
The ToF localization technique that is most widely used is
the GPS. GPS is a costly solution for a high accuracy
localization of a large scale sensor network. AHLoS [6] employs
a TDoA ranging technique that requires extensive hardware
and solves relatively large nonlinear systems of equations.
The Cricket location-support system (TDoA) [7] can achieve
a location granularity of tens of inches with highly
directional and short range ultrasound transceivers. In [2] the
location of a sniper is determined in an urban terrain, by
using the TDoA between an acoustic wave and a radio beacon.
The PushPin project [8] uses the TDoA between ultrasound
pulses and light flashes for node localization. The RADAR
system [9] uses the RSSI to build a map of signal strengths
as emitted by a set of beacon nodes. A mobile node is
located by the best match, in the signal strength space, with a
previously acquired signature. In MAL [10], a mobile node
assists in measuring the distances (acting as constraints)
between nodes until a rigid graph is generated. The localization
problem is formulated as an on-line state estimation in a
nonlinear dynamic system [11]. A cooperative ranging that
attempts to achieve a global positioning from distributed local
optimizations is proposed in [12]. A very recent, remarkable,
localization technique is based on radio interferometry, RIPS
[13], which utilizes two transmitters to create an interfering
signal. The frequencies of the emitters are very close to each
other, thus the interfering signal will have a low frequency
envelope that can be easily measured. The ranging technique
performs very well. The long time required for localization
and multi-path environments pose significant challenges.
Real environments create additional challenges for the
range based localization schemes. These have been
emphasized by several studies [14] [15] [16]. To address these
challenges, and others (hardware cost, the energy expenditure,
the form factor, the small range, localization time), several
range-free localization schemes have been proposed. Sensor
nodes use primarily connectivity information for inferring
proximity to a set of anchors. In the Centroid localization
scheme [17], a sensor node localizes to the centroid of its
proximate beacon nodes. In APIT [18] each node decides its
position based on the possibility of being inside or outside of
a triangle formed by any three beacons within node"s
communication range. The Gradient algorithm [19], leverages
the knowledge about the network density to infer the average
one hop length. This, in turn, can be transformed into
distances to nodes with known locations. DV-Hop [20] uses the
hop by hop propagation capability of the network to forward
distances to landmarks. More recently, several localization
schemes that exploit the sensing capabilities of sensor nodes,
have been proposed. Spotlight [21] creates well controlled
(in time and space) events in the network while the sensor
nodes detect and timestamp this events. From the
spatiotemporal knowledge for the created events and the temporal
information provided by sensor nodes, nodes" spatial
information can be obtained. In a similar manner, the Lighthouse
system [22] uses a parallel light beam, that is emitted by an
anchor which rotates with a certain period. A sensor node
detects the light beam for a period of time, which is
dependent on the distance between it and the light emitting device.
Many of the above localization solutions target specific
sets of requirements and are useful for specific applications.
StarDust differs in that it addresses a particular demanding
set of requirements that are not yet solved well. StarDust is
meant for localizing air dropped nodes where node
passiveness, high accuracy, low cost, small form factor and rapid
localization are all required. Many military applications have
such requirements.
3 StarDust System Design
The design of the StarDust system (and its name) was
inspired by the similarity between a deployed sensor network,
in which sensor nodes indicate their presence by emitting
light, and the Universe consisting of luminous and
illuminated objects: stars, galaxies, planets, etc.
The main difficulty when applying the above ideas to the
real world is the complexity of the hardware that needs to
be put on a sensor node so that the emitted light can be
detected from thousands of feet. The energy expenditure for
producing an intense enough light beam is also prohibitive.
Instead, what we propose to use for sensor node
localization is a passive optical element called a retro-reflector.
The most common retro-reflective optical component is a
Corner-Cube Retroreflector (CCR), shown in Figure 1(a). It
consists of three mutually perpendicular mirrors.
Figure 1. Corner-Cube Retroreflector (a) and an array of CCRs molded in plastic (b)
The interesting property of this optical component is that an incoming
beam of light is reflected back, towards the source of the
light, irrespective of the angle of incidence. This is in
contrast with a mirror, which needs to be precisely positioned to
be perpendicular to the incident light. A very common and
inexpensive implementation of an array of CCRs is the
retroreflective plastic material used on cars and bicycles for night
time detection, shown in Figure 1(b).
In the StarDust system, each node is equipped with a
small (e.g. 0.5in2) array of CCRs and the enclosure has
self-righting capabilities that orient the array of CCRs
predominantly upwards. It is critical to understand that the
upward orientation does not need to be exact. Even when large
angular variations from a perfectly upward orientation are
present, a CCR will return the light in the exact same
direction from which it came.
In the remaining part of the section, we present the
architecture of the StarDust system and the design of its main
components.
3.1 System Architecture
The envisioned sensor network localization scenario is as
follows:
• The sensor nodes are released, possibly in a controlled
manner, from an aerial vehicle during the night.
• The aerial vehicle hovers over the deployment area and
uses a strobe light to illuminate it. The sensor nodes,
equipped with CCRs and optical filters (acting as
coloring devices) have self-righting capabilities and
retroreflect the incoming strobe light. The retro-reflected
light is either white, as the originating source light,
or colored, due to optical filters.
• The aerial vehicle records a sequence of two images
very close in time (msec level). One image is taken
when the strobe light is on, the other when the strobe
light is off. The acquired images are used for obtaining
the locations of sensor nodes (which appear as luminous
spots in the image).
• The aerial vehicle executes the mapping of node IDs to
the identified locations in one of the following ways: a)
by using the color of a retro-reflected light, if a sensor
node has a unique color; b) by requiring sensor nodes
to establish neighborhood information and report it to
a base station; c) by controlling the time sequence of
sensor nodes deployment and recording additional
images; d) by controlling the location where a sensor node is deployed.
Figure 2. The StarDust system architecture
• The computed locations are disseminated to the sensor
network.
The architecture of the StarDust system is shown in
Figure 2. The architecture consists of two main components:
the first is centralized and it is located on a more powerful
device. The second is distributed and it resides on all
sensor nodes. The Central Device consists of the following: the
Light Emitter, the Image Processing module, the Node ID
Mapping module and the Radio Model. The distributed
component of the architecture is the Transfer Function, which
acts as a filter for the incoming light. The aforementioned
modules are briefly described below:
• Light Emitter - It is a strobe light, capable of producing
very intense, collimated light pulses. The emitted light
is non-monochromatic (unlike a laser) and it is
characterized by a spectral density Ψ(λ), a function of the
wavelength. The emitted light is incident on the CCRs
present on sensor nodes.
• Transfer Function Φ(Ψ(λ)) - This is a bandpass filter
for the incident light on the CCR. The filter allows a
portion of the original spectrum, to be retro-reflected.
From here on, we will refer to the transfer function as
the color of a sensor node.
• Image Processing - The Image Processing module
acquires high resolution images. From these images the
locations and the colors of sensor nodes are obtained.
If only one set of pictures can be taken (i.e., one
location of the light emitter/image analysis device), then the
map of the field is assumed to be known as well as the
distance between the imaging device and the field. The
aforementioned assumptions (field map and distance to
it) are not necessary if the images can be simultaneously
taken from different locations. It is important to remark
here that the identity of a node can not be directly
obtained through Image Processing alone, unless a
specific characteristic of a sensor node can be identified in
the image.
• Node ID Matching - This module uses the detected
locations and through additional techniques (e.g., sensor
node coloring and connectivity information (G(Λ,E))
from the deployed network) to uniquely identify the
sensor nodes observed in the image. The connectivity
information is represented by neighbor tables sent from each sensor node to the Central Device.
Algorithm 1 Image Processing
1: Background filtering
2: Retro-reflected light recognition through intensity
filtering
3: Edge detection to obtain the location of sensor nodes
4: Color identification for each detected sensor node
• Radio Model - This component provides an estimate of
the radio range to the Node ID Matching module. It
is only used by node ID matching techniques that are
based on the radio connectivity in the network. The
estimate of the radio range R is based on the sensor node
density (obtained through the Image Processing
module) and the connectivity information (i.e., G(Λ,E)).
The two main components of the StarDust architecture
are the Image Processing and the Node ID Mapping. Their
design and analysis is presented in the sections that follow.
3.2 Image Processing
The goal of the Image Processing Algorithm (IPA) is to
identify the location of the nodes and their color. Note that
IPA does not identify which node fell where, but only what
is the set of locations where the nodes fell.
IPA is executed after an aerial vehicle records two
pictures: one in which the field of deployment is illuminated and
one when no illuminations is present. Let Pdark be the
picture of the deployment area, taken when no light was emitted
and Plight be the picture of the same deployment area when a
strong light beam was directed towards the sensor nodes.
The proposed IPA has several steps, as shown in
Algorithm 1. The first step is to obtain a third picture Pfilter where
only the differences between Pdark and Plight remain. Let us
assume that Pdark has a resolution of n × m, where n is the
number of pixels in a row of the picture, while m is the
number of pixels in a column of the picture. Then Pdark is
composed of n × m pixels denoted Pdark(i, j), 1 ≤ i ≤ n, 1 ≤ j ≤ m. Similarly, Plight is composed of n × m pixels denoted Plight(i, j), 1 ≤ i ≤ n, 1 ≤ j ≤ m.
Each pixel P is described by an RGB value where the R
value is denoted by PR, the G value is denoted by PG, and
the B value is denoted by PB. IPA then generates the third
picture, Pfilter, through the following transformations:
P^R_filter(i, j) = P^R_light(i, j) − P^R_dark(i, j)
P^G_filter(i, j) = P^G_light(i, j) − P^G_dark(i, j)
P^B_filter(i, j) = P^B_light(i, j) − P^B_dark(i, j)        (1)
After this transformation, all the features that appeared in
both Pdark and Plight are removed from Pfilter. This simplifies
the recognition of light retro-reflected by sensor nodes.
The second step consists of identifying the elements
contained in Pfilter that retro-reflect light. For this, an intensity
filter is applied to Pfilter. First IPA converts Pfilter into a
grayscale picture. Then the brightest pixels are identified and
used to create Preflect. This step is eased by the fact that the
reflecting nodes should appear much brighter than any other
illuminated object in the picture.
Figure 3. Probabilistic label relaxation
The third step runs an edge detection algorithm on Preflect
to identify the boundary of the nodes present. A tool such as
Matlab provides a number of edge detection techniques. We
used the bwboundaries function. For the obtained edges, the
location (x,y) (in the image) of each node is determined by
computing the centroid of the points constituting its edges.
Standard computer graphics techniques [23] are then used
to transform the 2D locations of sensor nodes detected in
multiple images into 3D sensor node locations. The color of
the node is obtained as the color of the pixel located at (x,y)
in Plight.
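As an illustration of the steps in Algorithm 1, the sketch below implements the differencing, intensity filtering and centroid computation with NumPy and SciPy; the threshold value and all function names are our own assumptions rather than part of the StarDust implementation, and the 2D-to-3D transformation and color classification are left out.

import numpy as np
from scipy import ndimage

def detect_reflectors(p_dark, p_light, intensity_threshold=200):
    """Return (x, y) centroids and RGB colors of retro-reflecting spots.

    p_dark, p_light: HxWx3 uint8 arrays (images without/with the strobe light).
    intensity_threshold: minimum grayscale brightness kept after differencing
                         (an assumed value; StarDust does not specify one).
    """
    # Step 1: keep only the differences between the two images (Equation 1).
    p_filter = p_light.astype(np.int16) - p_dark.astype(np.int16)
    p_filter = np.clip(p_filter, 0, 255).astype(np.uint8)

    # Step 2: intensity filtering on the grayscale version of the difference.
    gray = p_filter.mean(axis=2)
    p_reflect = gray > intensity_threshold

    # Step 3: group bright pixels into connected regions and take centroids.
    labeled, num_spots = ndimage.label(p_reflect)
    centroids = ndimage.center_of_mass(p_reflect, labeled, range(1, num_spots + 1))

    # Step 4: read the color of each spot from the illuminated image.
    locations, colors = [], []
    for cy, cx in centroids:
        x, y = int(round(cx)), int(round(cy))
        locations.append((x, y))
        colors.append(tuple(p_light[y, x]))
    return locations, colors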
3.3 Node ID Matching
The goal of the Node ID Matching module is to
obtain the identity (node ID) of a luminous spot in the
image, detected to be a sensor node. For this, we define V =
{(x1,y1),(x2,y2),...,(xm,ym)} to be the set of locations of
the sensor nodes, as detected by the Image Processing
module and Λ = {λ1,λ2,...,λm} to be the set of unique node IDs
assigned to the m sensor nodes, before deployment. From
here on, we refer to node IDs as labels.
We model the problem of finding the label λj of a node ni
as a probabilistic label relaxation problem, frequently used
in image processing/understanding. In the image processing
domain, scene labeling (i.e., identifying objects in an
image) plays a major role. The goal of scene labeling is to
assign a label to each object detected in an image, such that
an appropriate image interpretation is achieved. It is
prohibitively expensive to consider the interactions among all
the objects in an image. Instead, constraints placed among
nearby objects generate local consistencies and through
iteration, global consistencies can be obtained.
The main idea of the sensor node localization through
probabilistic label relaxation is to iteratively compute the
probability of each label being the correct label for a
sensor node, by taking into account, at each iteration, the
support for a label. The support for a label can be understood
as a hint or proof, that a particular label is more likely to be
the correct one, when compared with the other potential
labels for a sensor node. We pictorially depict this main idea
in Figure 3. As shown, node ni has a set of candidate
labels {λ1,...,λk}. Each of the labels has a different value
for the Support function Q(λk). We defer the explanation
of how the Support function is implemented until the
subsections that follow, where we provide four concrete
techniques. Formally, the algorithm is outlined in Algorithm 2,
where the equations necessary for computing the new
probability Pni(λk) for a label λk of a node ni, are expressed by the
Algorithm 2 Label Relaxation
1: for each sensor node ni do
2: assign equal prob. to all possible labels
3: end for
4: repeat
5: converged ← true
6: for each sensor node ni do
7: for each each label λj of ni do
8: compute the Support label λj: Equation 4
9: end for
10: compute K for the node ni: Equation 3
11: for each each label λj do
12: update probability of label λj: Equation 2
13: if |new prob.−old prob.| ≥ ε then
14: converged ← false
15: end if
16: end for
17: end for
18: until converged = true
following equations:
P^{s+1}_{ni}(λk) = (1 / K_{ni}) · P^s_{ni}(λk) · Q^s_{ni}(λk)        (2)

where K_{ni} is a normalizing constant, given by:

K_{ni} = ∑_{k=1..N} P^s_{ni}(λk) · Q^s_{ni}(λk)        (3)

and Q^s_{ni}(λk) is:

Q^s_{ni}(λk) = support for label λk of node ni        (4)
The label relaxation algorithm is iterative and polynomial in the size of the network (number of nodes). The pseudo-code is shown in Algorithm 2. It initializes the
probabilities associated with each possible label, for a node ni,
through a uniform distribution. At each iteration s, the
algorithm updates the probability associated with each label, by
considering the Support Qs
ni
(λk) for each candidate label of
a sensor node.
In the sections that follow, we describe four different
techniques for implementing the Support function: based on
node coloring, radio connectivity, the time of deployment
(time) and the location of deployment (space). While some
of these techniques are simplistic, they are primitives which,
when combined, can create powerful localization systems.
These design techniques have different trade-offs, which we
will present in Section 3.3.6.
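A minimal sketch of Algorithm 2 and Equations 2-4 follows, under the assumption that the candidate labels and a generic support callback are supplied by the caller; any of the four support functions described next can be plugged in (function and variable names are ours).

def label_relaxation(nodes, labels, support, epsilon=1e-3, max_iters=100):
    """Generic constraint-based label relaxation (Algorithm 2).

    nodes:   list of node identifiers detected in the image
    labels:  list of candidate labels (node IDs assigned before deployment)
    support: function support(probs, node, label) -> float, i.e. Q^s_ni(lambda_k)
    Returns a dict: node -> {label: probability}.
    """
    # Lines 1-3: start from a uniform distribution over all candidate labels.
    probs = {n: {l: 1.0 / len(labels) for l in labels} for n in nodes}

    for _ in range(max_iters):
        converged = True
        for n in nodes:
            # Equation 4: compute the support for every candidate label.
            q = {l: support(probs, n, l) for l in labels}
            # Equation 3: normalizing constant K_ni.
            k_ni = sum(probs[n][l] * q[l] for l in labels) or 1.0
            for l in labels:
                # Equation 2: multiplicative update followed by normalization.
                new_p = probs[n][l] * q[l] / k_ni
                if abs(new_p - probs[n][l]) >= epsilon:
                    converged = False
                probs[n][l] = new_p
        if converged:
            break
    return probs

With a support function that is identically 1, the loop degenerates to the color and time constrained cases described below; the connectivity and space constraints supply non-trivial support functions.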
3.3.1 Relaxation with Color Constraints
The unique mapping between a sensor node"s position
(identified by the image processing) and a label can be
obtained by assigning a unique color to each sensor node. For
this we define C = {c1,c2,...,cn} to be the set of unique
colors available and M : Λ → C to be a one-to-one mapping of
labels to colors. This mapping is known prior to the sensor
node deployment (from node manufacturing).
In the case of color constrained label relaxation, the
support for label λk is expressed as follows:
Q^s_{ni}(λk) = 1        (5)
As a result, the label relaxation algorithm (Algorithm 2)
consists of the following steps: one label is assigned to each
sensor node (lines 1-3 of the algorithm), implicitly with a probability Pni(λk) = 1; the algorithm executes a single iteration, in which the support function simply reiterates the confidence in the unique labeling.
However, it is often the case that unique colors for each
node will not be available. It is interesting to discuss here the
influence that the size of the coloring space (i.e., |C|) has on
the accuracy of the localization algorithm. Several cases are
discussed below:
• If |C| = 0, no colors are used and the sensor nodes are
equipped with simple CCRs that reflect back all the
incoming light (i.e., no filtering, and no coloring of the
incoming light). From the image processing system, the
position of sensor nodes can still be obtained. Since
all nodes appear white, no single sensor node can be
uniquely identified.
• If |C| = m − 1, there are enough unique colors for all nodes (one node remains white, i.e., no coloring), and the problem is trivially solved. Each node can be identified,
based on its unique color. This is the scenario for the
relaxation with color constraints.
• If |C| ≥ 1, there are several options for how to
partition the coloring space. If C = {c1} one possibility is
to assign the color c1 to a single node, and leave the
remaining m−1 sensor nodes white, or to assign the color
c1 to more than one sensor node. One can observe that
once a color is assigned uniquely to a sensor node, in
effect, that sensor node is given the status of anchor,
or node with known location.
It is interesting to observe that there is an entire spectrum
of possibilities for how to partition the set of sensor nodes
in equivalence classes (where an equivalence class is
represented by one color), in order to maximize the success of the
localization algorithm. One of the goals of this paper is to
understand how the size of the coloring space and its
partitioning affect localization accuracy.
Despite the simplicity of this method of constraining the
set of labels that can be assigned to a node, we will show that
this technique is very powerful, when combined with other
relaxation techniques.
3.3.2 Relaxation with Connectivity Constraints
Connectivity information, obtained from the sensor
network through beaconing, can provide additional information
for locating sensor nodes. In order to gather connectivity
information, the following need to occur: 1) after deployment,
through beaconing of HELLO messages, sensor nodes build
their neighborhood tables; 2) each node sends its neighbor
table information to the Central device via a base station.
First, let us define G = (Λ,E) to be the weighted
connectivity graph built by the Central device from the received
neighbor table information. In G, the edge (λi, λj) has a weight gij, represented by the number of beacons sent by λj and received by λi. In addition, let R be the radio range of the sensor nodes.
Figure 4. Label relaxation with connectivity constraints
The main idea of the connectivity constrained label
relaxation is depicted in Figure 4 in which two nodes ni and
nj have been assigned all possible labels. The confidence in
each of the candidate labels for a sensor node, is represented
by a probability, shown in a dotted rectangle.
It is important to remark that through beaconing and the
reporting of neighbor tables to the Central Device, a global
view of all constraints in the network can be obtained. It
is critical to observe that these constraints are among labels.
As shown in Figure 4 two constraints exist between nodes ni
and nj. The constraints are depicted by gi2,j2 and gi2,jM, the number of beacons sent by the labels λj2 and λjM and received by the label λi2.
The support for the label λk of sensor node ni, resulting
from the interaction (i.e., within radio range) with sensor
node nj is given by:
Q^s_{ni}(λk) = ∑_{m=1..M} g_{λk λm} · P^s_{nj}(λm)        (6)
As a result, the localization algorithm (Algorithm 3) consists of the following steps: all labels are assigned to each
sensor node (lines 1-3 of the algorithm), and implicitly each
label has a probability initialized to Pni(λk) = 1/|Λ|; in each
iteration, the probabilities for the labels of a sensor node are
updated, when considering the interaction with the labels of
sensor nodes within R. It is important to remark that the
identity of the nodes within R is not known, only the candidate
labels and their probabilities. The relaxation algorithm
converges when, during an iteration, the probability of no label
is updated by more than ε.
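The following sketch shows one way to implement Equation 6 as a support callback for the generic relaxation loop sketched earlier; it aggregates the per-pair supports over all image-detected nodes within R, and the beacon-count table g and neighbor lists are assumed to be built from the reported neighbor tables (all names are ours).

def connectivity_support(probs, node, label, g, neighbors_of):
    """Support Q^s_ni(lambda_k) from Equation 6: beacon counts between candidate
    labels, weighted by the neighbors' current label probabilities.

    g:            dict mapping (label_a, label_b) -> number of beacons sent by
                  label_b and received by label_a
    neighbors_of: dict mapping a detected node to the detected nodes within range R
    """
    total = 0.0
    for neighbor in neighbors_of[node]:
        for other_label, p in probs[neighbor].items():
            total += g.get((label, other_label), 0) * p
    return total

# Bind g and neighbors_of (e.g., with functools.partial) to obtain the
# support(probs, node, label) callback expected by label_relaxation().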
The label relaxation algorithm based on connectivity constraints enforces such constraints between pairs of sensor nodes. For a large-scale sensor network deployment, it is not
feasible to consider all pairs of sensor nodes in the network.
Hence, the algorithm should only consider pairs of sensor
nodes that are within a reasonable communication range (R).
We assume a circular radio range and a symmetric
connectivity. In the remaining part of the section we propose a
simple analytical model that estimates the radio range R for
medium-connected networks (less than 20 neighbors per R).
We consider the following to be known: the size of the
deployment field (L), the number of sensor nodes deployed (N)
Algorithm 3 Localization
1: Estimate the radio range R
2: Execute the Label Relaxation Algorithm with Support
Function given by Equation 6 for neighbors less than R
apart
3: for each sensor node ni do
4: node identity is λk with max. prob.
5: end for
and the total number of unidirectional (i.e., not symmetric)
one-hop radio connections in the network (k). For our
analysis, we uniformly distribute the sensor nodes in a square area
of length L, by using a grid of unit length L/√N. We use the substitution u = L/√N to simplify the notation, in order to distinguish the following cases: if u ≤ R ≤ √2·u, each node has four neighbors (the expected k = 4N); if √2·u ≤ R ≤ 2u, each node has eight neighbors (the expected k = 8N); if 2u ≤ R ≤ √5·u, each node has twelve neighbors (the expected k = 12N); if √5·u ≤ R ≤ 3u, each node has twenty neighbors (the expected k = 20N).
For a given t = k/4N, we take R to be the middle of the interval. As an example, if t = 5 then R = (3 + √5)u/2. A
quadratic fitting for R over the possible values of t, produces
the following closed-form solution for the communication
range R, as a function of network connectivity k, assuming L
and N constant:
R(k) = (L / √N) · ( −0.051·(k/4N)² + 0.66·(k/4N) + 0.6 )        (7)
We investigate the accuracy of our model in Section 5.2.1.
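Equation 7 can be transcribed directly; the short sketch below (function name ours) is the kind of estimate the Radio Model component could provide.

import math

def estimate_radio_range(L, N, k):
    """Estimate the communication range R from Equation 7.

    L: side length of the (square) deployment field
    N: number of deployed sensor nodes
    k: total number of unidirectional one-hop radio connections reported
    """
    t = k / (4.0 * N)  # normalized connectivity, as in the derivation above
    return (L / math.sqrt(N)) * (-0.051 * t**2 + 0.66 * t + 0.6)

# Example: t = 5 gives R close to (3 + sqrt(5))u/2, as in the text.
print(estimate_radio_range(L=100.0, N=100, k=2000))  # 26.25, with u = L/sqrt(N) = 10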
3.3.3 Relaxation with Time Constraints
Time constraints can be treated similarly with color
constraints. The unique identification of a sensor node can be
obtained by deploying sensor nodes individually, one by one,
and recording a sequence of images. The sensor node that is
identified as new in the last picture (it was not identified in
the picture before last) must be the last sensor node dropped.
In a similar manner with color constrained label
relaxation, the time constrained approach is very simple, but may
take too long, especially for large scale systems. While it
can be used in practice, it is unlikely that only a time
constrained label relaxation is used. As we will see, by
combining constrained-based primitives, realistic localization
systems can be implemented.
The support function for the label relaxation with time
constraints is defined identically with the color constrained
relaxation:
Q^s_{ni}(λk) = 1        (8)
The localization algorithm (Algorithm 2) consists of the following steps: one label is assigned to each sensor node (lines 1-3 of the algorithm), implicitly with a probability Pni(λk) = 1; the algorithm executes a single iteration, in which the support function simply reiterates the confidence in the unique labeling.
Figure 5. Relaxation with space constraints
Figure 6. Probability distribution of distances (PDF of distance D for σ = 0.5, 1, 2)
Figure 7. Distribution of nodes (node density over the X-Y plane, σ = 1)
3.3.4 Relaxation with Space Constraints
Spatial information related to sensor deployment can also
be employed as another input to the label relaxation
algorithm. To do that, we use two types of locations: the node
location pn and the label location pl. The former pn is defined
as the position of nodes (xn,yn,zn) after deployment, which
can be obtained through Image Processing as mentioned in
Section 3.2. The latter, pl, is defined as the location (xl, yl, zl) where a node is dropped. We use D^{ni}_{λm} to denote the horizontal distance between the location of the label λm and the location of the node ni; clearly, D^{ni}_{λm} = √((xn − xl)² + (yn − yl)²).
At the time of a sensor node release, the one-to-one
mapping between the node and its label is known. In other words,
the label location is the same as the node location at the
release time. After release, the label location information is
partially lost due to the random factors such as wind and
surface impact. However, statistically, the node locations are
correlated with label locations. Such correlation depends on
the airdrop methods employed and environments. For the
sake of simplicity, let's assume nodes are dropped from a helicopter hovering in the air. Wind can be
decomposed into three components X,Y and Z. Only X and
Y affect the horizontal distance a node can travel.
According to [24], we can assume that X and Y follow an
independent normal distribution. Therefore, the absolute value of
the wind speed follows a Rayleigh distribution. Obviously
the higher the wind speed is, the further a node would land
away horizontally from the label location. If we assume that
the distance D is a function of the wind speed V [25] [26],
we can obtain the probability distribution of D under a given
wind speed distribution. Without loss of generality, we
assume that D is proportional to the wind speed. Therefore,
D follows the Rayleigh distribution as well. As shown in
Figure 5, the spatial-based relaxation is a recursive process
to assign the probability that a node has a certain label, using the distances between the location of the node and multiple label locations.
We note that the distribution of distance D affects the
probability with which a label is assigned. It is not
necessarily true that the nearest label is always chosen. For example,
if D follows the Rayleigh(σ²) distribution, we can obtain the Probability Density Function (PDF) of distances as shown in Figure 6. This figure indicates that the probability of a node falling straight down is very small under windy conditions (σ > 0), and that the distance D is affected by σ. The
spatial distribution of nodes for σ = 1 is shown in Figure 7.
Strong wind with a high σ value leads to a larger node
dispersion. More formally, given a probability density function
PDF(D), the support for label λk of sensor node ni can be
formulated as:
Q^s_{ni}(λk) = PDF(D^{ni}_{λk})        (9)
It is interesting to point out two special cases. First, if all
nodes are released at once (i.e., only one label location for
all released nodes), the distance D from a node to all labels
is the same. In this case, P^{s+1}_{ni}(λk) = P^s_{ni}(λk), which indicates that we cannot use the spatial-based relaxation to recursively narrow down the potential labels for a node. Second, if nodes are released at different locations that are far away from each other, we have: (i) if node ni has label λk, then P^s_{ni}(λk) → 1 when s → ∞; (ii) if node ni does not have label λk, then P^s_{ni}(λk) → 0 when s → ∞. In this second scenario, there are multiple
labels (one label per release), hence it is possible to correlate
release times (labels) with positions on the ground. These
results indicate that spatial-based relaxation can label the node
with a very high probability if the physical separation among
nodes is large.
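A minimal sketch of the spatial support of Equation 9 follows, under the illustrative assumption used above that the drift distance D is Rayleigh distributed; the value of σ and the function names are our own choices.

import math

def rayleigh_pdf(d, sigma):
    """PDF of the Rayleigh(sigma^2) distribution, used here to model the
    horizontal drift D of a node from its release (label) location."""
    if d < 0:
        return 0.0
    return (d / sigma**2) * math.exp(-d**2 / (2 * sigma**2))

def spatial_support(node_xy, label_xy, sigma=1.0):
    """Support Q^s_ni(lambda_k) from Equation 9: evaluate the drift PDF at the
    horizontal distance between the detected node and the label's release point."""
    dx = node_xy[0] - label_xy[0]
    dy = node_xy[1] - label_xy[1]
    return rayleigh_pdf(math.hypot(dx, dy), sigma)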
3.3.5 Relaxation with Color and Connectivity
Constraints
One of the most interesting features of the StarDust
architecture is that it allows for hybrid localization solutions to be
built, depending on the system requirements. One example
is a localization system that uses the color and connectivity
constraints. In this scheme, the color constraints are used for
reducing the number of candidate labels for sensor nodes,
to a more manageable value. As a reminder, in the
connectivity constrained relaxation, all labels are candidate labels
for each sensor node. The color constraints are used in the
initialization phase of Algorithm 3 (lines 1-3). After the
initialization, the standard connectivity constrained relaxation
algorithm is used.
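
A minimal sketch of this hybrid initialization is given below: a node keeps as candidate labels only those whose color matches its own, with equal initial probabilities, after which the connectivity constrained relaxation proceeds unchanged. The node and label colors are hypothetical, and the bookkeeping of Algorithm 3 (lines 1-3) may differ in detail.

def init_candidate_labels(node_color, label_colors):
    # Color constraint: keep only labels whose color matches the node's color,
    # and give them equal initial probabilities (as in Figure 8(a)).
    candidates = [label for label, color in label_colors.items() if color == node_color]
    p0 = 1.0 / len(candidates)
    return {label: p0 for label in candidates}

# Hypothetical colors recognized by the Image Processing module for each label.
label_colors = {1: "red", 4: "red", 8: "red", 11: "red", 3: "blue", 5: "blue"}
print(init_candidate_labels("red", label_colors))
# -> {1: 0.25, 4: 0.25, 8: 0.25, 11: 0.25}, matching the initialization in Figure 8(a)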
For a better understanding of how the label relaxation
algorithm works, we give a concrete example, exemplified in
Figure 8.

Figure 8. A step through the algorithm. After
initialization (a) and after the 1st iteration for node ni (b)

In part (a) of the figure we depict the data structures
associated with nodes ni and nj after the initialization steps
of the algorithm (lines 1-6), as well as the number of beacons
between different labels (as reported by the network, through
G(Λ,E)). As seen, the potential labels (shown inside the
vertical rectangles) are assigned to each node. Node ni can be
any of the following: 11,8,4,1. Also depicted in the figure
are the probabilities associated with each of the labels. After
initialization, all probabilities are equal.
Part (b) of Figure 8 shows the result of the first iteration
of the localization algorithm for node ni, assuming that node
nj is the first wi chosen in line 7 of Algorithm 3. By using
Equation 6, the algorithm computes the support Q(λi) for
each of the possible labels for node ni. Once the Q(λi) values
are computed, the normalizing constant, given by Equation 3,
can be obtained. The last step of the iteration is to update
the probabilities associated with all potential labels of node
ni, as given by Equation 2.
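
The iteration just described can be sketched generically as: compute a support for every candidate label of ni from the neighbor nj (Equation 6), then renormalize (Equations 2 and 3). The support function below is a simplified stand-in that weights a candidate label by the probability mass nj places on labels it has exchanged beacons with in G(Λ,E); the exact forms of Equations 2, 3 and 6 are defined earlier in the paper, and the beacon counts are hypothetical.

def relaxation_step(p_ni, p_nj, beacons):
    # One probability update for node ni using neighbor nj.
    # beacons[(a, b)] is the beacon count reported between labels a and b in G(Lambda, E).
    def support(label_i):
        # Simplified stand-in for Equation 6: probability mass that nj places on
        # labels connected to label_i in the label graph.
        return sum(p for label_j, p in p_nj.items()
                   if beacons.get((label_i, label_j), 0) > 0)

    q = {label: p * support(label) for label, p in p_ni.items()}
    z = sum(q.values()) or 1.0                            # normalizing constant (Equation 3)
    return {label: val / z for label, val in q.items()}   # update (Equation 2)

# Hypothetical beacon counts between candidate labels of ni and nj.
beacons = {(11, 12): 10, (11, 8): 12, (4, 10): 8, (1, 10): 0}
p_ni = {11: 0.25, 8: 0.25, 4: 0.25, 1: 0.25}
p_nj = {12: 0.2, 8: 0.2, 10: 0.2, 11: 0.2, 9: 0.2}
print(relaxation_step(p_ni, p_nj, beacons))  # probability concentrates on labels 11 and 4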
One interesting problem, which we explore in the
performance evaluation section, is to assess the impact the
partitioning of the color set C has on the accuracy of
localization. When the size of the coloring set is smaller than the
number of sensor nodes (as it is the case for our hybrid
connectivity/color constrained relaxation), the system designer
has the option of allowing one node to uniquely have a color
(acting as an anchor), or multiple nodes. Intuitively, by
assigning one color to more than one node, more constraints
(distributed) can be enforced.
3.3.6 Relaxation Techniques Analysis
The proposed label relaxation techniques have different
trade-offs. For our analysis of the trade-offs, we consider
the following metrics of interest: the localization time
(duration), the energy consumed (overhead), the network size
(scale) that can be handled by the technique and the
localization accuracy. The parameters of interest are the following:
the number of sensor nodes (N), the energy spent for one
aerial drop (εd), the energy spent in the network for
collecting and reporting neighbor information (εb), and the time Td
taken by a sensor node to reach the ground after being
aerially deployed. The cost comparison of the different label
relaxation techniques is shown in Table 1.
As shown, the relaxation techniques based on color and
space constraints have the lowest localization duration, zero,
for all practical purposes. The scalability of the color based
relaxation technique is, however, limited to the number of
unique color filters that can be built.

Figure 9. SensorBall with self-righting capabilities (a)
and colored CCRs (b)

The narrower the
Transfer Function Ψ(λ), the larger the number of unique colors
that can be created. The manufacturing costs, however, are
increasing as well. The scalability issue is addressed by all
other label relaxation techniques. Most notably, the time
constrained relaxation, which is very similar to the
color-constrained relaxation, addresses the scale issue, at a higher
deployment cost.
Criteria     Color    Connectivity    Time     Space
Duration     0        N·Tb            N·Td     0
Overhead     εd       εd + N·εb       N·εd     εd
Scale        |C|      |N|             |N|      |N|
Accuracy     High     Low             High     Medium
Table 1. Comparison of label relaxation techniques
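
The entries of Table 1 can be read as a small cost model; the sketch below encodes them directly, with placeholder parameter values (the εd, εb, Tb and Td numbers are not measurements from this paper).

def relaxation_costs(N, eps_d, eps_b, Tb, Td, num_colors):
    # Duration, energy overhead and scale of each technique, encoding Table 1.
    # N: nodes, eps_d: energy of one aerial drop, eps_b: per-node energy for
    # collecting/reporting neighbor information, Tb/Td: beaconing and descent times.
    return {
        "color":        {"duration": 0,      "overhead": eps_d,             "scale": num_colors},
        "connectivity": {"duration": N * Tb, "overhead": eps_d + N * eps_b, "scale": N},
        "time":         {"duration": N * Td, "overhead": N * eps_d,         "scale": N},
        "space":        {"duration": 0,      "overhead": eps_d,             "scale": N},
    }

# Placeholder parameters: 100 nodes, 160 s beaconing period, 30 s descent time.
for name, cost in relaxation_costs(N=100, eps_d=1.0, eps_b=0.01, Tb=160, Td=30,
                                   num_colors=50).items():
    print(name, cost)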
4 System Implementation
The StarDust localization framework, depicted in Figure
2, is flexible in that it enables the development of new
localization systems, based on the four proposed label
relaxation schemes, or the inclusion of other, yet to be invented,
schemes. For our performance evaluation we implemented a
version of the StarDust framework, namely the one proposed
in Section 3.3.5, where the constraints are based on color and
connectivity.
The Central device of the StarDust system consists of the
following: the Light Emitter - we used a common off-the-shelf
flashlight (QBeam, 3 million candlepower); the
image acquisition was done with a 3 megapixel digital camera
(Sony DSC-S50) which provided the input to the Image
Processing algorithm, implemented in Matlab.
For sensor nodes we built a custom sensor node, called
SensorBall, with self-righting capabilities, shown in Figure
9(a). The self-righting capabilities are necessary in order to
orient the CCR predominantly upwards. The CCRs that we
used were inexpensive, plastic molded, night time warning
signs commonly available on bicycles, as shown in Figure
9(b). We remark here on the low quality of the CCRs we used.
The reflectivity of each CCR (there are tens molded in the
plastic container) is extremely low, and each CCR is not built
with mirrors. A reflective effect is achieved by employing
finely polished plastic surfaces. We had 5 colors available,
in addition to the standard CCR, which reflects all the
incoming light (white CCR). For a slightly higher price (ours
were 20 cents/piece), better quality CCRs can be employed.
Figure 10. The field in the dark

Figure 11. The illuminated field

Figure 12. The difference between Figure 10 and Figure 11
Higher quality (better mirrors) would translate into more
accurate image processing (better sensor node detection) and
smaller form factor for the optical component (an array of
CCRs with a smaller area can be used).
The sensor node platform we used was the micaZ mote.
The code that runs on each node is a simple application
which broadcasts 100 beacons and maintains a neighbor
table containing, for each neighbor, the percentage of
successfully received beacons. On demand, the neighbor table is
reported to a base station, where the node ID mapping is
performed.
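
The per-node logic is small enough to sketch. The actual code runs on the micaZ mote; the Python below only mirrors its bookkeeping of beacon receptions, and the neighbor IDs in the usage example are hypothetical.

BEACONS_SENT = 100  # each node broadcasts 100 beacons

class NeighborTable:
    # Tracks, per neighbor, the percentage of its beacons successfully received.
    def __init__(self):
        self.received = {}

    def on_beacon(self, neighbor_id):
        self.received[neighbor_id] = self.received.get(neighbor_id, 0) + 1

    def report(self):
        # Returned on demand to the base station, where node ID mapping is performed.
        return {nid: 100.0 * n / BEACONS_SENT for nid, n in self.received.items()}

table = NeighborTable()
for nid in [7, 7, 9, 7, 9]:      # hypothetical beacon receptions
    table.on_beacon(nid)
print(table.report())            # {7: 3.0, 9: 2.0}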
5 System Evaluation
In this section we present the performance evaluation of
a system implementation of the StarDust localization
framework. The three major research questions that our evaluation
tries to answer are: the feasibility of the proposed framework
(can sensor nodes be optically detected at large distances),
the localization accuracy of one actual implementation of the
StarDust framework, and whether or not atmospheric
conditions can affect the recognition of sensor nodes in an
image. The first two questions are investigated by evaluating
the two main components of the StarDust framework: the
Image Processing and the Node ID Matching. These
components have been evaluated separately, mainly because of
a lack of adequate facilities. We wanted to evaluate the
performance of the Image Processing Algorithm in a long range,
realistic, experimental set-up, while the Node ID Matching
required a relatively large area, available for long periods of
time (for connectivity data gathering). The third research
question is investigated through a computer modeling of
atmospheric phenomena.
For the evaluation of the Image Processing module, we
perform experiments in a football stadium where we
deploy 6 sensor nodes in a 3×2 grid. The distance between the
Central device and the sensor nodes is approximately 500 ft.
The metrics of interest are the number of false positives and
false negatives in the Image Processing Algorithm.
For the evaluation of the Node ID Mapping component,
we deploy 26 sensor nodes in a 120 × 60 ft2 flat area of
a stadium. In order to investigate the influence the radio
connectivity has on localization accuracy, we vary the height
above ground of the deployed sensor nodes. Two set-ups are
used: one in which the sensor nodes are on the ground, and
the second one, in which the sensor nodes are raised 3 inches
above ground. From here on, we will refer to these two
experimental set-ups as the low connectivity and the high
connectivity networks, respectively, because when nodes are
on the ground the communication range is low, resulting in
fewer neighbors than when the nodes are elevated and have a
greater communication range. The metrics of interest are:
the localization error (defined as the distance between the
computed location and the true location - known from the
manual placement), the percentage of nodes correctly
localized, the convergence of the label relaxation algorithm, the
time to localize and the robustness of the node ID mapping
to errors in the Image Processing module.
The parameters that we vary experimentally are: the
angle under which images are taken, the focus of the camera,
and the degree of connectivity. The parameters that we vary
in simulations (subsequent to image acquisition and
connectivity collection) are: the number of colors, the number of
anchors, the number of false positives or negatives as input
to the Node ID Matching component, the distance between
the imaging device and sensor network (i.e., range),
atmospheric conditions (light attenuation coefficient) and CCR
reflectance (indicative of its quality).
5.1 Image Processing
For the IPA evaluation, we deploy 6 sensor nodes in a
3 × 2 grid. We take 13 sets of pictures using different
orientations of the camera and different zooming factors. All
pictures were taken from the same location. Each set is
composed of a picture taken in the dark and of a picture taken
with a light beam pointed at the nodes. We process the
pictures offline using a Matlab implementation of IPA. Since we
are interested in the feasibility of identifying colored sensor
nodes at large distances, the end result of our IPA is the 2D
location of sensor nodes (position in the image). The
transformation to 3D coordinates can be done through standard
computer graphics techniques [23].
One set of pictures obtained as part of our experiment is
shown in Figures 10 and 11. The execution of our IPA
results in Figure 12, which filters out the background,
and Figure 13, which shows the output of the edge detection
step of IPA. The experimental results are depicted in
Figure 14. For each set of pictures the graph shows the number
of false positives (the IPA determines that there is a node
while there is none), and the number of false negatives (the
IPA determines that there is no node while there is one).

Figure 13. Retroreflectors detected in Figure 12

Figure 14. False Positives and Negatives for the 6 nodes

In
about 45% of the cases, we obtained perfect results, i.e., no
false positives and no false negatives. In the remaining cases,
we obtained at most one false positive and at most two false
negatives.
We exclude two pairs of pictures from Figure 14. In the
first excluded pair, we obtain 42 false positives and in the
second pair 10 false positives and 7 false negatives. By
carefully examining the pictures, we realized that the first pair
was taken out of focus and that a car temporarily appeared
in one of the pictures of the second pair. The anomaly in
the second set was due to the fact that we waited too long to
take the second picture. If the pictures had been taken a few
milliseconds apart, the car would have been present in
either both or neither of the pictures and the IPA would have
filtered it out.
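
The differencing step that produced Figure 12 (and that would have suppressed the transient car had the two pictures been taken closer together) can be sketched as follows. This is a simplified stand-in for the Matlab IPA, which additionally performs edge detection and color recognition; the threshold value and the synthetic frames are assumptions.

import numpy as np

def detect_bright_spots(dark_img, lit_img, threshold=60):
    # Subtract the dark frame from the illuminated frame; pixels that brightened
    # significantly are candidate retroreflector locations (cf. Figure 12).
    diff = lit_img.astype(np.int16) - dark_img.astype(np.int16)
    mask = diff > threshold          # static background present in both frames cancels out
    ys, xs = np.nonzero(mask)
    return list(zip(xs.tolist(), ys.tolist()))   # 2D pixel positions of candidate nodes

# Hypothetical 8-bit grayscale frames of the same scene.
dark = np.zeros((120, 160), dtype=np.uint8)
lit = dark.copy()
lit[40, 80] = 200                      # one retroreflector lighting up
print(detect_bright_spots(dark, lit))  # [(80, 40)]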
5.2 Node ID Matching
We evaluate the Node ID Matching component of our
system by collecting empirical data (connectivity information)
from the outdoor deployment of 26 nodes in the 120×60 ft2
area. We collect 20 sets of data for the high connectivity
and low connectivity network deployments. Off-line, we
investigate the influence of coloring on the metrics of interest
by randomly assigning colors to the sensor nodes. For one
experimental data set we generate 50 random assignments
of colors to sensor nodes. It is important to observe that, for
the evaluation of the Node ID Matching algorithm (color and
connectivity constrained), we simulate the color assignment
to sensor nodes. As mentioned in Section 4 the size of the
coloring space available to us was 5 (5 colors). Through
simulations of color assignment (not connectivity) we are able
to investigate the influence that the size of the coloring space
has on the accuracy of localization. The value of the
parameter ε used in Algorithm 2 was 0.001.

Figure 15. The number of existing and missing radio
connections in the sparse connectivity experiment

Figure 16. The number of existing and missing radio
connections in the high connectivity experiment

The results presented
here represent averages over the randomly generated
colorings and over all experimental data sets.
We first investigate the accuracy of our proposed Radio
Model, and subsequently use the derived values for the radio
range in the evaluation of the Node ID matching component.
5.2.1 Radio Model
From experiments, we obtain an average number of observed
beacons (k, defined in Section 3.3.2) of 180 for the low
connectivity network and 420 for the high connectivity
network. From our Radio Model
(Equation 7), we obtain a radio range R = 25 ft for the low
connectivity network and R = 40 ft for the high connectivity
network.
To estimate the accuracy of our simple model, we plot
the number of radio links that exist in the networks, and the
number of links that are missing, as functions of the distance
between nodes. The results are shown in Figures 15 and 16.
We define the average radio range R to be the distance over
which less than 20% of potential radio links are missing.
As shown in Figure 15, the radio range is between 20 ft and
25 ft. For the higher connectivity network, the radio range
was between 30 ft and 40 ft.
We choose two conservative estimates of the radio range:
20 ft for the low connectivity case and 35 ft for the high
connectivity case, which are in good agreement with the values
predicted by our Radio Model.
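
The 20%-missing-links rule can be computed directly from the (distance, connected) pairs plotted in Figures 15 and 16; in the sketch below the bin width and the sample measurements are assumptions.

def estimate_radio_range(links, bin_width=5.0, max_missing=0.2):
    # links: list of (distance_ft, connected) for every node pair. Returns the
    # largest distance bin in which fewer than max_missing of the potential
    # links are missing -- the average radio range R.
    bins = {}
    for dist, connected in links:
        b = int(dist // bin_width)
        total, missing = bins.get(b, (0, 0))
        bins[b] = (total + 1, missing + (0 if connected else 1))

    radio_range = 0.0
    for b in sorted(bins):
        total, missing = bins[b]
        if missing / total < max_missing:
            radio_range = (b + 1) * bin_width
        else:
            break
    return radio_range

# Hypothetical pairwise measurements (distance in feet, link observed or not).
sample = [(3, True), (8, True), (12, True), (17, True), (22, False),
          (23, True), (27, False), (33, False), (41, False)]
print(estimate_radio_range(sample))   # 20.0 ft with this sample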
5.2.2 Localization Error vs. Coloring Space Size
In this experiment we investigate the effect of the number
of colors on the localization accuracy. For this, we randomly
assign colors from a pool of a given size, to the sensor nodes.
Figure 17. Localization error
Figure 18. Percentage of nodes correctly localized
We then execute the localization algorithm, which uses the
empirical data. The algorithm is run for three different radio
ranges: 15, 20 and 25 ft, to investigate its influence on the
localization error.
The results are depicted in Figure 17 (localization error)
and Figure 18 (percentage of nodes correctly localized). As
shown, for an estimate of 20 ft for the radio range (as
predicted by our Radio Model) we obtain the smallest
localization errors, as small as 2 ft, when enough colors are used.
Both Figures 17 and 18 confirm our intuition that a larger
number of available colors significantly decreases the error in
localization.
The well-known fact that relaxation algorithms do not
always converge was observed during our experiments. The
percentage of successful runs (when the algorithm
converged) is depicted in Figure 19. As shown, in several
situations, the algorithm failed to converge (the algorithm
execution was stopped after 100 iterations per node). If the
algorithm does not converge in a predetermined number of steps,
it terminates and the label with the highest probability
provides the identity of the node. It is very probable that
the chosen label is incorrect, since the probabilities of some
of the labels are constantly changing (with each iteration). The
convergence of relaxation-based algorithms is a well-known
issue.
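
A sketch of the outer loop implied by this discussion is shown below: iterate until the probabilities stop changing by more than ε, or give up after 100 iterations and fall back to the most probable label. The exact stopping criterion of Algorithm 2 may differ; the element-wise change test and the identity step in the usage example are assumptions.

def run_relaxation(p, step, eps=0.001, max_iterations=100):
    # Iterate until the probabilities change by less than eps, or give up after
    # max_iterations; either way, fall back to the most probable label.
    for _ in range(max_iterations):
        p_next = step(p)
        converged = max(abs(p_next[l] - p[l]) for l in p) < eps
        p = p_next
        if converged:
            break
    return max(p, key=p.get), p   # label with the highest probability, final distribution

# Usage with an identity step (hypothetical), which converges immediately.
label, final = run_relaxation({11: 0.25, 8: 0.25, 4: 0.25, 1: 0.25}, step=lambda p: p)
print(label, final)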
5.2.3 Localization Error vs. Color Uniqueness
As mentioned in Section 3.3.1, a unique color gives a
sensor node the status of an anchor. A sensor node that is
an anchor can unequivocally be identified through the Image
Processing module. In this section we investigate the effect
unique colors have on the localization accuracy. Specifically,
we want to experimentally verify our intuition that assigning
more nodes to a color can benefit the localization accuracy,
by enforcing more constraints, as opposed to uniquely
assigning a color to a single node.
Figure 19. Convergence error
Figure 20. Localization error vs. number of colors
For this, we fix the number of available colors to either 4,
6 or 8 and vary the number of nodes that are given unique
colors, from 0, up to the maximum number of colors (4, 6 or
8). Naturally, if we have a maximum number of colors of 4,
we can assign at most 4 anchors. The experimental results
are depicted in Figure 20 (localization error) and Figure 21
(percentage of sensor nodes correctly localized). As expected,
the localization accuracy increases with the increase in the
number of colors available (larger coloring space). Also, for
a given size of the coloring space (e.g., 6 colors available), if
more colors are uniquely assigned to sensor nodes then the
localization accuracy decreases. It is interesting to observe
that by assigning colors uniquely to nodes, the benefit of
having additional colors is diminished. Specifically, if 8 colors
are available and all are assigned uniquely, the system would
be less accurately localized (error ≈ 7 ft), when compared
to the case of 6 colors and no unique assignments of colors
(≈ 5 ft localization error).
The same trend, of a less accurate localization can be
observed in Figure 21, which shows the percentage of nodes
correctly localized (i.e., 0 ft localization error). As shown, if
we increase the number of colors that are uniquely assigned,
the percentage of nodes correctly localized decreases.
5.2.4 Localization Error vs. Connectivity
We collected empirical data for two network deployments
with different degrees of connectivity (high and low) in
order to assess the influence of connectivity on location
accuracy. The results obtained from running our localization
algorithm are depicted in Figure 22 and Figure 23. We
varied the number of colors available and assigned no anchors
(i.e., no unique assignments of colors).
In both scenarios, as expected, the localization error decreases
with an increase in the number of colors. It is interesting
to observe, however, that the low connectivity scenario
improves the localization accuracy more quickly from the
additional number of colors available.

Figure 21. Percentage of nodes correctly localized vs.
number of colors

Figure 22. Localization error vs. number of colors

When the number of colors
becomes relatively large (twelve for our 26 sensor node
network), both scenarios (low and high connectivity) have
comparable localization errors, of less that 2 ft. The same trend
of more accurate location information is evidenced by
Figure 23 which shows that the percentage of nodes that are
localized correctly grows more quickly for the low connectivity
deployment.
5.3 Localization Error vs. Image Processing
Errors
So far we have investigated the sources of error in
localization that are intrinsic to the Node ID Matching component.
As previously presented, luminous objects can be
mistakenly detected as sensor nodes during the location
detection phase of the Image Processing module. These false
positives can be eliminated by the color recognition procedure
of the Image Processing module. More problematic are false
negatives (when a sensor node does not reflect back enough
light to be detected). They need to be handled by the
localization algorithm. In this case, the localization algorithm
is presented with two sets of nodes of different sizes, that
need to be matched: one coming from the Image Processing
(which misses some nodes) and one coming from the
network, with the connectivity information (here we assume a
fully connected network, so that all sensor nodes report their
connectivity information). In this experiment we investigate
how Image Processing errors (false negatives) influence the
localization accuracy.
For this evaluation, we ran our localization algorithm with
empirical data, but dropped a percentage of nodes from the
list of nodes detected by the Image Processing algorithm (we
artificially introduced false negatives in the Image
Processing).

Figure 23. Percentage of nodes correctly localized

Figure 24. Impact of false negatives on the localization
error

The effect of false negatives on localization accuracy is
depicted in Figure 24. As seen in the figure, if the number of
false negatives is 15%, the error in position estimation
doubles when 4 colors are available. It is interesting to observe
that the scenario in which more colors are available (e.g., 12
colors) is affected more drastically than the scenario with
fewer colors (e.g., 4 colors). The benefit of having more colors
available is still being maintained, at least for the range of
colors we investigated (4 through 12 colors).
5.4 Localization Time
In this section we look more closely at the duration for
each of the four proposed relaxation techniques and two
combinations of them: color-connectivity and color-time.
We assume that 50 unique color filters can be manufactured,
that the sensor network is deployed from 2,400 ft
(necessary for the time-constrained relaxation) and that the time
required for reporting connectivity grows linearly, with an
initial reporting period of 160 sec, as used in a real-world
tracking application [1]. The localization duration results, as
presented in Table 1, are depicted in Figure 25.
As shown, for all practical purposes the time required
by the space constrained relaxation technique is 0 sec. The
same applies to the color constrained relaxation, for which
the localization time is 0 sec (if the number of colors is
sufficient). Considering our assumptions, the color constrained
relaxation works only for a network of size 50. The
localization duration for all other network sizes (100, 150 and 200)
is infinite (i.e., unique color assignments to sensor nodes
cannot be made, since only 50 colors are unique) when
only color constrained relaxation is used. Both the
connectivity constrained and time constrained techniques increase
linearly with the network size (for the time constrained, the
Central device deploys sensor nodes one by one, recording
an image after the time a sensor node is expected to reach the
ground).

Figure 25. Localization time for different label relaxation
schemes

Figure 26. Apparent contrast in a clear atmosphere

Figure 27. Apparent contrast in a hazy atmosphere
It is interesting to notice in Figure 25 the improvement in
the localization time obtained by simply combining the color
and the connectivity constrained techniques. The
localization duration in this case is identical to that of the
connectivity constrained technique.
The combination of color and time constrained
relaxations is even more interesting. For a reasonable
localization duration of 52 seconds, a perfect (i.e., 0 ft localization
error) localization system can be built. In this scenario, the
set of sensor nodes is split in batches, with each batch
having a set of unique colors. It would be very interesting to
consider other scenarios where the strength of the space
constrained relaxation (0 sec for any sensor network size) is
used for improving the other proposed relaxation techniques.
We leave the investigation and rigorous classification of such
technique combinations for future work.
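
As a rough illustration of the color-time combination described above, the sketch below splits the nodes into batches that each fit within the set of unique colors, so the duration grows with the number of batches; the descent time and the node counts are placeholder values, not the figures behind the 52-second result.

import math

def color_time_duration(num_nodes, num_colors, descent_time):
    # Nodes are released in batches; within a batch every node has a unique color,
    # so each batch is localized from one image taken descent_time after its release.
    batches = math.ceil(num_nodes / num_colors)
    return batches * descent_time

# Placeholder values: 50 unique colors, an assumed descent time of 26 s.
for n in (50, 100, 150, 200):
    print(n, color_time_duration(n, num_colors=50, descent_time=26))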
5.5 System Range
In this section we evaluate the feasibility of the
StarDust localization framework when considering the realities
of light propagation through the atmosphere.
The main factor that determines the range of our system is
light scattering, which redirects the luminance of the source
into the medium (in essence equally affecting the luminosity
of the target and of the background). Scattering limits the
visibility range by reducing the apparent contrast between
the target and its background (which approaches zero as the
distance increases). The apparent contrast Cr is quantitatively
expressed by the formula:
Cr = (N^t_r − N^b_r) / N^b_r    (10)
where N^t_r and N^b_r are the apparent target radiance and
apparent background radiance at distance r from the light source,
respectively. The apparent radiance N^t_r of a target at a
distance r from the light source is given by:

N^t_r = Na + (I ρt e^{−2σr}) / (π r^2)    (11)
where I is the intensity of the light source, ρt is the
target reflectance, σ is the spectral attenuation coefficient (≈
0.12 km^{−1} and ≈ 0.60 km^{−1} for a clear and a hazy
atmosphere, respectively) and Na is the radiance of the
atmospheric backscatter, and it can be expressed as follows:
Na = (G σ^2 I / 2π) ∫_{0.02σr}^{2σr} (e^{−x} / x^2) dx    (12)
where G = 0.24 is a backscatter gain. The apparent
background radiance N^b_r is given by formulas similar to
Equations 11 and 12, where only the target reflectance ρt is
substituted with the background reflectance ρb. It is important
to remark that when Cr reaches its lower limit, no increase
in the source luminance or receiver sensitivity will increase
the range of the system. From Equations 11 and 12 it can be
observed that the parameter which can be controlled and can
influence the range of the system is ρt, the target reflectance.
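
Equations 10-12 can be evaluated numerically to reproduce curves like those in Figures 26 and 27. The sketch below approximates the backscatter integral with a trapezoidal rule; the source intensity, the background reflectance ρb, and the choice of units (r expressed in the same length unit as 1/σ) are illustrative assumptions.

import math

def backscatter(I, sigma, r, G=0.24, steps=2000):
    # Na of Equation 12: (G*sigma^2*I / 2*pi) times the integral of exp(-x)/x^2
    # over [0.02*sigma*r, 2*sigma*r], approximated with the trapezoidal rule.
    a, b = 0.02 * sigma * r, 2.0 * sigma * r
    h = (b - a) / steps
    def f(x):
        return math.exp(-x) / x**2
    area = h * ((f(a) + f(b)) / 2.0 + sum(f(a + i * h) for i in range(1, steps)))
    return (G * sigma**2 * I / (2 * math.pi)) * area

def apparent_radiance(I, rho, sigma, r):
    # Equation 11, with rho standing for either the target (rho_t)
    # or the background (rho_b) reflectance.
    return backscatter(I, sigma, r) + I * rho * math.exp(-2 * sigma * r) / (math.pi * r**2)

def apparent_contrast(I, rho_t, rho_b, sigma, r):
    # Equation 10.
    nb = apparent_radiance(I, rho_b, sigma, r)
    return (apparent_radiance(I, rho_t, sigma, r) - nb) / nb

# Illustrative values: clear atmosphere (sigma ~ 0.12 km^-1), rho_b assumed to be 0.1.
for r_km in (0.5, 1.0, 1.5):
    print(r_km, round(apparent_contrast(I=1.0, rho_t=0.8, rho_b=0.1, sigma=0.12, r=r_km), 3))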
Figures 26 and 27 depict the apparent contrast Cr as a
function of the distance r for a clear and for a hazy
atmosphere, respectively. The apparent contrast is investigated for
reflectance coefficients ρt ranging from 0.3 to 1.0 (perfect
reflector). For a contrast Cr of at least 0.5, as can be seen in
Figure 26, a range of approximately 4,500 ft can be achieved
if the atmosphere is clear. The performance dramatically
deteriorates, when the atmospheric conditions are problematic.
As shown in Figure 27 a range of up to 1,500 ft is
achievable, when using highly reflective CCR components.
While our light source (3 million candlepower) was
sufficient for a range of a few hundred feet, we remark that there
exist commercially available light sources (20 million
candlepower) and military ones (150 million candlepower [27]) that
are powerful enough for ranges of a few thousand feet.
6 StarDust System Optimizations
In this section we describe extensions of the proposed
architecture that can constitute future research directions.
6.1 Chained Constraint Primitives
In this paper we proposed four primitives for constraint-based
relaxation algorithms: color, connectivity, time and
space. To demonstrate the power that can be obtained by
combining them, we proposed and evaluated one
combination of such primitives: color and connectivity. An
interesting research direction to pursue could be to chain more than
two of these primitives. An example of such chain is: color,
temporal, spatial and connectivity. Other research directions
could be to use a voting scheme for deciding which primitive
to use, or to assign different weights to different relaxation
algorithms.
6.2 Location Learning
If, after several iterations of the algorithm, none of the
label probabilities for a node ni converges to a high value, the
confidence in our labeling of that node is relatively low. It
would be interesting to associate more than one label (and thus
more than one location) with such a node and defer the label
assignment decision until events are detected in the network
(if the network was deployed for target tracking).
6.3 Localization in Rugged Environments
The initial driving force for the StarDust localization
framework was to address the sensor node localization in
extremely rugged environments. Canopies, dense vegetation,
and other extremely obstructing environments pose significant
challenges for sensor node localization. The hope, and our
original idea, was to consider the time period between the aerial
deployment and the time when the sensor node disappears
under the canopy. By recording the last visible position of a
sensor node (as seen from the aircraft) a reasonable estimate
of the sensor node location can be obtained. This would
require that sensor nodes possess self-righting capabilities,
while in mid-air. Nevertheless, we remark on the suitability
of our localization framework for rugged, non-line-of-sight
environments.
7 Conclusions
StarDust solves the localization problem for aerial
deployments where passiveness, low cost, small form factor
and rapid localization are required. Results show that
accuracy can be within 2 ft and localization time within
milliseconds. StarDust also shows robustness with respect to errors.
We predict the influence that atmospheric conditions can have
on the range of a system based on the StarDust framework,
and show that hazy environments or daylight can pose
significant challenges.
Most importantly, the properties of StarDust support
the potential for even more accurate localization solutions
as well as solutions for rugged, non-line-of-sight
environments.
8 References
[1] T. He, S. Krishnamurthy, J. A. Stankovic, T. Abdelzaher, L. Luo,
R. Stoleru, T. Yan, L. Gu, J. Hui, and B. Krogh, An energy-efficient
surveillance system using wireless sensor networks, in MobiSys, 2004.
[2] G. Simon, M. Maroti, A. Ledeczi, G. Balogh, B. Kusy, A. Nadas,
G. Pap, J. Sallai, and K. Frampton, Sensor network-based
countersniper system, in SenSys, 2004.
[3] A. Arora, P. Dutta, and B. Bapat, A line in the sand: A wireless sensor
network for target detection, classification and tracking, in Computer
Networks, 2004.
[4] R. Szewczyk, A. Mainwaring, J. Polastre, J. Anderson, and D. Culler,
An analysis of a large scale habitat monitoring application, in ACM
SenSys, 2004.
[5] N. Xu, S. Rangwala, K. K. Chintalapudi, D. Ganesan, A. Broad,
R. Govindan, and D. Estrin, A wireless sensor network for structural
monitoring, in ACM SenSys, 2004.
[6] A. Savvides, C. Han, and M. Srivastava, Dynamic fine-grained
localization in ad-hoc networks of sensors, in Mobicom, 2001.
[7] N. Priyantha, A. Chakraborty, and H. Balakrishnan, The cricket
location-support system, in Mobicom, 2000.
[8] M. Broxton, J. Lifton, and J. Paradiso, Localizing a sensor network
via collaborative processing of global stimuli, in EWSN, 2005.
[9] P. Bahl and V. N. Padmanabhan, Radar: An in-building rf-based user
location and tracking system, in IEEE Infocom, 2000.
[10] N. Priyantha, H. Balakrishnan, E. Demaine, and S. Teller,
Mobile-assisted topology generation for auto-localization in sensor networks,
in IEEE Infocom, 2005.
[11] P. N. Pathirana, A. Savkin, S. Jha, and N. Bulusu, Node localization
using mobile robots in delay-tolerant sensor networks, IEEE
Transactions on Mobile Computing, 2004.
[12] C. Savarese, J. M. Rabaey, and J. Beutel, Locationing in distributed
ad-hoc wireless sensor networks, in ICASSP, 2001.
[13] M. Maroti, B. Kusy, G. Balogh, P. Volgyesi, A. Nadas, K. Molnar,
S. Dora, and A. Ledeczi, Radio interferometric geolocation, in ACM
SenSys, 2005.
[14] K. Whitehouse, A. Woo, C. Karlof, F. Jiang, and D. Culler, The
effects of ranging noise on multi-hop localization: An empirical study,
in IPSN, 2005.
[15] Y. Kwon, K. Mechitov, S. Sundresh, W. Kim, and G. Agha, Resilient
localization for sensor networks in outdoor environment, UIUC, Tech.
Rep., 2004.
[16] R. Stoleru and J. A. Stankovic, Probability grid: A location
estimation scheme for wireless sensor networks, in SECON, 2004.
[17] N. Bulusu, J. Heidemann, and D. Estrin, GPS-less low cost outdoor
localization for very small devices, IEEE Personal Communications
Magazine, 2000.
[18] T. He, C. Huang, B. Blum, J. A. Stankovic, and T. Abdelzaher,
Range-Free localization schemes in large scale sensor networks, in
ACM Mobicom, 2003.
[19] R. Nagpal, H. Shrobe, and J. Bachrach, Organizing a global
coordinate system from local information on an ad-hoc sensor network, in
IPSN, 2003.
[20] D. Niculescu and B. Nath, Ad-hoc positioning system, in IEEE
GLOBECOM, 2001.
[21] R. Stoleru, T. He, J. A. Stankovic, and D. Luebke, A high-accuracy
low-cost localization system for wireless sensor networks, in ACM
SenSys, 2005.
[22] K. Römer, The lighthouse location system for smart dust, in
ACM/USENIX MobiSys, 2003.
[23] R. Y. Tsai, A versatile camera calibration technique for
high-accuracy 3D machine vision metrology using off-the-shelf TV cameras
and lenses, IEEE JRA, 1987.
[24] C. L. Archer and M. Z. Jacobson, Spatial and temporal distributions
of U.S. winds and wind power at 80m derived from measurements,
Geophysical Research Jrnl., 2003.
[25] Team for advanced flow simulation and modeling. [Online].
Available: http://www.mems.rice.edu/TAFSM/RES/
[26] K. Stein, R. Benney, T. Tezduyar, V. Kalro, and J. Leonard, 3-D
computation of parachute fluid-structure interactions - performance and
control, in Aerodynamic Decelerator Systems Conference, 1999.
[27] Headquarters Department of the Army, Technical manual for
searchlight infrared AN/GSS-14(V)1, 1982.
train_C-46 | TSAR: A Two Tier Sensor Storage Architecture Using Interval Skip Graphs | Archival storage of sensor data is necessary for applications that query, mine, and analyze such data for interesting features and trends. We argue that existing storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks, where an application can comprise tens of tethered proxies, each managing tens to hundreds of untethered sensors. We present TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local archiving at the sensors and distributed indexing at the proxies. At the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Interval Skip Graph, for efficiently supporting spatio-temporal and value queries. At the sensor tier, TSAR supports energy-aware adaptive summarization that can trade off the cost of transmitting metadata to the proxies against the overhead of false hits resulting from querying a coarse-grain index. We implement TSAR in a two-tier sensor testbed comprising Stargatebased proxies and Mote-based sensors. Our experiments demonstrate the benefits and feasibility of using our energy-efficient storage architecture in multi-tier sensor networks. | 1. Introduction
1.1 Motivation
Many different kinds of networked data-centric sensor
applications have emerged in recent years. Sensors in these applications
sense the environment and generate data that must be processed,
filtered, interpreted, and archived in order to provide a useful
infrastructure to its users. To achieve its goals, a typical sensor
application needs access to both live and past sensor data. Whereas
access to live data is necessary in monitoring and surveillance
applications, access to past data is necessary for applications such as
mining of sensor logs to detect unusual patterns, analysis of
historical trends, and post-mortem analysis of particular events. Archival
storage of past sensor data requires a storage system, the key
attributes of which are: where the data is stored, whether it is indexed,
and how the application can access this data in an energy-efficient
manner with low latency.
There have been a spectrum of approaches for constructing
sensor storage systems. In the simplest, sensors stream data or events
to a server for long-term archival storage [3], where the server
often indexes the data to permit efficient access at a later time. Since
sensors may be several hops from the nearest base station, network
costs are incurred; however, once data is indexed and archived,
subsequent data accesses can be handled locally at the server without
incurring network overhead. In this approach, the storage is
centralized, reads are efficient and cheap, while writes are expensive.
Further, all data is propagated to the server, regardless of whether
it is ever used by the application.
An alternate approach is to have each sensor store data or events
locally (e.g., in flash memory), so that all writes are local and incur
no communication overheads. A read request, such as whether an
event was detected by a particular sensor, requires a message to
be sent to the sensor for processing. More complex read requests
are handled by flooding. For instance, determining if an intruder
was detected over a particular time interval requires the request to
be flooded to all sensors in the system. Thus, in this approach,
the storage is distributed, writes are local and inexpensive, while
reads incur significant network overheads. Requests that require
flooding, due to the lack of an index, are expensive and may waste
precious sensor resources, even if no matching data is stored at
those sensors. Research efforts such as Directed Diffusion [17]
have attempted to reduce these read costs, however, by intelligent
message routing.
Between these two extremes lie a number of other sensor storage
systems with different trade-offs, summarized in Table 1. The
geographic hash table (GHT) approach [24, 26] advocates the use of
an in-network index to augment the fully distributed nature of
sensor storage. In this approach, each data item has a key associated
with it, and a distributed or geographic hash table is used to map
keys to nodes that store the corresponding data items. Thus, writes
cause data items to be sent to the hashed nodes and also trigger
updates to the in-network hash table. A read request requires a lookup
in the in-network hash table to locate the node that stores the data
39
item; observe that the presence of an index eliminates the need for
flooding in this approach.
Most of these approaches assume a flat, homogeneous
architecture in which every sensor node is energy-constrained. In this
paper, we propose a novel storage architecture called TSAR1
that
reflects and exploits the multi-tier nature of emerging sensor
networks, where the application is comprised of tens of tethered
sensor proxies (or more), each controlling tens or hundreds of
untethered sensors. TSAR is a component of our PRESTO [8] predictive
storage architecture, which combines archival storage with caching
and prediction. We believe that a fundamentally different storage
architecture is necessary to address the multi-tier nature of future
sensor networks. Specifically, the storage architecture needs to
exploit the resource-rich nature of proxies, while respecting resource
constraints at the remote sensors. No existing sensor storage
architecture explicitly addresses this dichotomy in the resource
capabilities of different tiers.
Any sensor storage system should also carefully exploit current
technology trends, which indicate that the capacities of flash
memories continue to rise as per Moore"s Law, while their costs continue
to plummet. Thus it will soon be feasible to equip each sensor with
1 GB of flash storage for a few tens of dollars. An even more
compelling argument is the energy cost of flash storage, which can be
as much as two orders of magnitude lower than that for
communication. Newer NAND flash memories offer very low write and
erase energy costs - our comparison of a 1GB Samsung NAND
flash storage [16] and the Chipcon CC2420 802.15.4 wireless radio
[4] in Section 6.2 indicates a 1:100 ratio in per-byte energy cost
between the two devices, even before accounting for network protocol
overheads. These trends, together with the energy-constrained
nature of untethered sensors, indicate that local storage offers a viable,
energy-efficient alternative to communication in sensor networks.
TSAR exploits these trends by storing data or events locally on
the energy-efficient flash storage at each sensor. Sensors send
concise identifying information, which we term metadata, to a nearby
proxy; depending on the representation used, this metadata may be
an order of magnitude or more smaller than the data itself,
imposing much lower communication costs. The resource-rich proxies
interact with one another to construct a distributed index of the
metadata reported from all sensors, and thus an index of the
associated data stored at the sensors. This index provides a unified,
logical view of the distributed data, and enables an application to
query and read past data efficiently - the index is used to
pinpoint all data that match a read request, followed by messages to
retrieve that data from the corresponding sensors. In-network
index lookups are eliminated, reducing network overheads for read
requests. This separation of data, which is stored at the sensors,
and the metadata, which is stored at the proxies, enables TSAR to
reduce energy overheads at the sensors, by leveraging resources at
tethered proxies.
1.2 Contributions
This paper presents TSAR, a novel two-tier storage architecture
for sensor networks. To the best of our knowledge, this is the first
sensor storage system that is explicitly tailored for emerging
multitier sensor networks. Our design and implementation of TSAR has
resulted in four contributions.
At the core of the TSAR architecture is a novel distributed index
structure based on interval skip graphs that we introduce in this
paper. This index structure can store coarse summaries of sensor
data and organize them in an ordered manner to be easily
search1
TSAR: Tiered Storage ARchitecture for sensor networks.
able. This data structure has O(log n) expected search and update
complexity. Further, the index provides a logically unified view of
all data in the system.
Second, at the sensor level, each sensor maintains a local archive
that stores data on flash memory. Our storage architecture is fully
stateless at each sensor from the perspective of the metadata index;
all index structures are maintained at the resource-rich proxies, and
only direct requests or simple queries on explicitly identified
storage locations are sent to the sensors. Storage at the remote sensor
is in effect treated as appendage of the proxy, resulting in low
implementation complexity, which makes it ideal for small,
resourceconstrained sensor platforms. Further, the local store is optimized
for time-series access to archived data, as is typical in many
applications. Each sensor periodically sends a summary of its data to a
proxy. TSAR employs a novel adaptive summarization technique
that adapts the granularity of the data reported in each summary to
the ratio of false hits for application queries. More fine grain
summaries are sent whenever more false positives are observed, thereby
balancing the energy cost of metadata updates and false positives.
Third, we have implemented a prototype of TSAR on a multi-tier
testbed comprising Stargate-based proxies and Mote-based sensors.
Our implementation supports spatio-temporal, value, and
rangebased queries on sensor data.
Fourth, we conduct a detailed experimental evaluation of TSAR
using a combination of EmStar/EmTOS [10] and our prototype.
While our EmStar/EmTOS experiments focus on the scalability of
TSAR in larger settings, our prototype evaluation involves latency
and energy measurements in a real setting. Our results demonstrate
the logarithmic scaling property of the sparse skip graph and the
low latency of end-to-end queries in a duty-cycled multi-hop
network .
The remainder of this paper is structured as follows. Section 2
presents key design issues that guide our work. Section 3 and 4
present the proxy-level index and the local archive and
summarization at a sensor, respectively. Section 5 discusses our prototype
implementation, and Section 6 presents our experimental results. We
present related work in Section 7 and our conclusions in Section 8.
2. Design Considerations
In this section, we first describe the various components of a
multi-tier sensor network assumed in our work. We then present a
description of the expected usage models for this system, followed
by several principles addressing these factors which guide the
design of our storage system.
2.1 System Model
We envision a multi-tier sensor network comprising multiple tiers
- a bottom tier of untethered remote sensor nodes, a middle tier of
tethered sensor proxies, and an upper tier of applications and user
terminals (see Figure 1).
The lowest tier is assumed to form a dense deployment of
lowpower sensors. A canonical sensor node at this tier is equipped
with low-power sensors, a micro-controller, and a radio as well as
a significant amount of flash memory (e.g., 1GB). The common
constraint for this tier is energy, and the need for a long lifetime
in spite of a finite energy constraint. The use of radio, processor,
RAM, and the flash memory all consume energy, which needs to
be limited. In general, we assume radio communication to be
substantially more expensive than accesses to flash memory.
The middle tier consists of power-rich sensor proxies that have
significant computation, memory and storage resources and can use
40
Table 1: Characteristics of sensor storage systems
System Data Index Reads Writes Order preserving
Centralized store Centralized Centralized index Handled at store Send to store Yes
Local sensor store Fully distributed No index Flooding, diffusion Local No
GHT/DCS [24] Fully distributed In-network index Hash to node Send to hashed node No
TSAR/PRESTO Fully distributed Distributed index at proxies Proxy lookup + sensor query Local plus index update Yes
User
Unified Logical Store
Queries
(time, space, value)
Query
Response
Cache
Query forwarding
Proxy
Remote
Sensors
Local Data Archive
on Flash Memory
Interval
Skip Graph
Query
forwarding
summaries
start index
end index
linear
traversal
Query
Response
Cache-miss
triggered
query forwarding
summaries
Figure 1: Architecture of a multi-tier sensor network.
these resources continuously. In urban environments, the proxy tier
would comprise a tethered base-station class nodes (e.g., Crossbow
Stargate), each with with multiple radios-an 802.11 radio that
connects it to a wireless mesh network and a low-power radio (e.g.
802.15.4) that connects it to the sensor nodes. In remote sensing
applications [10], this tier could comprise a similar Stargate node
with a solar power cell. Each proxy is assumed to manage several
tens to hundreds of lower-tier sensors in its vicinity. A typical
sensor network deployment will contain multiple geographically
distributed proxies. For instance, in a building monitoring application,
one sensor proxy might be placed per floor or hallway to monitor
temperature, heat and light sensors in their vicinity.
At the highest tier of our infrastructure are applications that query
the sensor network through a query interface[20]. In this work, we
focus on applications that require access to past sensor data. To
support such queries, the system needs to archive data on a
persistent store. Our goal is to design a storage system that exploits the
relative abundance of resources at proxies to mask the scarcity of
resources at the sensors.
2.2 Usage Models
The design of a storage system such as TSAR is affected by the
queries that are likely to be posed to it. A large fraction of queries
on sensor data can be expected to be spatio-temporal in nature.
Sensors provide information about the physical world; two key
attributes of this information are when a particular event or activity
occurred and where it occurred. Some instances of such queries
include the time and location of target or intruder detections (e.g.,
security and monitoring applications), notifications of specific types
of events such as pressure and humidity values exceeding a
threshold (e.g., industrial applications), or simple data collection queries
which request data from a particular time or location (e.g., weather
or environment monitoring).
Expected queries of such data include those requesting ranges
of one or more attributes; for instance, a query for all image data
from cameras within a specified geographic area for a certain
period of time. In addition, it is often desirable to support efficient
access to data in a way that maintains spatial and temporal
ordering. There are several ways of supporting range queries, such as
locality-preserving hashes such as are used in DIMS [18].
However, the most straightforward mechanism, and one which naturally
provides efficient ordered access, is via the use of order-preserving
data structures. Order-preserving structures such as the well-known
B-Tree maintain relationships between indexed values and thus
allow natural access to ranges, as well as predecessor and successor
operations on their key values.
Applications may also pose value-based queries that involve
determining if a value v was observed at any sensor; the query
returns a list of sensors and the times at which they observed this
value. Variants of value queries involve restricting the query to a
geographical region, or specifying a range (v1, v2) rather than a
single value v. Value queries can be handled by indexing on the
values reported in the summaries. Specifically, if a sensor reports
a numerical value, then the index is constructed on these values. A
search involves finding matching values that are either contained in
the search range (v1, v2) or match the search value v exactly.
Hybrid value and spatio-temporal queries are also possible. Such
queries specify a time interval, a value range and a spatial region
and request all records that match these attributes - find all
instances where the temperature exceeded 100o
F at location R
during the month of August. These queries require an index on both
time and value.
In TSAR our focus is on range queries on value or time, with
planned extensions to include spatial scoping.
2.3 Design Principles
Our design of a sensor storage system for multi-tier networks is
based on the following set of principles, which address the issues
arising from the system and usage models above.
• Principle 1: Store locally, access globally: Current
technology allows local storage to be significantly more
energyefficient than network communication, while technology
trends show no signs of erasing this gap in the near future.
For maximum network life a sensor storage system should
leverage the flash memory on sensors to archive data locally,
substituting cheap memory operations for expensive radio
transmission. But without efficient mechanisms for retrieval,
the energy gains of local storage may be outweighed by
communication costs incurred by the application in searching for
data. We believe that if the data storage system provides
the abstraction of a single logical store to applications, as
41
does TSAR, then it will have additional flexibility to
optimize communication and storage costs.
• Principle 2: Distinguish data from metadata: Data must
be identified so that it may be retrieved by the application
without exhaustive search. To do this, we associate
metadata with each data record - data fields of known syntax
which serve as identifiers and may be queried by the storage
system. Examples of this metadata are data attributes such as
location and time, or selected or summarized data values. We
leverage the presence of resource-rich proxies to index
metadata for resource-constrained sensors. The proxies share this
metadata index to provide a unified logical view of all data in
the system, thereby enabling efficient, low-latency lookups.
Such a tier-specific separation of data storage from metadata
indexing enables the system to exploit the idiosyncrasies of
multi-tier networks, while improving performance and
functionality.
• Principle 3: Provide data-centric query support: In a sensor
application the specific location (i.e. offset) of a record in a
stream is unlikely to be of significance, except if it conveys
information concerning the location and/or time at which the
information was generated. We thus expect that applications
will be best served by a query interface which allows them
to locate data by value or attribute (e.g. location and time),
rather than a read interface for unstructured data. This in turn
implies the need to maintain metadata in the form of an index
that provides low cost lookups.
2.4 System Design
TSAR embodies these design principles by employing local
storage at sensors and a distributed index at the proxies. The key
features of the system design are as follows:
In TSAR, writes occur at sensor nodes, and are assumed to
consist of both opaque data as well as application-specific metadata.
This metadata is a tuple of known types, which may be used by the
application to locate and identify data records, and which may be
searched on and compared by TSAR in the course of locating data
for the application. In a camera-based sensing application, for
instance, this metadata might include coordinates describing the field
of view, average luminance, and motion values, in addition to basic
information such as time and sensor location. Depending on the
application, this metadata may be two or three orders of magnitude
smaller than the data itself, for instance if the metadata consists of
features extracted from image or acoustic data.
In addition to storing data locally, each sensor periodically sends
a summary of reported metadata to a nearby proxy. The summary
contains information such as the sensor ID, the interval (t1, t2)
over which the summary was generated, a handle identifying the
corresponding data record (e.g. its location in flash memory),
and a coarse-grain representation of the metadata associated with
the record. The precise data representation used in the summary
is application-specific; for instance, a temperature sensor might
choose to report the maximum and minimum temperature values
observed in an interval as a coarse-grain representation of the
actual time series.
The proxy uses the summary to construct an index; the index
is global in that it stores information from all sensors in the
system and it is distributed across the various proxies in the system.
Thus, applications see a unified view of distributed data, and can
query the index at any proxy to get access to data stored at any
sensor. Specifically, each query triggers lookups in this distributed
index and the list of matches is then used to retrieve the
corresponding data from the sensors. There are several distributed index and
lookup methods which might be used in this system; however, the
index structure described in Section 3 is highly suited for the task.
Since the index is constructed using a coarse-grain summary,
instead of the actual data, index lookups will yield approximate
matches. The TSAR summarization mechanism guarantees that
index lookups will never yield false negatives - i.e. it will never miss
summaries which include the value being searched for. However,
index lookups may yield false positives, where a summary matches
the query but when queried the remote sensor finds no matching
value, wasting network resources. The more coarse-grained the summary, the lower the update overhead but the greater the fraction of false positives; finer summaries incur higher update overhead but reduce the query overhead caused by false positives. Remote
sensors may easily distinguish false positives from queries which result
in search hits, and calculate the ratio between the two; based on this
ratio, TSAR employs a novel adaptive technique that dynamically
varies the granularity of sensor summaries to balance the metadata
overhead and the overhead of false positives.
3. Data Structures
At the proxy tier, TSAR employs a novel index structure called
the Interval Skip Graph, which is an ordered, distributed data
structure for finding all intervals that contain a particular point or range
of values. Interval skip graphs combine Interval Trees [5], an
interval-based binary search tree, with Skip Graphs [1], an ordered,
distributed data structure for peer-to-peer systems [13]. The
resulting data structure has two properties that make it ideal for
sensor networks. First, it has O(log n) search complexity for
accessing the first interval that matches a particular value or range, and
constant complexity for accessing each successive interval.
Second, indexing of intervals rather than individual values makes the
data structure ideal for indexing summaries over time or value.
Such summary-based indexing is a more natural fit for
energy-constrained sensor nodes, since transmitting summaries incurs less
energy overhead than transmitting all sensor data.
Definitions: We assume that there are Np proxies and Ns
sensors in a two-tier sensor network. Each proxy is responsible for
multiple sensor nodes, and no assumption is made about the
number of sensors per proxy. Each sensor transmits interval summaries
of data or events regularly to one or more proxies that it is
associated with, where interval i is represented as [lowi, highi]. These
intervals can correspond to time or value ranges that are used for
indexing sensor data. No assumption is made about the size of an
interval or about the amount of overlap between intervals.
Range queries on the intervals are posed by users to the network
of proxies and sensors; each query q needs to determine all index
values that overlap the interval [lowq, highq]. The goal of the
interval skip graph is to index all intervals such that the set that overlaps
a query interval can be located efficiently. In the rest of this section,
we describe the interval skip graph in greater detail.
3.1 Skip Graph Overview
In order to inform the description of the Interval Skip Graph, we
first provide a brief overview of the Skip Graph data structure; for
a more extensive description the reader is referred to [1]. Figure 2
shows a skip graph which indexes 8 keys; the keys may be seen
along the bottom, and above each key are the pointers associated
with that key. Each data element, consisting of a key and its
associated pointers, may reside on a different node in the network,
and pointers therefore identify both a remote node as well as a data element on that node.
[Figure 2: Skip Graph of 8 Elements. Eight keys (1, 7, 9, 13, 17, 21, 25, 31) along the bottom, each with pointers at levels 0-2; each skip graph element may reside on a different node, and find(21) proceeds via node-to-node messages.]
[Figure 3: Interval Skip Graph. Entries [2,5], [6,14], [9,12], [14,16], [15,23], [18,19], [20,27], [21,30] ordered by lower bound, each annotated with the cumulative maximum of the upper bounds (5, 14, 14, 16, 23, 23, 27, 30); a contains(13) search matches [6,14] and halts at the first entry whose lower bound exceeds 13.]
[Figure 4: Distributed Interval Skip Graph. The same interval entries distributed across three nodes (Node 1, Node 2, Node 3), with pointers at levels 0-2 spanning nodes.]
In Figure 2 we may see the following properties of a skip graph:
• Ordered index: The keys are members of an ordered data
type, for instance integers. Lookups make use of ordered
comparisons between the search key and existing index
entries. In addition, the pointers at the lowest level point
directly to the successor of each item in the index.
• In-place indexing: Data elements remain on the nodes
where they were inserted, and messages are sent between
nodes to establish links between those elements and others
in the index.
• Log n height: There are log2 n pointers associated with each
element, where n is the number of data elements indexed.
Each pointer belongs to a level l in [0... log2 n − 1], and
together with some other pointers at that level forms a chain
of n/2^l elements.
• Probabilistic balance: Rather than relying on re-balancing
operations which may be triggered at insert or delete, skip
graphs implement a simple random balancing mechanism
which maintains close to perfect balance on average, with
an extremely low probability of significant imbalance.
• Redundancy and resiliency: Each data element forms an
independent search tree root, so searches may begin at any
node in the network, eliminating hot spots at a single search
root. In addition the index is resilient against node failure;
data on the failed node will not be accessible, but remaining
data elements will be accessible through search trees rooted
on other nodes.
In Figure 2 we see the process of searching for a particular value
in a skip graph. The pointers reachable from a single data element
form a binary tree: a pointer traversal at the highest level skips over
n/2 elements, n/4 at the next level, and so on. Search consists
of descending the tree from the highest level to level 0, at each
level comparing the target key with the next element at that level
and deciding whether or not to traverse. In the perfectly balanced
case shown here there are log2 n levels of pointers, and search will
traverse 0 or 1 pointers at each level. We assume that each data
element resides on a different node, and measure search cost by the
number of messages sent (i.e. the number of pointers traversed); this
will clearly be O(log n).
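The level-by-level descent described above can be illustrated with a small single-machine sketch in the style of a skip list; in an actual skip graph each element would reside on a different node and every pointer traversal would be a network message. The height cap, splitting probability, and random seed below are arbitrary choices.

```python
import random

class Node:
    def __init__(self, key, height):
        self.key = key
        self.forward = [None] * height   # forward[l] = next element at level l

def build_skip_list(keys, p=0.5, max_height=8, seed=1):
    """Build a simple skip list over sorted keys (single-machine analog of Figure 2)."""
    rng = random.Random(seed)
    head = Node(float("-inf"), max_height)
    nodes = []
    for k in sorted(keys):
        h = 1
        while h < max_height and rng.random() < p:
            h += 1
        nodes.append(Node(k, h))
    # Link each level as an ordered chain over the elements tall enough to appear there.
    for level in range(max_height):
        prev = head
        for n in nodes:
            if len(n.forward) > level:
                prev.forward[level] = n
                prev = n
    return head

def find(head, target):
    """Descend from the top level, traversing 0 or more pointers per level."""
    hops = 0
    node = head
    for level in reversed(range(len(head.forward))):
        while node.forward[level] is not None and node.forward[level].key <= target:
            node = node.forward[level]
            hops += 1
    return node.key, hops   # last key <= target, and pointers traversed

head = build_skip_list([1, 7, 9, 13, 17, 21, 25, 31])
print(find(head, 21))       # locates key 21 in O(log n) expected pointer traversals
```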
Tree update proceeds from the bottom, as in a B-Tree, with the
root(s) being promoted in level as the tree grows. In this way, for
instance, the two chains at level 1 always contain n/2 entries each,
and there is never a need to split chains as the structure grows. The
update process then consists of choosing which of the 2^l chains to
insert an element into at each level l, and inserting it in the proper
place in each chain.
Maintaining a perfectly balanced skip graph as shown in
Figure 2 would be quite complex; instead, the probabilistic balancing
method introduced in Skip Lists [23] is used, which trades off a
small amount of overhead in the expected case in return for simple
update and deletion. The basis for this method is the observation
that any element which belongs to a particular chain at level l can
only belong to one of two chains at level l+1. To insert an element
we ascend levels starting at 0, randomly choosing one of the two
possible chains at each level, and stopping when we reach an empty
chain.
One means of implementation (e.g. as described in [1]) is to
assign each element an arbitrarily long random bit string. Each
chain at level l is then constructed from those elements whose bit
strings match in the first l bits, thus creating 2^l possible chains
at each level and ensuring that each chain splits into exactly two
chains at the next level. Although the resulting structure is not
perfectly balanced, following the analysis in [23] we can show that
the probability of it being significantly out of balance is extremely
small; in addition, since the structure is determined by the random
number stream, input data patterns cannot cause the tree to become
imbalanced.
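A sketch of this membership-vector construction follows; the bit-string length and fixed seed are arbitrary, and in a deployed skip graph the elements grouped into each chain would be spread across nodes.

```python
import random

def membership_vector(rng, bits=16):
    """Assign each element an (effectively unbounded) random bit string."""
    return tuple(rng.randint(0, 1) for _ in range(bits))

def chains_at_level(elements, level):
    """Group elements by the first `level` bits of their membership vectors.

    Every element is in the single level-0 chain; each chain at level l splits
    into at most two chains at level l+1, giving up to 2^l chains per level.
    """
    chains = {}
    for key, vec in elements:
        chains.setdefault(vec[:level], []).append(key)
    return {prefix: sorted(keys) for prefix, keys in chains.items()}

rng = random.Random(42)
elements = [(k, membership_vector(rng)) for k in [1, 7, 9, 13, 17, 21, 25, 31]]
for level in range(3):
    print("level", level, chains_at_level(elements, level))
```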
3.2 Interval Skip Graph
A skip graph is designed to store single-valued entries. In this
section, we introduce a novel data structure that extends skip graphs
to store intervals [lowi, highi] and allows efficient searches for all
intervals covering a value v, i.e. {i : lowi ≤ v ≤ highi}. Our data
structure can be extended to range searches in a straightforward
manner.
The interval skip graph is constructed by applying the method of
augmented search trees, as described by Cormen, Leiserson, and
Rivest [5] and applied to binary search trees to create an Interval
Tree. The method is based on the observation that a search structure
based on comparison of ordered keys, such as a binary tree, may
also be used to search on a secondary key which is non-decreasing
in the first key.
Given a set of intervals sorted by lower bound (lowi ≤ lowi+1), we define the secondary key as the cumulative maximum, maxi = max(high0, ..., highi). The set of intervals intersecting a value v may then be found by searching for the first interval (and thus the interval with least lowi) such that maxi ≥ v. We then
traverse intervals in increasing order of lower bound, until we find the
first interval with lowi > v, selecting those intervals which
intersect v.
Using this approach we augment the skip graph data structure, as
shown in Figure 3, so that each entry stores a range (lower bound
and upper bound) and a secondary key (cumulative maximum of
upper bound). To efficiently calculate the secondary key maxi for
an entry i, we take the greatest of highi and the maximum values
reported by each of i's left-hand neighbors.
To search for those intervals containing the value v, we first search for v on the secondary index, maxi, and locate the first entry with maxi ≥ v (by the definition of maxi, for this data element maxi = highi). If lowi > v, then this interval does not contain v, and no other intervals will either, so we are done. Otherwise we traverse the index in increasing order of lowi, returning matching intervals, until we reach an entry with lowi > v, and we are done.
Searches for all intervals which overlap a query range, or which
completely contain a query range, are straightforward extensions
of this mechanism.
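The search logic can be checked with a compact, centralized sketch of the augmented index: sort by lower bound, keep the running maximum of upper bounds, locate the first entry whose running maximum reaches the query value, and scan forward until the lower bound exceeds it. The binary search below merely stands in for the distributed O(log n) lookup.

```python
import bisect

def build_index(intervals):
    """Sort intervals by lower bound and compute the cumulative maximum of the
    upper bounds (the secondary key maxi described above)."""
    intervals = sorted(intervals)
    cum_max, running = [], float("-inf")
    for low, high in intervals:
        running = max(running, high)
        cum_max.append(running)
    return intervals, cum_max

def stabbing_query(index, v):
    """Return all intervals [low, high] with low <= v <= high."""
    intervals, cum_max = index
    # First entry whose cumulative maximum reaches v.
    i = bisect.bisect_left(cum_max, v)
    matches = []
    # Linear traversal in increasing order of lower bound until low > v.
    while i < len(intervals) and intervals[i][0] <= v:
        low, high = intervals[i]
        if high >= v:
            matches.append((low, high))
        i += 1
    return matches

index = build_index([(2, 5), (6, 14), (9, 12), (14, 16), (15, 23),
                     (18, 19), (20, 27), (21, 30)])
print(stabbing_query(index, 13))   # -> [(6, 14)]
```

In TSAR the same two steps run over the distributed structure: an O(log n) initial lookup followed by a constant cost per successive matching interval.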
Lookup Complexity: Lookup for the first interval that matches
a given value is performed in a manner very similar to an interval
tree. The complexity of search is O(log n). The number of
intervals that match a range query can vary depending on the amount of
overlap in the intervals being indexed, as well as the range specified
in the query.
Insert Complexity: In an interval tree or interval skip list, the
maximum value for an entry need only be calculated over the
subtree rooted at that entry, as this value will be examined only when
searching within the subtree rooted at that entry. For a simple
interval skip graph, however, this maximum value for an entry must be
computed over all entries preceding it in the index, as searches may
begin anywhere in the data structure, rather than at a distinguished
root element. It may easily be seen that in the worst case the
insertion of a single interval (one that covers all existing intervals in
the index) will trigger the update of all entries in the index, for a
worst-case insertion cost of O(n).
3.3 Sparse Interval Skip Graph
The final extensions we propose take advantage of the
difference between the number of items indexed in a skip graph and the
number of systems on which these items are distributed. The cost
in network messages of an operation may be reduced by
arranging the data structure so that most structure traversals occur locally
on a single node, and thus incur zero network cost. In addition,
since both congestion and failure occur on a per-node basis, we
may eliminate links without adverse consequences if those links
only contribute to load distribution and/or resiliency within a
single node. These two modifications allow us to achieve reductions
in asymptotic complexity of both update and search.
As noted in Section 3.2, insert and delete operations on an interval skip graph have a worst-case complexity of O(n), compared to
O(log n) for an interval tree. The main reason for the difference
is that skip graphs have a full search structure rooted at each
element, in order to distribute load and provide resilience to system
failures in a distributed setting. However, in order to provide load
distribution and failure resilience it is only necessary to provide a
full search structure for each system. If as in TSAR the number
of nodes (proxies) is much smaller than the number of data
elements (data summaries indexed), then this will result in significant
savings.
Implementation: To construct a sparse interval skip graph, we
ensure that there is a single distinguished element on each system,
the root element for that system; all searches will start at one of
these root elements. When adding a new element, rather than
splitting lists at increasing levels l until the element is in a list with no
others, we stop when we find that the element would be in a list
containing no root elements, thus ensuring that the element is reachable
from all root elements. An example of applying this optimization
may be seen in Figure 5. (In practice, rather than designating
existing data elements as roots, as shown, it may be preferable to insert
null values at startup.)
When using the technique of membership vectors as in [1], this
may be done by broadcasting the membership vectors of each root
element to all other systems, and stopping insertion of an element
at level l when it does not share an l-bit prefix with any of the Np
root elements. The expected number of roots sharing a log2 Np-bit prefix is 1, giving an expected height for each element of log2 Np + O(1). An alternate implementation, which distributes
information concerning root elements at pointer establishment time,
is omitted due to space constraints; this method eliminates the need
for additional messages.
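The stopping rule can be written as a small helper: a new element participates at levels 0 through the longest prefix its membership vector shares with any root's vector. The bit strings below are purely illustrative.

```python
def common_prefix_len(a, b):
    """Length of the longest common prefix of two bit strings."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def insertion_height(element_vector, root_vectors):
    """Highest level at which a newly inserted element still shares a prefix
    with some root element, so that it stays reachable from every root."""
    return max(common_prefix_len(element_vector, r) for r in root_vectors)

roots = ["0101", "1100", "1010"]        # one designated root element per proxy
print(insertion_height("1011", roots))  # shares a 3-bit prefix with "1010" -> top level 3
```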
Performance: In a (non-interval) sparse skip graph, since the
expected height of an inserted element is now log2 Np + O(1),
expected insertion complexity is O(log Np), rather than O(log n),
where Np is the number of root elements and thus the number of
separate systems in the network. (In the degenerate case of a
single system we have a skip list; with splitting probability 0.5 the
expected height of an individual element is 1.) Note that since
searches are started at root elements of expected height log2 n,
search complexity is not improved.
For an interval sparse skip graph, update performance is
improved considerably compared to the O(n) worst case for the
non-sparse case. In an augmented search structure such as this, an
element only stores information for nodes which may be reached from
that element-e.g. the subtree rooted at that element, in the case of
a tree. Thus, when updating the maximum value in an interval tree,
the update is only propagated towards the root. In a sparse interval
skip graph, updates to a node only propagate towards the Np root
elements, for a worst-case cost of Np log2 n.
Shortcut search: When beginning a search for a value v, rather
than beginning at the root on that proxy, we can find the element
that is closest to v (e.g. using a secondary local index), and then
begin the search at that element. The expected distance between
this element and the search terminus is log2 Np, and the search
will now take on average log2 Np + O(1) steps. To illustrate this
optimization, consider Figure 4: depending on the choice of search root, a
search for [21, 30] beginning at node 2 may take 3 network hops,
traversing to node 1, then back to node 2, and finally to node 3
where the destination is located, for a cost of 3 messages. The
shortcut search, however, locates the intermediate data element on
node 2, and then proceeds directly to node 3 for a cost of 1 message.
Performance: This technique may be applied to the primary key
search which is the first of two insertion steps in an interval skip
graph. By combining the short-cut optimization with sparse
interval skip graphs, the expected cost of insertion is now O(log Np),
independent of the size of the index or the degree of overlap of the
inserted intervals.
3.4 Alternative Data Structures
Thus far we have only compared the sparse interval skip graph
with similar structures from which it is derived. A comparison with
several other data structures which meet at least some of the
requirements for the TSAR index is shown in Table 2.
Table 2: Comparison of Distributed Index Structures

                                      Range Query  Interval        Re-balancing  Resilience  Small     Large
                                      Support      Representation                            Networks  Networks
  DHT, GHT                            no           no              no            yes         good      good
  Local index, flood query            yes          yes             no            yes         good      bad
  P-tree, RP* (distributed B-Trees)   yes          possible        yes           no          good      good
  DIMS                                yes          no              yes           yes         yes       yes
  Interval Skip Graph                 yes          yes             no            yes         good      good

[Figure 5: Sparse Interval Skip Graph. The interval entries [2,5] through [21,30] distributed across two nodes, with a designated root element on each node.]
The hash-based systems, DHT [25] and GHT [26], lack the
ability to perform range queries and are thus not well-suited to indexing
spatio-temporal data. Indexing locally using an appropriate
single-node structure and then flooding queries to all proxies is a
competitive alternative for small networks; for large networks the linear
dependence on the number of proxies becomes an issue. Two
distributed B-Trees were examined - P-Trees [6] and RP* [19]. Each
of these supports range queries, and in theory could be modified
to support indexing of intervals; however, they both require
complex re-balancing, and do not provide the resilience characteristics
of the other structures. DIMS [18] provides the ability to perform
spatio-temporal range queries, and has the necessary resilience to
failures; however, it cannot be used to index intervals, which are used by TSAR's data summarization algorithm.
4. Data Storage and Summarization
Having described the proxy-level index structure, we turn to the
mechanisms at the sensor tier. TSAR implements two key
mechanisms at the sensor tier. The first is a local archival store at each
sensor node that is optimized for resource-constrained devices. The
second is an adaptive summarization technique that enables each
sensor to adapt to changing data and query characteristics. The rest
of this section describes these mechanisms in detail.
4.1 Local Storage at Sensors
Interval skip graphs provide an efficient mechanism to lookup
sensor nodes containing data relevant to a query. These queries are
then routed to the sensors, which locate the relevant data records
in the local archive and respond back to the proxy. To enable such
lookups, each sensor node in TSAR maintains an archival store of
sensor data. While the implementation of such an archival store
is straightforward on resource-rich devices that can run a database,
sensors are often power and resource-constrained. Consequently,
the sensor archiving subsystem in TSAR is explicitly designed to
exploit characteristics of sensor data in a resource-constrained
setting.
[Figure 6: Single storage record. Each record consists of a timestamp, calibration parameters, data/event attributes, a size field, and an opaque data field.]
Sensor data has very distinct characteristics that inform our
design of the TSAR archival store. Sensors produce time-series data
streams, and therefore, temporal ordering of data is a natural and
simple way of storing archived sensor data. In addition to
simplicity, a temporally ordered store is often suitable for many sensor data
processing tasks since they involve time-series data processing.
Examples include signal processing operations such as FFT, wavelet
transforms, clustering, similarity matching, and target detection.
Consequently, the local archival store is a collection of records,
designed as an append-only circular buffer, where new records are
appended to the tail of the buffer. The format of each data record is
shown in Figure 6. Each record has a metadata field which includes
a timestamp, sensor settings, calibration parameters, etc. Raw
sensor data is stored in the data field of the record. The data field
is opaque and application-specific-the storage system does not
know or care about interpreting this field. A camera-based sensor,
for instance, may store binary images in this data field. In order
to support a variety of applications, TSAR supports variable-length
data fields; as a result, record sizes can vary from one record to
another.
Our archival store supports three operations on records: create,
read, and delete. Due to the append-only nature of the store,
creation of records is simple and efficient. The create operation simply
creates a new record and appends it to the tail of the store. Since
records are always written at the tail, the store need not maintain
a free space list. All fields of the record need to be specified at
creation time; thus, the size of the record is known a priori and the
store simply allocates the corresponding number of bytes at the
tail to store the record. Since writes are immutable, the size of a
record does not change once it is created.
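A minimal sketch of such an append-only store is shown below, using an in-memory byte buffer in place of real flash and omitting the wrap-around/overwrite path; the record encoding is arbitrary.

```python
class AppendOnlyStore:
    """Minimal append-only record store modeling the sensor's flash archive.

    Records are written at the tail and never modified; the (offset, length)
    pair returned by create() is the handle later reported to the proxy.
    Overwriting of old data (the delete path) is not modeled here.
    """

    def __init__(self, capacity):
        self.flash = bytearray(capacity)
        self.tail = 0

    def create(self, record_bytes):
        if self.tail + len(record_bytes) > len(self.flash):
            raise IOError("archive full (overwrite of old data not modeled)")
        start = self.tail
        self.flash[start:start + len(record_bytes)] = record_bytes
        self.tail += len(record_bytes)
        return start, len(record_bytes)      # handle: offset and length

    def read(self, offset, length):
        return bytes(self.flash[offset:offset + length])

store = AppendOnlyStore(capacity=4096)
handle = store.create(b"\x00\x10" + b"temperature=21.5")
print(handle, store.read(*handle))
```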
[Figure 7: Sensor Summarization. Each sensor constructs a data summary over a contiguous batch of records in its local flash archive, together with the time interval and start/end offsets of those records, and sends the summary to its proxy, which inserts it into the interval skip graph.]
The read operation enables stored records to be retrieved in
order to answer queries. In a traditional database system, efficient
lookups are enabled by maintaining a structure such as a B-tree that
indexes certain keys of the records. However, this can be quite
complex for a small sensor node with limited resources. Consequently,
TSAR sensors do not maintain any index for the data stored in their
archive. Instead, they rely on the proxies to maintain this metadata
index-sensors periodically send the proxy information
summarizing the data contained in a contiguous sequence of records, as well
as a handle indicating the location of these records in flash memory.
The mechanism works as follows: In addition to the summary
of sensor data, each node sends metadata to the proxy containing
the time interval corresponding to the summary, as well as the start
and end offsets of the flash memory location where the corresponding raw data is stored (as shown in Figure 7). Thus, random access is enabled at the granularity of a summary: the start offset of each chunk of records represented by a summary is known to the proxy.
Within this collection, records are accessed sequentially. When a
query matches a summary in the index, the sensor uses these offsets
to access the relevant records on its local flash by sequentially
reading data from the start address until the end address. Any
query-specific operation can then be performed on this data. Thus, no
index needs to be maintained at the sensor, in line with our goal
of simplifying sensor state management. The state of the archive
is captured in the metadata associated with the summaries, and is
stored and maintained at the proxy.
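The sensor-side handling of a matched summary then reduces to a sequential scan of the indicated flash region, as sketched below. The fixed-size (timestamp, value) record layout is an assumption made for illustration; TSAR records are variable-length with an opaque data field.

```python
import struct

RECORD = struct.Struct("<If")   # illustrative fixed-size record: (timestamp, value)

def scan_region(flash, start, end, low, high):
    """Sequentially read records in flash[start:end] and return those whose
    value falls in [low, high] (the sensor-side handling of a matched summary)."""
    matches = []
    for offset in range(start, end, RECORD.size):
        timestamp, value = RECORD.unpack_from(flash, offset)
        if low <= value <= high:
            matches.append((timestamp, value))
    return matches

# Write a few records into a toy "flash" region, then answer a value query on it.
flash = bytearray(64)
for i, (t, v) in enumerate([(100, 20.5), (101, 22.0), (102, 19.0)]):
    RECORD.pack_into(flash, i * RECORD.size, t, v)
print(scan_region(flash, 0, 3 * RECORD.size, 20.0, 23.0))
```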
While we anticipate local storage capacity to be large, eventually
there might be a need to overwrite older data, especially in high
data rate applications. This may be done via techniques such as
multi-resolution storage of data [9], or just simply by overwriting
older data. When older data is overwritten, a delete operation is
performed, where an index entry is deleted from the interval skip
graph at the proxy and the corresponding storage space in flash
memory at the sensor is freed.
4.2 Adaptive Summarization
The data summaries serve as glue between the storage at the
remote sensor and the index at the proxy. Each update from a sensor
to the proxy includes three pieces of information: the summary, a
time period corresponding to the summary, and the start and end
offsets for the flash archive. In general, the proxy can index the
time interval representing a summary or the value range reported
in the summary (or both). The former index enables quick lookups
on all records seen during a certain interval, while the latter index
enables quick lookups on all records matching a certain value.
As described in Section 2.4, there is a trade-off between the
energy used in sending summaries (and thus the frequency and
resolution of those summaries) and the cost of false hits during queries.
The coarser and less frequent the summary information, the less
energy required, while false query hits in turn waste energy on
requests for non-existent data.
TSAR employs an adaptive summarization technique that
balances the cost of sending updates against the cost of false positives.
The key intuition is that each sensor can independently identify the
fraction of false hits and true hits for queries that access its local
archive. If most queries result in true hits, then the sensor
determines that the summary can be coarsened further to reduce update
costs without adversely impacting the hit ratio. If many queries
result in false hits, then the sensor makes the granularity of each
summary finer to reduce the number and overhead of false hits.
The resolution of the summary depends on two parameters: the interval over which summaries of the data are constructed and transmitted to the proxy, and the size of the application-specific summary. Our focus in this paper is on the interval over
which the summary is constructed. Changing the size of the data
summary can be performed in an application-specific manner (e.g.
using wavelet compression techniques as in [9]) and is beyond the
scope of this paper. Currently, TSAR employs a simple
summarization scheme that computes the ratio of false and true hits and
decreases (increases) the interval between summaries whenever this
ratio increases (decreases) beyond a threshold.
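One possible realization of this feedback loop is sketched below; the thresholds, bounds, and multiplicative step are illustrative choices rather than TSAR's actual constants.

```python
class AdaptiveSummarizer:
    """Adjusts how many records each summary covers, based on query outcomes
    observed at the sensor (false hits vs. true hits)."""

    def __init__(self, records_per_summary=16, hi=1.25, lo=0.75):
        self.records_per_summary = records_per_summary
        self.hi, self.lo = hi, lo
        self.true_hits = 0
        self.false_hits = 0

    def record_query(self, matched_local_data):
        if matched_local_data:
            self.true_hits += 1
        else:
            self.false_hits += 1

    def adjust(self):
        ratio = self.false_hits / max(self.true_hits, 1)
        if ratio > self.hi:      # too many false hits: send finer summaries
            self.records_per_summary = max(1, self.records_per_summary // 2)
        elif ratio < self.lo:    # mostly true hits: coarsen to save update energy
            self.records_per_summary = min(1024, self.records_per_summary * 2)
        self.true_hits = self.false_hits = 0
        return self.records_per_summary

s = AdaptiveSummarizer()
for hit in [False, False, False, True]:   # a burst of false hits...
    s.record_query(hit)
print(s.adjust())                         # ...shrinks the summary granularity (16 -> 8)
```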
5. TSAR Implementation
We have implemented a prototype of TSAR on a multi-tier
sensor network testbed. Our prototype employs Crossbow Stargate
nodes to implement the proxy tier. Each Stargate node employs a
400MHz Intel XScale processor with 64MB RAM and runs the
Linux 2.4.19 kernel and EmStar release 2.1. The proxy nodes
are equipped with two wireless radios, a Cisco Aironet 340-based
802.11b radio and a hostmote bridge to the Mica2 sensor nodes
using the EmStar transceiver. The 802.11b wireless network is
used for inter-proxy communication within the proxy tier, while
the wireless bridge enables sensor-proxy communication. The
sensor tier consists of Crossbow Mica2s and Mica2dots, each
consisting of a 915MHz CC1000 radio, a BMAC protocol stack, a 4 Mb
on-board flash memory and an ATMega 128L processor. The
sensor nodes run TinyOS 1.1.8. In addition to the on-board flash, the
sensor nodes can be equipped with external MMC/SD flash cards
using a custom connector. The proxy nodes can be equipped with
external storage such as high-capacity compact flash (up to 4GB),
6GB micro-drives, or up to 60GB 1.8-inch mobile disk drives.
Since sensor nodes may be several hops away from the nearest
proxy, the sensor tier employs multi-hop routing to communicate
with the proxy tier. In addition, to reduce the power consumption
of the radio while still making the sensor node available for queries,
low power listening is enabled, in which the radio receiver is
periodically powered up for a short interval to sense the channel for
transmissions, and the packet preamble is extended to account for
the latency until the next interval when the receiving radio wakes
up. Our prototype employs the MultiHopLEPSM routing protocol
with the BMAC layer configured in the low-power mode with an 11% duty cycle (one of the default BMAC [22] parameters).
Our TSAR implementation on the Mote involves a data
gathering task that periodically obtains sensor readings and logs these
readings to flash memory. The flash memory is assumed to be a
circular append-only store and the format of the logged data is
depicted in Figure 6. The Mote sends a report to the proxy every N
readings, summarizing the observed data. The report contains: (i)
the address of the Mote, (ii) a handle that contains an offset and the
length of the region in flash memory containing data referred to by
the summary, (iii) an interval (t1, t2) over which this report is
generated, (iv) a tuple (low, high) representing the minimum and the
maximum values observed at the sensor in the interval, and (v) a
sequence number. The sensor updates are used to construct a sparse
interval skip graph that is distributed across proxies, via network
messages between proxies over the 802.11b wireless network.
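For illustration, such a report could be packed into a fixed wire format as sketched below; the field widths and ordering are assumptions, not the actual TinyOS message definition used in the prototype.

```python
import struct

# Illustrative packed layout for the per-summary report described above:
# addr, flash offset, region length, t1, t2, low, high, sequence number.
REPORT = struct.Struct("<HIHIIhhH")

def pack_report(addr, offset, length, t1, t2, low, high, seq):
    return REPORT.pack(addr, offset, length, t1, t2, low, high, seq)

def unpack_report(payload):
    fields = REPORT.unpack(payload)
    names = ("addr", "offset", "length", "t1", "t2", "low", "high", "seq")
    return dict(zip(names, fields))

payload = pack_report(addr=3, offset=4096, length=340, t1=1000, t2=1060,
                      low=-5, high=27, seq=42)
print(len(payload), unpack_report(payload))
```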
Our current implementation supports queries that request records
matching a time interval (t1, t2) or a value range (v1, v2). Spatial
constraints are specified using sensor IDs. Given a list of matching
intervals from the skip graph, TSAR supports two types of
messages to query the sensor: lookup and fetch. A lookup message
triggers a search within the corresponding region in flash memory
and returns the number of matching records in that memory region
(but does not retrieve data). In contrast, a fetch message not only
triggers a search but also returns all matching data records to the proxy. Lookup messages are useful for polling a sensor, for instance, to determine if a query matches too many records.
[Figure 8: Skip Graph Insert Performance. Number of messages vs. index size (entries) for insert (skip graph), insert (sparse skip graph), and initial lookup; (a) James Reserve data, (b) synthetic data.]
6. Experimental Evaluation
In this section, we evaluate the efficacy of TSAR using our
prototype and simulations. The testbed for our experiments consists
of four Stargate proxies and twelve Mica2 and Mica2dot sensors;
three sensors each are assigned to each proxy. Given the limited
size of our testbed, we employ simulations to evaluate the
behavior of TSAR in larger settings. Our simulation employs the EmTOS
emulator [10], which enables us to run the same code in simulation
and on the hardware platform.
Rather than using live data from a real sensor, to ensure
repeatable experiments, we seed each sensor node with a dataset
(i.e., a trace) that dictates the values reported by that node to the
proxy. One section of the flash memory on each sensor node is
programmed with data points from the trace; these observations
are then replayed during an experiment, logged to the local archive
(located in flash memory, as well), and reported to the proxy. The
first dataset used to evaluate TSAR is a temperature dataset from
James Reserve [27] that includes data from eleven temperature
sensor nodes over a period of 34 days. The second dataset is
synthetically generated; the trace for each sensor is generated using a
uniformly distributed random walk through the value space.
Our experimental evaluation has four parts. First, we run
EmTOS simulations to evaluate the lookup, update and delete overhead
for sparse interval skip graphs using the real and synthetic datasets.
Second, we provide summary results from micro-benchmarks of
the storage component of TSAR, which include empirical
characterization of the energy costs and latency of reads and writes for the
flash memory chip as well as the whole mote platform, and
comparisons to published numbers for other storage and
communication technologies. These micro-benchmarks form the basis for our
full-scale evaluation of TSAR on a testbed of four Stargate proxies
and twelve Motes. We measure the end-to-end query latency in our
multi-hop testbed as well as the query processing overhead at the
mote tier. Finally, we demonstrate the adaptive summarization
capability at each sensor node. The remainder of this section presents
our experimental results.
6.1 Sparse Interval Skip Graph Performance
This section evaluates the performance of sparse interval skip
graphs by quantifying insert, lookup and delete overheads.
We assume a proxy tier with 32 proxies and construct sparse
interval skip graphs of various sizes using our datasets. For each skip
graph, we evaluate the cost of inserting a new value into the index. Each entry was deleted after its insertion, enabling us to quantify the delete overhead as well. Figures 8(a) and (b) quantify the insert overhead for our two datasets: each insert entails an initial traversal that incurs log n messages, followed by neighbor pointer updates at increasing levels, incurring a cost of 4 log n messages. Our results demonstrate this behavior, and show as well that the performance of delete, which also involves an initial traversal followed by pointer updates at each level, incurs a similar cost.
[Figure 9: Skip Graph Lookup Performance. Number of messages vs. index size (entries) for the initial lookup and the subsequent traversal; (a) James Reserve data, (b) synthetic data.]
[Figure 10: Skip Graph Overheads. (a) Impact of the number of proxies on skip graph insert, sparse skip graph insert, and initial lookup; (b) impact of redundant summaries on insert and lookup message counts vs. index size.]
Next, we evaluate the lookup performance of the index
structure. Again, we construct skip graphs of various sizes using our
datasets and evaluate the cost of a lookup on the index structure.
Figures 9(a) and (b) depict our results. There are two components
for each lookup-the lookup of the first interval that matches the
query and, in the case of overlapping intervals, the subsequent
linear traversal to identify all matching intervals. The initial lookup
can be seen to take log n messages, as expected. The costs of
the subsequent linear traversal, however, are highly data dependent.
For instance, temperature values for the James Reserve data exhibit
significant spatial correlations, resulting in significant overlap
between different intervals and variable, high traversal cost (see
Figure 9(a)). The synthetic data, however, has less overlap and incurs
lower traversal overhead as shown in Figure 9(b).
Since the previous experiments assumed 32 proxies, we evaluate
the impact of the number of proxies on skip graph performance. We
vary the number of proxies from 10 to 48 and distribute a skip graph
with 4096 entries among these proxies. We construct regular
interval skip graphs as well as sparse interval skip graphs using these
entries and measure the overhead of inserts and lookups. Thus, the
experiment also seeks to demonstrate the benefits of sparse skip
graphs over regular skip graphs. Figure 10(a) depicts our results.
In regular skip graphs, the complexity of insert is O(log2 n) in the
expected case (and O(n) in the worst case) where n is the number
of elements. This complexity is unaffected by changing the
number of proxies, as indicated by the flat line in the figure. Sparse
skip graphs require fewer pointer updates; however, their overhead
is dependent on the number of proxies, and is O(log2 Np) in the
expected case, independent of n. This can be seen to result in
significant reduction in overhead when the number of proxies is small,
which decreases as the number of proxies increases.
Failure handling is an important issue in a multi-tier sensor
architecture since it relies on many components-proxies, sensor nodes
and routing nodes can fail, and wireless links can fade. Handling
of many of these failure modes is outside the scope of this
paper; however, we consider the case of resilience of skip graphs
to proxy failures. In this case, skip graph search (and subsequent
repair operations) can follow any one of the other links from a
root element. Since a sparse skip graph has search trees rooted
at each node, searching can then resume once the lookup request
has routed around the failure. Together, these two properties
ensure that even if a proxy fails, the remaining entries in the skip
graph will be reachable with high probability-only the entries on
the failed proxy and the corresponding data at the sensors become
inaccessible.
To ensure that all data on sensors remains accessible, even in the
event of failure of a proxy holding index entries for that data, we
incorporate redundant index entries. TSAR employs a simple
redundancy scheme where additional coarse-grain summaries are used
to protect regular summaries. Each sensor sends summary data
periodically to its local proxy, but less frequently sends a
lower-resolution summary to a backup proxy; the backup summary
represents all of the data represented by the finer-grained summaries,
but in a lossier fashion, thus resulting in higher read overhead (due
to false hits) if the backup summary is used. The cost of
implementing this in our system is low - Figure 10(b) shows the overhead of
such a redundancy scheme, where a single coarse summary is sent
to a backup for every two summaries sent to the primary proxy.
Since a redundant summary is sent for every two summaries, the
insert cost is 1.5 times the cost in the normal case. However, these
redundant entries result in only a negligible increase in lookup
overhead, due to the logarithmic dependence of lookup cost on the index
size, while providing full resilience to any single proxy failure.
6.2 Storage Microbenchmarks
Since sensors are resource-constrained, the energy consumption
and the latency at this tier are important measures for evaluating the
performance of a storage architecture. Before performing an
end-to-end evaluation of our system, we provide more detailed
information on the energy consumption of the storage component used
to implement the TSAR local archive, based on empirical
measurements. In addition we compare these figures to those for other
local storage technologies, as well as to the energy consumption of
wireless communication, using information from the literature. For
empirical measurements we measure energy usage for the storage
component itself (i.e. current drawn by the flash chip), as well as
for the entire Mica2 mote.
The power measurements in Table 3 were performed for the
AT45DB041 [15] flash memory on a Mica2 mote, which is an older
NOR flash device. The most promising technology for low-energy
storage on sensing devices is NAND flash, such as the Samsung
K9K4G08U0M device [16]; published power numbers for this
device are provided in the table. Published energy requirements for
wireless transmission using the Chipcon [4] CC2420 radio (used
in MicaZ and Telos motes) are provided for comparison, assuming
zero network and protocol overhead.

Table 3: Storage and Communication Energy Costs (*measured values)

                                          Energy                    Energy/byte
  Mote flash
    Read 256 byte page                    58µJ* / 136µJ* total      0.23µJ*
    Write 256 byte page                   926µJ* / 1042µJ* total    3.6µJ*
  NAND Flash
    Read 512 byte page                    2.7µJ                     1.8nJ
    Write 512 byte page                   7.8µJ                     15nJ
    Erase 16K byte sector                 60µJ                      3.7nJ
  CC2420 radio
    Transmit 8 bits (-25dBm)              0.8µJ                     0.8µJ
    Receive 8 bits                        1.9µJ                     1.9µJ
  Mote AVR processor
    In-memory search, 256 bytes           1.8µJ                     6.9nJ

[Figure 11: Query Processing Latency. (a) Multi-hop query performance: latency (ms) vs. number of hops; (b) query performance: latency (ms) vs. index size (entries), broken down into sensor communication, proxy communication, and sensor lookup/processing.]
Comparing the total energy
cost for writing flash (erase + write) to the total cost for
communication (transmit + receive), we find that the NAND flash is almost
150 times more efficient than radio communication, even assuming
perfect network protocols.
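The quoted ratio follows directly from the per-byte figures in Table 3:

```python
# Per-byte energy figures from Table 3 (NAND flash and CC2420 radio), in joules.
nand_write = 15e-9
nand_erase = 3.7e-9
radio_tx = 0.8e-6
radio_rx = 1.9e-6

flash_per_byte = nand_write + nand_erase   # erase + write
radio_per_byte = radio_tx + radio_rx       # transmit + receive
print(radio_per_byte / flash_per_byte)     # ~144, i.e. "almost 150 times"
```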
6.3 Prototype Evaluation
This section reports results from an end-to-end evaluation of the
TSAR prototype involving both tiers. In our setup, there are four
proxies connected via 802.11 links and three sensors per proxy. The
multi-hop topology was preconfigured such that sensor nodes were
connected in a line to each proxy, forming a minimal tree of depth
3. Due to resource constraints we were unable to perform experiments with dozens of sensor nodes; however, this topology ensured that the network diameter was as large as for a typical network of significantly larger size.
[Figure 12: Query Latency Components. (a) Data query and fetch time: retrieval latency (ms) vs. archived data retrieved (bytes); (b) sensor query processing delay: latency (ms) vs. number of 34-byte records searched.]
Our evaluation metric is the end-to-end latency of query
processing. A query posed on TSAR first incurs the latency of a sparse
skip graph lookup, followed by routing to the appropriate sensor
node(s). The sensor node reads the required page(s) from its local
archive, processes the query on the page that is read, and transmits
the response to the proxy, which then forwards it to the user. We
first measure query latency for different sensors in our multi-hop
topology. Depending on which of the sensors is queried, the total
latency increases almost linearly from about 400ms to 1 second, as
the number of hops increases from 1 to 3 (see Figure 11(a)).
Figure 11(b) provides a breakdown of the various components
of the end-to-end latency. The dominant component of the total
latency is the communication over one or more hops. The
typical time to communicate over one hop is approximately 300ms.
This large latency is primarily due to the use of a duty-cycled MAC
layer; the latency will be larger if the duty cycle is reduced (e.g.
the 2% setting as opposed to the 11.5% setting used in this
experiment), and will conversely decrease if the duty cycle is increased.
The figure also shows the latency for varying index sizes; as
expected, the latency of inter-proxy communication and skip graph
lookups increases logarithmically with index size. Not surprisingly,
the overhead seen at the sensor is independent of the index size.
The latency also depends on the number of packets transmitted
in response to a query-the larger the amount of data retrieved by a
query, the greater the latency. This result is shown in Figure 12(a).
The step function is due to packetization in TinyOS; TinyOS sends
one packet so long as the payload is smaller than 30 bytes and splits
the response into multiple packets for larger payloads. As the data
retrieved by a query is increased, the latency increases in steps,
where each step denotes the overhead of an additional packet.
Finally, Figure 12(b) shows the impact of searching and
processing flash memory regions of increasing sizes on a sensor. Each
summary represents a collection of records in flash memory, and
all of these records need to be retrieved and processed if that
summary matches a query. The coarser the summary, the larger the
memory region that needs to be accessed. For the search sizes
examined, amortization of overhead when searching multiple flash
pages and archival records, as well as within the flash chip and its
associated driver, results in the appearance of sub-linear increase
in latency with search size. In addition, the operation can be seen
to have very low latency, in part due to the simplicity of our query
processing, requiring only a compare operation with each stored
element. More complex operations, however, will of course incur
greater latency.
6.4 Adaptive Summarization
When data is summarized by the sensor before being reported
to the proxy, information is lost. With the interval summarization
method we are using, this information loss will never cause the
proxy to believe that a sensor node does not hold a value which it in
fact does, as all archived values will be contained within the interval
reported. However, it does cause the proxy to believe that the sensor
may hold values which it does not, and forward query messages to
the sensor for these values. These false positives constitute the cost
of the summarization mechanism, and need to be balanced against
the savings achieved by reducing the number of reports. The goal
of adaptive summarization is to dynamically vary the summary size
so that these two costs are balanced.
[Figure 13: Impact of Summarization Granularity. (a) Fraction of true hits vs. summary size (number of records); (b) adaptation of the summarization size (number of records) over time for query rates 0.2, 0.1, and 0.03.]
Figure 13(a) demonstrates the impact of summary granularity
on false hits. As the number of records included in a summary
is increased, the fraction of queries forwarded to the sensor which
match data held on that sensor (true positives) decreases. Next,
in Figure 13(b) we run an EmTOS simulation with our adaptive summarization algorithm enabled. The adaptive algorithm increases the summary granularity (defined as the number of records per summary) when Cost(updates)/Cost(false hits) > 1 + ε, and reduces it if Cost(updates)/Cost(false hits) < 1 − ε, where ε is a small constant. To
demonstrate the adaptive nature of our technique, we plot a time series
of the summarization granularity. We begin with a query rate of 1
query per 5 samples, decrease it to 1 every 30 samples, and then
increase it again to 1 query every 10 samples. As shown in
Figure 13(b), the adaptive technique adjusts accordingly by sending
more fine-grain summaries at higher query rates (in response to the
higher false hit rate), and fewer, coarse-grain summaries at lower
query rates.
7. Related Work
In this section, we review prior work on storage and indexing
techniques for sensor networks. While our work addresses both
problems jointly, much prior work has considered them in isolation.
The problem of archival storage of sensor data has received
limited attention in sensor network literature. ELF [7] is a
log-structured file system for local storage on flash memory that
provides load leveling and Matchbox is a simple file system that is
packaged with the TinyOS distribution [14]. Both these systems
focus on local storage, whereas our focus is both on storage at the
remote sensors as well as providing a unified view of distributed
data across all such local archives. Multi-resolution storage [9] is
intended for in-network storage and search in systems where there
is significant data in comparison to storage resources. In contrast,
TSAR addresses the problem of archival storage in two-tier systems
where sufficient resources can be placed at the edge sensors. The
RISE platform [21] being developed as part of the NODE project
at UCR addresses the issues of hardware platform support for large
amounts of storage in remote sensor nodes, but not the indexing
and querying of this data.
In order to efficiently access a distributed sensor store, an index
needs to be constructed of the data. Early work on sensor networks
such as Directed Diffusion [17] assumes a system where all useful sensor data is stored locally at each sensor, and spatially scoped
queries are routed using geographic co-ordinates to locations where
the data is stored. Sources publish the events that they detect, and
sinks with interest in specific events can subscribe to these events.
The Directed Diffusion substrate routes queries to specific locations
if the query has geographic information embedded in it (e.g.: find
temperature in the south-west quadrant), and if not, the query is
flooded throughout the network.
These schemes had the drawback that for queries that are not
geographically scoped, search cost (O(n) for a network of n nodes)
may be prohibitive in large networks with frequent queries.
Local storage with in-network indexing approaches address this
issue by constructing indexes using frameworks such as Geographic
Hash Tables [24] and Quad Trees [9]. Recent research has seen
a growing body of work on data indexing schemes for sensor
networks [26, 11, 18]. One such scheme is DCS [26], which provides
a hash function for mapping from event name to location. DCS
constructs a distributed structure that groups events together
spatially by their named type. Distributed Index of Features in
Sensornets (DIFS [11]) and Multi-dimensional Range Queries in Sensor
Networks (DIM [18]) extend the data-centric storage approach to
provide spatially distributed hierarchies of indexes to data.
While these approaches advocate in-network indexing for sensor
networks, we believe that indexing is a task that is far too
complicated to be performed at the remote sensor nodes since it involves
maintaining significant state and large tables. TSAR provides a
better match between resource requirements of storage and indexing
and the availability of resources at different tiers. Thus complex
operations such as indexing and managing metadata are performed
at the proxies, while storage at the sensor remains simple.
In addition to storage and indexing techniques specific to sensor
networks, many distributed, peer-to-peer and spatio-temporal index
structures are relevant to our work. DHTs [25] can be used for
indexing events based on their type, quad-tree variants such as
Rtrees [12] can be used for optimizing spatial searches, and K-D
trees [2] can be used for multi-attribute search. While this paper
focuses on building an ordered index structure for range queries, we
will explore the use of other index structures for alternate queries
over sensor data.
8. Conclusions
In this paper, we argued that existing sensor storage systems
are designed primarily for flat hierarchies of homogeneous sensor
nodes and do not fully exploit the multi-tier nature of emerging
sensor networks. We presented the design of TSAR, a fundamentally
different storage architecture that envisions separation of data from
metadata by employing local storage at the sensors and distributed
indexing at the proxies. At the proxy tier, TSAR employs a novel
multi-resolution ordered distributed index structure, the Sparse
Interval Skip Graph, for efficiently supporting spatio-temporal and
range queries. At the sensor tier, TSAR supports energy-aware
adaptive summarization that can trade-off the energy cost of
transmitting metadata to the proxies against the overhead of false hits
resulting from querying a coarser resolution index structure. We
implemented TSAR in a two-tier sensor testbed comprising
Stargate-based proxies and Mote-based sensors. Our experimental
evaluation of TSAR demonstrated the benefits and feasibility of
employing our energy-efficient low-latency distributed storage architecture
in multi-tier sensor networks.
9. REFERENCES
[1] James Aspnes and Gauri Shah. Skip graphs. In Fourteenth Annual ACM-SIAM
Symposium on Discrete Algorithms, pages 384-393, Baltimore, MD, USA,
12-14 January 2003.
[2] Jon Louis Bentley. Multidimensional binary search trees used for associative
searching. Commun. ACM, 18(9):509-517, 1975.
[3] Philippe Bonnet, J. E. Gehrke, and Praveen Seshadri. Towards sensor database
systems. In Proceedings of the Second International Conference on Mobile
Data Management., January 2001.
[4] Chipcon. CC2420 2.4 GHz IEEE 802.15.4 / ZigBee-ready RF transceiver, 2004.
[5] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein.
Introduction to Algorithms. The MIT Press and McGraw-Hill, second edition, 2001.
[6] Adina Crainiceanu, Prakash Linga, Johannes Gehrke, and Jayavel
Shanmugasundaram. Querying Peer-to-Peer Networks Using P-Trees.
Technical Report TR2004-1926, Cornell University, 2004.
[7] Hui Dai, Michael Neufeld, and Richard Han. ELF: an efficient log-structured
flash file system for micro sensor nodes. In SenSys "04: Proceedings of the 2nd
international conference on Embedded networked sensor systems, pages
176-187, New York, NY, USA, 2004. ACM Press.
[8] Peter Desnoyers, Deepak Ganesan, Huan Li, and Prashant Shenoy. PRESTO: A
predictive storage architecture for sensor networks. In Tenth Workshop on Hot
Topics in Operating Systems (HotOS X)., June 2005.
[9] Deepak Ganesan, Ben Greenstein, Denis Perelyubskiy, Deborah Estrin, and
John Heidemann. An evaluation of multi-resolution storage in sensor networks.
In Proceedings of the First ACM Conference on Embedded Networked Sensor
Systems (SenSys)., 2003.
[10] L. Girod, T. Stathopoulos, N. Ramanathan, J. Elson, D. Estrin, E. Osterweil,
and T. Schoellhammer. A system for simulation, emulation, and deployment of
heterogeneous sensor networks. In Proceedings of the Second ACM Conference
on Embedded Networked Sensor Systems, Baltimore, MD, 2004.
[11] B. Greenstein, D. Estrin, R. Govindan, S. Ratnasamy, and S. Shenker. DIFS: A
distributed index for features in sensor networks. Elsevier Journal of ad-hoc
Networks, 2003.
[12] Antonin Guttman. R-trees: a dynamic index structure for spatial searching. In
SIGMOD "84: Proceedings of the 1984 ACM SIGMOD international
conference on Management of data, pages 47-57, New York, NY, USA, 1984.
ACM Press.
[13] Nicholas Harvey, Michael B. Jones, Stefan Saroiu, Marvin Theimer, and Alec
Wolman. Skipnet: A scalable overlay network with practical locality properties.
In Proceedings of the 4th USENIX Symposium on Internet Technologies and
Systems (USITS "03), Seattle, WA, March 2003.
[14] Jason Hill, Robert Szewczyk, Alec Woo, Seth Hollar, David Culler, and
Kristofer Pister. System architecture directions for networked sensors. In
Proceedings of the Ninth International Conference on Architectural Support for
Programming Languages and Operating Systems (ASPLOS-IX), pages 93-104,
Cambridge, MA, USA, November 2000. ACM.
[15] Atmel Inc. 4-megabit 2.5-volt or 2.7-volt DataFlash AT45DB041B, 2005.
[16] Samsung Semiconductor Inc. K9W8G08U1M, K9K4G08U0M: 512M x 8 bit /
1G x 8 bit NAND flash memory, 2003.
[17] Chalermek Intanagonwiwat, Ramesh Govindan, and Deborah Estrin. Directed
diffusion: A scalable and robust communication paradigm for sensor networks.
In Proceedings of the Sixth Annual International Conference on Mobile
Computing and Networking, pages 56-67, Boston, MA, August 2000. ACM
Press.
[18] Xin Li, Young-Jin Kim, Ramesh Govindan, and Wei Hong. Multi-dimensional
range queries in sensor networks. In Proceedings of the First ACM Conference
on Embedded Networked Sensor Systems (SenSys)., 2003. to appear.
[19] Witold Litwin, Marie-Anne Neimat, and Donovan A. Schneider. RP*: A family
of order preserving scalable distributed data structures. In VLDB "94:
Proceedings of the 20th International Conference on Very Large Data Bases,
pages 342-353, San Francisco, CA, USA, 1994.
[20] Samuel Madden, Michael Franklin, Joseph Hellerstein, and Wei Hong. TAG: a
tiny aggregation service for ad-hoc sensor networks. In OSDI, Boston, MA,
2002.
[21] A. Mitra, A. Banerjee, W. Najjar, D. Zeinalipour-Yazti, D. Gunopulos, and
V. Kalogeraki. High performance, low power sensor platforms featuring
gigabyte scale storage. In SenMetrics 2005: Third International Workshop on
Measurement, Modeling, and Performance Analysis of Wireless Sensor
Networks, July 2005.
[22] J. Polastre, J. Hill, and D. Culler. Versatile low power media access for wireless
sensor networks. In Proceedings of the Second ACM Conference on Embedded
Networked Sensor Systems (SenSys), November 2004.
[23] William Pugh. Skip lists: a probabilistic alternative to balanced trees. Commun.
ACM, 33(6):668-676, 1990.
[24] S. Ratnasamy, D. Estrin, R. Govindan, B. Karp, L. Yin, S. Shenker, and F. Yu.
Data-centric storage in sensornets. In ACM First Workshop on Hot Topics in
Networks, 2001.
[25] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker. A scalable
content addressable network. In Proceedings of the 2001 ACM SIGCOMM
Conference, 2001.
[26] S. Ratnasamy, B. Karp, L. Yin, F. Yu, D. Estrin, R. Govindan, and S. Shenker.
GHT - a geographic hash-table for data-centric storage. In First ACM
International Workshop on Wireless Sensor Networks and their Applications,
2002.
[27] N. Xu, E. Osterweil, M. Hamilton, and D. Estrin. James Reserve Data. http://www.lecs.cs.ucla.edu/~nxu/ess/.
train_C-48 | Multi-dimensional Range Queries in Sensor Networks∗ | In many sensor networks, data or events are named by attributes. Many of these attributes have scalar values, so one natural way to query events of interest is to use a multidimensional range query. An example is: List all events whose temperature lies between 50◦ and 60◦ , and whose light levels lie between 10 and 15. Such queries are useful for correlating events occurring within the network. In this paper, we describe the design of a distributed index that scalably supports multi-dimensional range queries. Our distributed index for multi-dimensional data (or DIM) uses a novel geographic embedding of a classical index data structure, and is built upon the GPSR geographic routing algorithm. Our analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O( √ N)). In detailed simulations, we show that in practice, the insertion and query costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized network. Finally, experiments on a small scale testbed validate the feasibility of DIMs. | 1. INTRODUCTION
In wireless sensor networks, data or events will be named
by attributes [15] or represented as virtual relations in a
distributed database [18, 3]. Many of these attributes will
have scalar values: e.g., temperature and light levels, soil
moisture conditions, etc. In these systems, we argue, one
natural way to query for events of interest will be to use
multi-dimensional range queries on these attributes. For
example, scientists analyzing the growth of marine
microorganisms might be interested in events that occurred within
certain temperature and light conditions: List all events
that have temperatures between 50◦
F and 60◦
F, and light
levels between 10 and 20.
Such range queries can be used in two distinct ways. They
can help users efficiently drill-down their search for events of
interest. The query described above illustrates this, where
the scientist is presumably interested in discovering, and
perhaps mapping the combined effect of temperature and
light on the growth of marine micro-organisms. More
importantly, they can be used by application software running
within a sensor network for correlating events and triggering
actions. For example, if in a habitat monitoring application,
a bird alighting on its nest is indicated by a certain range
of thermopile sensor readings, and a certain range of
microphone readings, a multi-dimensional range query on those
attributes enables higher confidence detection of the arrival
of a flock of birds, and can trigger a system of cameras.
In traditional database systems, such range queries are
supported using pre-computed indices. Indices trade-off some
initial pre-computation cost to achieve a significantly more
efficient querying capability. For sensor networks, we
assert that a centralized index for multi-dimensional range
queries may not be feasible for energy-efficiency reasons (as
well as the fact that the access bandwidth to this central
index will be limited, particularly for queries emanating
from within the network). Rather, we believe, there will
be situations when it is more appropriate to build an
innetwork distributed data structure for efficiently answering
multi-dimensional range queries.
In this paper, we present just such a data structure, which
we call a DIM (Distributed Index for Multi-dimensional data).
DIMs are inspired by classical database
indices, and are essentially embeddings of such indices within
the sensor network. DIMs leverage two key ideas: in-network
data centric storage, and a novel locality-preserving
geographic hash (Section 3). DIMs trace their lineage to
datacentric storage systems [23]. The underlying mechanism in
these systems allows nodes to consistently hash an event to
some location within the network, which allows efficient
retrieval of events. Building upon this, DIMs use a technique
whereby events whose attribute values are close are likely
to be stored at the same or nearby nodes. DIMs then use
an underlying geographic routing algorithm (GPSR [16]) to
route events and queries to their corresponding nodes in an
entirely distributed fashion.
We discuss the design of a DIM, presenting algorithms for
event insertion and querying, for maintaining a DIM in the
event of node failure, and for making DIMs robust to data or
packet loss (Section 3). We then extensively evaluate DIMs
using analysis (Section 4), simulation (Section 5), and actual
implementation (Section 6). Our analysis reveals that,
under reasonable assumptions about query distributions, DIMs
scale quite well with network size (both insertion and query
costs scale as O(√N)). In detailed simulations, we show
that in practice, the event insertion and querying costs of
other alternatives are sometimes an order of magnitude more than
the costs of DIMs, even for moderately sized networks.
Experiments on a small scale testbed validate the feasibility of
DIMs (Section 6). Much work remains, including efficient
support for skewed data distributions, existential queries,
and node heterogeneity.
We believe that DIMs will be an essential, but perhaps
not necessarily the only, distributed data structure
supporting efficient queries in sensor networks. DIMs will be part
of a suite of such systems that enable feature extraction [7],
simple range querying [10], exact-match queries [23], or
continuous queries [15, 18]. All such systems will likely be
integrated to a sensor network database system such as
TinyDB [17]. Application designers could then choose the
appropriate method of information access. For instance,
a fire tracking application would use DIM to detect the
hotspots, and would then use mechanisms that enable
continuous queries [15, 18] to track the spatio-temporal progress
of the hotspots. Finally, we note that DIMs are applicable
not just to sensor networks, but to other deeply distributed
systems (embedded networks for home and factory
automation) as well.
2. RELATED WORK
The basic problem that this paper addresses -
multidimensional range queries - is typically solved in database
systems using indexing techniques. The database
community has focused mostly on centralized indices, but distributed
indexing has received some attention in the literature.
Indexing techniques essentially trade-off some data
insertion cost to enable efficient querying. Indexing has, for long,
been a classical research problem in the database
community [5, 2]. Our work draws its inspiration from the class
of multi-key constant branching index structures,
exemplified by k-d trees [2], where k represents the dimensionality
of the data space. Our approach essentially represents a
geographic embedding of such structures in a sensor field.
There is one important difference. The classical indexing
structures are data-dependent (as are some indexing schemes
that use locality-preserving hashes developed in the
theory literature [14, 8, 13]): the index structure is decided
not only by the data, but also by the order in which data
is inserted. Our current design is not data-dependent.
Finally, tangentially related to our work is the class of spatial
indexing systems [21, 6, 11].
While there has been some work on distributed indexing,
the problem has not been extensively explored. There
exist distributed indices of a restricted kind-those that allow
exact match or partial prefix match queries. Examples of
such systems, of course, are the Internet Domain Name
System, and the class of distributed hash table (DHT) systems
exemplified by Freenet[4], Chord[24], and CAN[19]. Our
work is superficially similar to CAN in that both construct
a zone-based overlay atop the underlying physical
network. The underlying details make the two systems very
different: CAN's overlay is purely logical, while our overlay
is consistent with the underlying physical topology. More
recent work in the Internet context has addressed support
for range queries in DHT systems [1, 12], but it is unclear if
these directly translate to the sensor network context.
Several research efforts have expressed the vision of a
database interface to sensor networks [9, 3, 18], and there
are examples of systems that contribute to this vision [18,
3, 17]. Our work is similar in spirit to this body of
literature. In fact, DIMs could become an important component
of a sensor network database system such as TinyDB [17].
Our work departs from prior work in this area in two
significant respects. Unlike these approaches, in our work the data
generated at a node are hashed (in general) to different
locations. This hashing is the key to scaling multi-dimensional
range searches. In all the other systems described above,
queries are flooded throughout the network, and can
dominate the total cost of the system. Our work avoids query
flooding by an appropriate choice of hashing. Madden et
al. [17] also describe a distributed index, called Semantic
Routing Trees (SRT). This index is used to direct queries
to nodes that have detected relevant data. Our work
differs from SRT in three key aspects. First, SRT is built on
single attributes while DIM supports multiple attributes.
Second, SRT constructs a routing tree based on historical
sensor readings, and therefore works well only for
slowly-changing sensor values. Finally, in SRT queries are issued
from a fixed node while in DIM queries can be issued from
any node.
A similar differentiation applies with respect to work on
data-centric routing in sensor networks [15, 25], where data
generated at a node is assumed to be stored at the node,
and queries are either flooded throughout the network [15],
or each source sets up a network-wide overlay announcing its
presence so that mobile sinks can rendezvous with sources
at the nearest node on the overlay [25]. These approaches
work well for relatively long-lived queries.
Finally, our work is most closely related to data-centric
storage [23] systems, which include geographic hash-tables
(GHTs) [20], DIMENSIONS [7], and DIFS [10]. In a GHT,
data is hashed by name to a location within the network,
enabling highly efficient rendezvous. GHTs are built upon the
GPSR [16] protocol and leverage some interesting properties
of that protocol, such as the ability to route to a node nearest
to a given location. We also leverage properties in GPSR (as
we describe later), but we use a locality-preserving hash to
store data, enabling efficient multi-dimensional range queries.
DIMENSIONS and DIFS can be thought of as using the
same set of primitives as GHT (storage using consistent
hashing), but for different ends: DIMENSIONS allows
drill64
down search for features within a sensor network, while
DIFS allows range queries on a single key in addition to
other operations.
3. THE DESIGN OF DIMS
Most sensor networks are deployed to collect data from
the environment. In these networks, nodes (either
individually or collaboratively) will generate events. An event
can generally be described as a tuple of attribute values,
⟨A1, A2, · · · , Ak⟩, where each attribute Ai represents a
sensor reading, or some value corresponding to a detection
(e.g., a confidence level). The focus of this paper is the
design of systems to efficiently answer multi-dimensional range
queries of the form ⟨x1 − y1, x2 − y2, · · · , xk − yk⟩. Such a
query returns all events whose attribute values fall into the
corresponding ranges. Notice that point queries, i.e., queries
that ask for events with specified values for each attribute,
are a special case of range queries.
As we have discussed in Section 1, range queries can
enable efficient correlation and triggering within the network.
It is possible to implement range queries by flooding a query
within the network. However, as we show in later sections,
this alternative can be inefficient, particularly as the system
scales, and if nodes within the network issue such queries
relatively frequently. The other alternative, sending all events
to an external storage node results in the access link being
a bottleneck, especially if nodes within the network issue
queries. Shenker et al. [23] also make similar arguments with
respect to data-centric storage schemes in general; DIMs are
an instance of such schemes.
The system we present in this paper, the DIM, relies upon
two foundations: a locality-preserving geographic hash, and
an underlying geographic routing scheme.
The key to resolving range queries efficiently is data
locality: i.e., events with comparable attribute values are stored
nearby. The basic insight underlying DIM is that data
locality can be obtained by a locality-preserving geographic
hash function. Our geographic hash function finds a
localitypreserving mapping from the multi-dimensional space
(described by the set of attributes) to a 2-d geographic space;
this mapping is inspired by k-d trees [2] and is described
later. Moreover, each node in the network self-organizes
to claim part of the attribute space for itself (we say that
each node owns a zone), so events falling into that space are
routed to and stored at that node.
Having established the mapping, and the zone structure,
DIMs use a geographic routing algorithm previously
developed in the literature to route events to their corresponding
nodes, or to resolve queries. This algorithm, GPSR [16],
essentially enables the delivery of a packet to a node at a
specified location. The routing mechanism is simple: when
a node receives a packet destined to a node at location X, it
forwards the packet to the neighbor closest to X. In GPSR,
this is called greedy-mode forwarding. When no such
neighbor exists (as when there exists a void in the network), the
node starts the packet on a perimeter mode traversal,
using the well known right-hand rule to circumnavigate voids.
GPSR includes efficient techniques for perimeter traversal
that are based on graph planarization algorithms amenable
to distributed implementation.
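To fix intuition, here is a minimal sketch of the greedy-mode decision; the data structures and function name are our own illustrative assumptions, not GPSR's actual interface:

```python
import math

def greedy_next_hop(node_pos, neighbors, dest):
    """One greedy-mode forwarding step (sketch): pick the neighbor strictly closer
    to the destination than we are. neighbors maps id -> (x, y). Returns None at a
    local minimum, where the real protocol would switch to perimeter mode."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    best, best_d = None, dist(node_pos, dest)
    for nid, pos in neighbors.items():
        d = dist(pos, dest)
        if d < best_d:
            best, best_d = nid, d
    return best

assert greedy_next_hop((0, 0), {'a': (1, 0), 'b': (0, -1)}, (3, 0)) == 'a'
```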
For all of this to work, DIMs make two assumptions that
are consistent with the literature [23]. First, all nodes know
the approximate geographic boundaries of the network. These
boundaries may either be configured in nodes at the time of
deployment, or may be discovered using a simple protocol.
Second, each node knows its geographic location. Node
locations can be automatically determined by a localization
system or by other means.
Although the basic idea of DIMs may seem
straightforward, it is challenging to design a completely distributed
data structure that must be robust to packet losses and
node failures, yet must support efficient query distribution
and deal with communication voids and obstacles. We now
describe the complete design of DIMs.
3.1 Zones
The key idea behind DIMs, as we have discussed, is a
geographic locality-preserving hash that maps a multi-attribute
event to a geographic zone. Intuitively, a zone is a
subdivision of the geographic extent of a sensor field.
A zone is defined by the following constructive procedure.
Consider a rectangle R on the x-y plane. Intuitively, R is
the bounding rectangle that contains all sensors within the
network. We call a sub-rectangle Z of R a zone, if Z is
obtained by dividing R k times, k ≥ 0, using a procedure
that satisfies the following property:
After the i-th division, 0 ≤ i ≤ k, R is
partitioned into 2^i equal-sized rectangles. If i is an
odd (even) number, the i-th division is parallel
to the y-axis (x-axis).
That is, the bounding rectangle R is first sub-divided into
two zones at level 0 by a vertical line that splits R into two
equal pieces, each of these sub-zones can be split into two
zones at level 1 by a horizontal line, and so on. We call the
non-negative integer k the level of zone Z, i.e. level(Z) = k.
A zone can be identified either by a zone code code(Z)
or by an address addr(Z). The code code(Z) is a 0-1 bit
string of length level(Z), and is defined as follows. If Z lies
in the left half of R, the first (from the left) bit of code(Z)
is 0, else 1. If Z lies in the bottom half of R, the second
bit of code(Z) is 0, else 1. The remaining bits of code(Z)
are then recursively defined on each of the four quadrants of
R. This definition of the zone code matches the definition
of zones given above, encoding divisions of the sensor field
geography by bit strings. Thus, in Figure 2, the zone in the
top-right corner of the rectangle R has a zone code of 1111.
Note that the zone codes collectively define a zone tree such
that individual zones are at the leaves of this tree.
The address of a zone Z, addr(Z), is defined to be the
centroid of the rectangle defined by Z. The two representations
of a zone (its code and its address) can each be computed
from the other, assuming the level of the zone is known.
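To make this code/address duality concrete, here is a small sketch; the function names, the rectangle representation, and the use of the unit square are illustrative assumptions, not part of the paper's implementation:

```python
def zone_code(x, y, rect, level):
    """Zone code of the level-`level` zone containing point (x, y).
    rect = (x0, y0, x1, y1) is the bounding rectangle R of the sensor field.
    The 0-based loop index i alternates: even i splits left/right (left half -> 0),
    odd i splits bottom/top (bottom half -> 0), matching the definition above."""
    x0, y0, x1, y1 = rect
    bits = []
    for i in range(level):
        if i % 2 == 0:                       # vertical division
            mid = (x0 + x1) / 2.0
            bits.append('1' if x >= mid else '0')
            x0, x1 = (mid, x1) if x >= mid else (x0, mid)
        else:                                # horizontal division
            mid = (y0 + y1) / 2.0
            bits.append('1' if y >= mid else '0')
            y0, y1 = (mid, y1) if y >= mid else (y0, mid)
    return ''.join(bits)

def zone_addr(code, rect):
    """addr(Z): centroid of the rectangle described by zone code `code`."""
    x0, y0, x1, y1 = rect
    for i, bit in enumerate(code):
        if i % 2 == 0:
            mid = (x0 + x1) / 2.0
            x0, x1 = (mid, x1) if bit == '1' else (x0, mid)
        else:
            mid = (y0 + y1) / 2.0
            y0, y1 = (mid, y1) if bit == '1' else (y0, mid)
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

# A point near the top-right corner of the unit square lies in zone 1111 at level 4.
assert zone_code(0.9, 0.9, (0, 0, 1, 1), 4) == "1111"
assert zone_addr("1111", (0, 0, 1, 1)) == (0.875, 0.875)
```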
Two zones are called sibling zones if their zone codes are
the same except for the last bit. For example, if code(Z1) =
01101 and code(Z2) = 01100, then Z1 and Z2 are sibling
zones. The sibling subtree of a zone is the subtree rooted
at the left or right sibling of the zone in the zone tree. We
uniquely define a backup zone for each zone as follows: if
the sibling subtree of the zone is on the left, the backup
zone is the right-most zone in the sibling subtree;
otherwise, the backup zone is the left-most zone in the sibling
subtree. For a zone Z, let p be the first level(Z) − 1 digits
of code(Z). Let backup(Z) be the backup zone of zone Z.
If code(Z) = p1, code(backup(Z)) = p01∗ with the most
number of trailing 1's (∗ means 0 or more occurrences). If
code(Z) = p0, code(backup(Z)) = p10∗ with the most
number of trailing 0's.
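A sketch of this rule in code (assuming, purely for illustration, global knowledge of the leaf zone codes, which no single node has in the actual protocol):

```python
def backup_zone(code, leaves):
    """Backup zone of `code`, following the rule above. `leaves` is assumed to be
    the set of all leaf zone codes of the zone tree, including empty zones."""
    p, last = code[:-1], code[-1]
    prefix = p + ('0' if last == '1' else '1')   # step into the sibling subtree
    fill = '1' if last == '1' else '0'           # right-most leaf if sibling is on the left, else left-most
    candidate = prefix
    while candidate not in leaves:               # descend until we reach a leaf
        candidate += fill
    return candidate

# Zones of Figure 2: the empty zone 110 has backup zone 1110 (see Section 3.2).
leaves = {"00", "010", "011", "100", "101", "110", "1110", "1111"}
assert backup_zone("110", leaves) == "1110"
assert backup_zone("1111", leaves) == "1110"
```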
3.2 Associating Zones with Nodes
Our definition of a zone is independent of the actual
distribution of nodes in the sensor field, and only depends upon
the geographic extent (the bounding rectangle) of the sensor
field. Now we describe how zones are mapped to nodes.
Conceptually, the sensor field is logically divided into zones
and each zone is assigned to a single node. If the sensor
network were deployed in a grid-like (i.e., very regular) fashion,
then it is easy to see that there exists a k such that each
node maps into a distinct level-k zone. In general, however,
the node placements within a sensor field are likely to be less
regular than the grid. For some k, some zones may be empty
and other zones might have more than one node situated
within them. One alternative would have been to choose
a fixed k for the overall system, and then associate nodes
with the zones they are in (and if a zone is empty, associate
the nearest node with it, for some definition of nearest).
Because it makes our overall query routing system simpler,
we allow nodes in a DIM to map to different-sized zones.
To precisely understand the associations between zones
and nodes, we define the notion of zone ownership. For any
given placement of network nodes, consider a node A. Let
ZA be the largest zone that includes only node A and no
other node. Then, we say that A owns ZA. Notice that this
definition of ownership may leave some sections of the sensor
field un-associated with a node. For example, in Figure 2,
the zone 110 does not contain any nodes and would not have
an owner. To remedy this, for any empty zone Z, we define
the owner to be the owner of backup(Z). In our example,
that empty zone"s owner would also be the node that owns
1110, its backup zone.
Having defined the association between nodes and zones,
the next problem we tackle is: given a node placement, does
there exist a distributed algorithm that enables each node
to determine which zones it owns, knowing only the overall
boundary of the sensor network? In principle, this should
be relatively straightforward, since each node can simply
determine the location of its neighbors, and apply simple
geometric methods to determine the largest zone around it
such that no other node resides in that zone. In practice,
however, communication voids and obstacles make the
algorithm much more challenging. In particular, resolving the
ownership of zones that do not contain any nodes is
complicated. Equally complicated is the case where the zone
of a node is larger than its communication radius and the
node cannot determine the boundaries of its zone by local
communication alone.
Our distributed zone building algorithm defers the
resolution of such zones until when either a query is initiated, or
when an event is inserted. The basic idea behind our
algorithm is that each node tentatively builds up an idea of the
zone it resides in just by communicating with its neighbors
(remembering which boundaries of the zone are undecided
because there is no radio neighbor that can help resolve that
boundary). These undecided boundaries are later resolved
by a GPSR perimeter traversal when data messages are
actually routed.
We now describe the algorithm, and illustrate it using
examples. In our algorithm, each node uses an array bound[0..3]
to maintain the four boundaries of the zone it owns
(remember that in this algorithm, the node only tries to determine
the zone it resides in, not the other zones it might own,
because those zones are devoid of nodes).
Figure 1: A network, where circles represent sensor nodes and dashed lines mark the network boundary.
Figure 2: The zone code and boundaries (the zones in this example are 00, 010, 011, 100, 101, 110, 1110, and 1111).
Figure 3: The corresponding zone tree.
When a node
starts up, each node initializes this array to be the network
boundary, i.e., initially each node assumes its zone contains
the whole network. The zone boundary algorithm now
relies upon GPSR"s beacon messages to learn the locations of
neighbors within radio range. Upon hearing of such a
neighbor, the node calls the algorithm in Figure 4 to update its
zone boundaries and its code accordingly. In this algorithm,
we assume that A is the node at which the algorithm is
executed, ZA is its zone, and a is a newly discovered neighbor
of A. (Procedure Contain(ZA, a) is used to decide if node
a is located within the current zone boundaries of node A).
Using this algorithm, then, each node can independently
and asynchronously decide its own tentative zone based on
the location of its neighbors. Figure 2 illustrates the results
of applying this algorithm for the network in Figure 1.
Figure 3 describes the corresponding zone tree. Each zone
resides at a leaf node and the code of a zone is the path from
the root to the zone if we represent the branch to the left
Build-Zone(a)
1 while Contain(ZA, a)
2 do if length(code(ZA)) mod 2 = 0
3 then new bound ← (bound[0] + bound[1])/2
4 if A.x < new bound
5 then bound[1] ← new bound
6 else bound[0] ← new bound
7 else new bound ← (bound[2] + bound[3])/2
8 if A.y < new bound
9 then bound[3] ← new bound
10 else bound[2] ← new bound
11 Update zone code code(ZA)
Figure 4: Zone Boundary Determination, where A.x
and A.y represent the geographic coordinate of node
A.
Insert-Event(e)
1 c ← Encode(e)
2 if Contain(ZA, c) = true and is Internal() = true
3 then Store e and exit
4 Send-Message(c, e)
Send-Message(c, m)
1 if ∃ neighbor Y, Closer(Y, owner(m), m) = true
2 then addr(m) ← addr(Y )
3 else if length(c) > length(code(m))
4 then Update code(m) and addr(m)
5 source(m) ← caller
6 if is Owner(msg) = true
7 then owner(m) ← caller"s code
8 Send(m)
Figure 5: Inserting an event in a DIM. Procedure
Closer(A, B, m) returns true if code(A) is closer to
code(m) than code(B). source(m) is used to set the source
address of message m.
child by 0 and the branch to the right child by 1. This binary
tree forms the index that we will use in the following event
and query processing procedures.
We see that the zone sizes are different and depend on
the local densities and so are the lengths of zone codes for
different nodes. Notice that in Figure 2, there is an empty
zone whose code should be 110. In this case, if the node in
zone 1111 can only hear the node in zone 1110, it sets its
boundary with the empty zone to undecided, because it did
not hear from any neighboring nodes from that direction.
As we have mentioned before, the undecided boundaries are
resolved using GPSR"s perimeter mode when an event is
inserted, or a query sent. We describe event insertion in the
next step.
Finally, this description does not describe how a node"s
zone codes are adjusted when neighboring nodes fail, or new
nodes come up. We return to this in Section 3.5.
3.3 Inserting an Event
In this section, we describe how events are inserted into
a DIM. There are two algorithms of interest: a consistent
hashing technique for mapping an event to a zone, and a
routing algorithm for storing the event at the appropriate
zone. As we shall see, these two algorithms are inter-related.
3.3.1 Hashing an Event to a Zone
In Section 3.1, we described a recursive tessellation of
the geographic extent of a sensor field. We now describe
a consistent hashing scheme for a DIM that supports range
queries on m distinct attributes.
Let us denote these attributes A1 . . . Am. For simplicity,
assume for now that the depth of every zone in the network
is k, k is a multiple of m, and that this value of k is known
to every node. We will relax this assumption shortly.
Furthermore, for ease of discussion, we assume that all attribute
values have been normalized to be between 0 and 1.
Our hashing scheme assigns a k bit zone code to an event
as follows. For i between 1 and m, if Ai < 0.5, the i-th
bit of the zone code is assigned 0, else 1. For i between
m + 1 and 2m, if Ai−m < 0.25 or Ai−m ∈ [0.5, 0.75), the
i-th bit of the zone is assigned 0, else 1, because the next
level divisions are at 0.25 and 0.75 which divide the ranges
to [0, 0.25), [0.25, 0.5), [0.5, 0.75), and [0.75, 1). We repeat
this procedure until all k bits have been assigned. As an
example, consider event E = ⟨0.3, 0.8⟩. For this event, the
5-bit zone code is code(E) = 01110.
Essentially, our hashing scheme uses the values of the
attributes in round-robin fashion on the zone tree (such as
the one in Figure 3), in order to map an m-attribute event
to a zone code. This is reminiscent of k-d trees [2], but
is quite different from that data structure: zone trees are
spatial embeddings and do not incorporate the re-balancing
algorithms in k-d trees.
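This round-robin assignment can be written compactly; the following sketch (the function name and the bit-extraction shortcut are ours, not the paper's) reproduces the ⟨0.3, 0.8⟩ example above:

```python
def hash_event(attrs, k):
    """Locality-preserving hash (sketch): attribute values in [0, 1) are used in
    round-robin order; at its d-th use, an attribute contributes the d-th bit of
    its binary expansion to the zone code."""
    m = len(attrs)
    bits = []
    for i in range(k):
        a = attrs[i % m]              # round-robin over attributes
        d = i // m + 1                # refinement depth for this attribute
        bits.append(str(int(a * (1 << d)) & 1))
    return ''.join(bits)

# Reproduces the example in the text: E = <0.3, 0.8> hashes to 01110 at k = 5.
assert hash_event([0.3, 0.8], 5) == "01110"
```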
In our design of DIMs, we do not require nodes to have
zone codes of the same length, nor do we expect a node to
know the zone codes of other nodes. Rather, suppose the
encoding node is A and its own zone code is of length kA.
Then, given an event E, node A only hashes E to a zone
code of length kA. We denote the zone code assigned to an
event E by code(E). As we describe below, as the event is
routed, code(E) is refined by intermediate nodes. This lazy
evaluation of zone codes allows different nodes to use
different length zone codes without any explicit coordination.
3.3.2 Routing an Event to its Owner
The aim of hashing an event to a zone code is to store the
event at the node within the network node that owns that
zone. We call this node the owner of the event. Consider
an event E that has just been generated at a node A. After
encoding event E, node A compares code(E) with code(A).
If the two are identical, node A stores event E locally;
otherwise, node A will attempt to route the event to its owner.
To do this, note that code(E) corresponds to some zone
Z , which is A"s current guess for the zone at which event E
should be stored. A now invokes GPSR to send a message
to addr(Z ) (the centroid of Z , Section 3.1). The message
contains the event E, code(E), and the target geographic
location for storing the event. In the message, A also marks
itself as the owner of event E. As we will see later, the
guessed zone Z , the address addr(Z ), and the owner of
E, all of them contained in the message, will be refined by
intermediate forwarding nodes.
GPSR now delivers this message to the next hop towards
addr(Z ) from A. This next hop node (call it B) does not
immediately forward the message. Rather, it attempts to
compute a new zone code for E to get a new code codenew(E).
(Footnote: DIM does not assume that all nodes are homogeneous in
terms of the sensors they have. Thus, in an m-dimensional DIM, a
node that does not possess all m sensors can use NULL values for
the corresponding readings. DIM treats NULL as an extreme value
for range comparisons. As an aside, a network may have many DIM
instances running concurrently.)
B will update the code contained in the message (and also
the geographic destination of the message) if codenew(E) is
longer than the event code in the message. In this manner,
as the event wends its way to its owner, its zone code gets
refined. Now, B compares its own code code(B) against the
owner code owner(E) contained in the incoming message.
If code(B) has a longer match with code(E) than the
current owner owner(E), then B sets itself to be the current
owner of E, meaning that if nobody is eligible to store E,
then B will store the event (we shall see how this happens
next). If B"s zone code does not exactly match code(E), B
will invoke GPSR to deliver E to the next hop.
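The per-hop decision just described can be sketched as follows; the message dictionary and function names are illustrative assumptions, and the update of the geographic destination that accompanies a refined code is omitted:

```python
def common_prefix_len(a, b):
    """Number of leading bits shared by two zone codes."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def refine_event(node_code, new_code, msg):
    """Per-hop refinement (sketch). `msg` carries the event's current zone-code
    guess ('code') and the current owner candidate's code ('owner'); `new_code`
    is the event re-hashed to this node's own code length. Returns True if this
    node's code now exactly matches the event's code (i.e., a storage candidate)."""
    if len(new_code) > len(msg['code']):
        msg['code'] = new_code                          # refine the guess
    if common_prefix_len(node_code, msg['code']) > common_prefix_len(msg['owner'], msg['code']):
        msg['owner'] = node_code                        # better owner candidate
    return node_code == msg['code']

msg = {'code': '01', 'owner': '0'}
assert refine_event('0111', '0111', msg) and msg['owner'] == '0111'
```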
3.3.3 Resolving undecided zone boundaries during
insertion
Suppose that some node, say C, finds itself to be the
destination (or eventual owner) of an event E. It does so by
noticing that code code(C) equals code(E) after locally
recomputing a code for E. In that case, C stores E locally, but
only if all four of C"s zone boundaries are decided. When
this condition holds, C knows for sure that no other nodes
have overlapped zones with it. In this case, we call C an
internal node.
Recall, though, that because the zone discovery algorithm
Section 3.2 only uses information from immediate neighbors,
one or more of C"s boundaries may be undecided. If so, C
assumes that some other nodes have a zone that overlaps
with its own, and sets out to resolve this overlap. To do
this, C now sets itself to be the owner of E and continues
forwarding the message. Here we rely on GPSR"s
perimeter mode routing to probe around the void that causes the
undecided boundary. Since the message starts from C and
is destined for a geographic location near C, GPSR
guarantees that the message will be delivered back to C if no
other nodes will update the information in the message. If
the message comes back to C with itself to be the owner, C
infers that it must be the true owner of the zone and stores
E locally.
If this does not happen, there are two possibilities. The
first is that as the event traverses the perimeter, some
intermediate node, say B whose zone overlaps with C"s marks
itself to be the owner of the event, but otherwise does not
change the event"s zone code. This node also recognizes that
its own zone overlaps with C"s and initiates a message
exchange which causes each of them to appropriately shrink
their zone.
Figures 6 through 8 show an example of this data-driven
zone shrinking. Initially, both node A and node B have
claimed the same zone 0 because they are out of radio range
of each other. Suppose that A inserts an event E = ⟨0.4, 0.8, 0.9⟩.
A encodes E to 0 and claims itself to be the owner of E.
Since A is not an internal node, it sends out E, looking for
other owner candidates of E. Once E gets to node B, B will
see in the message's owner field A's code, which is the same as
its own. B then shrinks its zone from 0 to 01 according to
A"s location which is also recorded in the message and send
a shrink request to A. Upon receiving this request, A also
shrinks its zone from 0 to 00.
A second possibility is if some intermediate node changes
the destination code of E to a more specific value (i.e.,
longer zone code). Let us label this node D. D now tries
to initiate delivery to the centroid of the new zone. This
A
B
0
0
110
100
1111
1110
101
Figure 6: Nodes A and B have claimed the same zone.
A
B
<0.4,0.8,0.9>
Figure 7: An event/query message (filled arrows)
triggers zone shrinking (hollow arrows).
A
B
01
00
110
100
1111
1110
101
Figure 8: The zone layout after shrinking. Now node
A and B have been mapped to different zones.
might result in a new perimeter walk that returns to D (if,
for example, D happens to be geographically closest to the
centroid of the zone). However, D would not be the owner
of the event, which would still be C. In routing to the
centroid of this zone, the message may traverse the perimeter
and return to D. Now D notices that C was the original
owner, so it encapsulates the event and directs it to C. In
case that there indeed is another node, say X, that owns
an overlapped zone with C, X will notice this fact by
finding in the message the same prefix of the code of one of
its zones, but with a different geographic location from its
own. X will shrink its zone to resolve the overlap. If X"s
zone is smaller than or equal to C"s zone, X will also send
a shrink request to C. Once C receives a shrink request,
it will reduce its zone appropriately and fix its undecided
boundary. In this manner, the zone formation process is
resolved on demand in a data-driven way.
There are several interesting effects with respect to
perimeter walking that arise in our algorithm. The first is that
there are some cases where an event insertion might cause
the entire outer perimeter of the network to be traversed.
Figure 6 also works as an example where the outer
perimeter is traversed. Event E inserted by A will eventually be
stored in node B. Before node B stores event E, if B"s
nominal radio range does not intersect the network boundary, it
needs to send out E again as A did, because B in this case
is not an internal node. But if B"s nominal radio range
intersects the network boundary, it then has two choices. It
can assume that there will not be any nodes outside the
network boundary and so B is an internal node. This is an
aggressive approach. On the other hand, B can also make
a conservative decision assuming that there might be some
other nodes it has not heard of yet. B will then force the
message to walk another perimeter before storing it.
In some situations, especially for large zones where the
node that owns a zone is far away from the centroid of the
owned zone, there might exist a small perimeter around the
destination that does not include the owner of the zone. The
event will end up being stored at a different node than the
real owner. In order to deal with this problem, we add an
extra operation in event forwarding, called efficient neighbor
discovery. Before invoking GPSR, a node needs to check if
there exists a neighbor who is eligible to be the real owner of
the event. To do this, a node C, say, needs to know the zone
codes of its neighboring nodes. We deploy GPSR"s
beaconing message to piggyback the zone codes for nodes. So by
simply comparing the event"s code and neighbor"s code, a
node can decide whether there exists a neighbor Y which
is more likely to be the owner of event E. C delivers E
to Y , which simply follows the decision making procedure
discussed above.
3.3.4 Summary and Pseudo-code
In summary, our event insertion procedure is designed to
nicely interact with the zone discovery mechanism, and the
event hashing mechanism. The latter two mechanisms are
kept simple, while the event insertion mechanism uses lazy
evaluation at each hop to refine the event"s zone code, and it
leverages GPSR"s perimeter walking mechanism to fix
undecided zone boundaries. In Section 3.5, we address robustness
of event insertion to packet loss or to node failures.
Figure 5 shows the pseudo-code for inserting and
forwarding an event e. In this pseudo code, we have omitted a
description of the zone shrinking procedure. In the pseudo
code, procedure is Internal() is used to determine if the
caller is an internal node and procedure is Owner() is used
to determine if the caller is more eligible to be the owner of
the event than is currently claimed owner as recorded in the
message. Procedure Send-Message is used to send either
an event message or a query message. If the message
destination address has been changed, the packet source address
needs also to be changed in order to avoid being dropped by
GPSR, since GPSR does not allow a node to see the same
packet in greedy mode twice.
(Footnote: This happens less frequently than for GHTs, where
inserting an event to a location outside the actual (but inside
the nominal) boundary of the network will always invoke an
external perimeter walk.)
3.4 Resolving and Routing Queries
DIMs support both point queries
and range queries.
Routing a point query is identical to routing an event. Thus, the
rest of this section details how range queries are routed.
The key challenge in routing zone queries is brought out
by the following strawman design. If the entire network was
divided evenly into zones of depth k (for some pre-defined
constant k), then the querier (the node issuing the query)
could subdivide a given range query into the relevant
subzones and route individual requests to each of the zones.
This can be inefficient for large range queries and also hard
to implement in our design where zone sizes are not
predefined. Accordingly, we use a slightly different technique
where a range query is initially routed to a zone
corresponding to the entire range, and is then progressively split into
smaller subqueries. We describe this algorithm here.
The first step of the algorithm is to map a range query to
a zone code prefix. Conceptually, this is easy; in a zone tree
(Figure 3), there exists some node which contains the entire
range query in its sub-tree, and none of its children in the
tree do. The initial zone code we choose for the query is the
zone code corresponding to that tree node, and is a prefix of
the zone codes of all zones (note that these zones may not
be geographically contiguous) in the subtree. The querier
computes the zone code of Q, denoted by code(Q) and then
starts routing a query to addr(code(Q)).
Upon receiving a range query Q, a node A (where A is any
node on the query propagation path) divides it into multiple
smaller sized subqueries if there is an overlap between the
zone of A, zone(A) and the zone code associated with Q,
code(Q). Our approach to split a query Q into subqueries
is as follows. If the range of Q"s first attribute contains
the value 0.5, A divides Q into two sub-queries one of whose
first attribute ranges from 0 to 0.5, and the other from 0.5 to
1. Then A decides the half that overlaps with its own zone.
Let"s call it QA. If QA does not exist, then A stops splitting;
otherwise, it continues splitting (using the second attribute
range) and recomputing QA until QA is small enough so
that it completely falls into zone(A) and hence A can now
resolve it. For example, suppose that node A, whose code
is 0110, is to split a range query Q = ⟨0.3 − 0.8, 0.6 − 0.9⟩.
The splitting steps are shown in Figure 9. After splitting,
we obtain three smaller queries q0 = ⟨0.3 − 0.5, 0.6 − 0.75⟩,
q1 = ⟨0.3 − 0.5, 0.75 − 0.9⟩, and q2 = ⟨0.5 − 0.8, 0.6 − 0.9⟩.
This splitting procedure is illustrated in Figure 9 which
also shows the codes of each subquery after splitting.
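The splitting procedure can be sketched in a few lines; this is our own illustrative code, not the paper's implementation, and it reproduces the q0, q1, q2 example above:

```python
def zone_ranges(code, m):
    """Per-attribute [lo, hi) interval covered by a zone code (attributes round-robin)."""
    lo, hi = [0.0] * m, [1.0] * m
    for i, bit in enumerate(code):
        j = i % m
        mid = (lo[j] + hi[j]) / 2.0
        if bit == '0':
            hi[j] = mid
        else:
            lo[j] = mid
    return lo, hi

def split_query(query, code, m):
    """Split a range query against the zone of the node whose zone code is `code`.
    query: list of [lo, hi] attribute ranges, normalized to [0, 1].
    Returns (q_local, forward): the piece falling inside this node's zone (None if
    there is no overlap, in which case the query is forwarded unchanged) and the
    sub-queries to forward. Open/closed boundary cases are glossed over."""
    zlo, zhi = zone_ranges(code, m)
    if any(r[1] <= zlo[j] or r[0] >= zhi[j] for j, r in enumerate(query)):
        return None, [query]
    q = [list(r) for r in query]
    lo, hi = [0.0] * m, [1.0] * m
    forward = []
    for i, bit in enumerate(code):
        j = i % m
        mid = (lo[j] + hi[j]) / 2.0
        if q[j][0] < mid < q[j][1]:          # query straddles this division: peel off the far half
            far = [list(r) for r in q]
            if bit == '0':
                far[j][0], q[j][1] = mid, mid
            else:
                far[j][1], q[j][0] = mid, mid
            forward.append(far)
        if bit == '0':
            hi[j] = mid
        else:
            lo[j] = mid
    return q, forward

# Reproduces the example: node 0110 splitting <0.3-0.8, 0.6-0.9>.
q0, rest = split_query([[0.3, 0.8], [0.6, 0.9]], "0110", 2)
assert q0 == [[0.3, 0.5], [0.6, 0.75]]
assert [[0.5, 0.8], [0.6, 0.9]] in rest and [[0.3, 0.5], [0.75, 0.9]] in rest
```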
A then replies to subquery q0 with data stored locally
and sends subqueries q1 and q2 using the procedure outlined
above. More generally, if node A finds itself to be inside
the zone subtree that maximally covers Q, it will send the
subqueries that resulted from the split. Otherwise, if there
is no overlap between A and Q, then A forwards Q as is (in
this case Q is either the original query, or a product of an
earlier split).
Figure 10 describes the pseudo-code for the zone splitting
algorithm. As shown in the above algorithm, once a
subquery has been recognized as belonging to the caller"s zone,
procedure Resolve is invoked to resolve the subquery and
send a reply to the querier.
(Footnote: By point queries, we mean the equality condition on all
indexed keys. DIM index attributes are not necessarily primary keys.)
Every query message contains
the geographic location of its initiator, so the corresponding
reply message can be delivered directly back to the
initiator. Finally, in the process of query resolution, zones might
shrink similar to shrinkage during inserting. We omit this
in the pseudo code.
3.5 Robustness
Until now, we have not discussed the impact of node
failures and packet losses, or node arrivals and departures on
our algorithms. Packet losses can affect query and event
insertion, and node failures can result in lost data, while node
arrivals and departures can impact the zone structure. We
now discuss how DIMs can be made robust to these kinds
of dynamics.
3.5.1 Maintaining Zones
In previous sections, we described how the zone discovery
algorithm could leave zone boundaries undecided. These
undecided boundaries are resolved during insertion or
querying, using the zone shrinking procedure describe above.
When a new node joins the network, the zone discovery
mechanism (Section 3.2) will cause neighboring zones to
appropriately adjust their zone boundaries. At this time, those
zones can also transfer to the new node those events they
store but which should belong to the new node.
Before a node turns itself off (if this is indeed possible), it
knows that its backup node (Section 3.1) will take over its
zone, and will simply send all its events to its backup node.
Node deletion may also cause zone expansion. In order to
keep the mapping between the binary zone tree"s leaf nodes
and zones, we allow zone expansion to only occur among
sibling zones (Section 3.1). The rule is: if zone(A)"s sibling
zone becomes empty, then A can expand its own zone to
include its sibling zone.
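A minimal sketch of this expansion rule, assuming the node has already determined that its sibling zone is empty:

```python
def try_expand(code, empty_zones):
    """Zone expansion rule (sketch): if the sibling zone is known to be empty,
    take it over by dropping the last bit of the zone code."""
    sibling = code[:-1] + ('0' if code[-1] == '1' else '1')
    return code[:-1] if sibling in empty_zones else code

assert try_expand("1111", {"1110"}) == "111"
assert try_expand("1111", set()) == "1111"
```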
Now, we turn our attention to node failures. Node failures
are just like node deletions except that a failed node does
not have a chance to move its events to another node. But
how does a node decide if its sibling has failed? If the
sibling is within radio range, the absence of GPSR beaconing
messages can detect this. Once it detects this, the node can
expand its zone. A different approach is needed for
detecting siblings who are not within radio range. These are the
cases where two nodes own their zones after exchanging a
shrink message; they do not periodically exchange messages
thereafter to maintain this zone relationship. In this case,
we detect the failure in a data-driven fashion, with obvious
efficiency benefits compared to periodic keepalives. Once a
node B has failed, an event or query message that previously
should have been owned by the failed node will now be
delivered to the node A that owns the empty zone left by node
B. A can see this message because A stands right around
the empty area left by B and is guaranteed to be visited in a
GPSR perimeter traversal. A will set itself to be the owner
of the message, and any node which would have dropped this
message due to a perimeter loop will redirect the message to
A instead. If A"s zone happens to be the sibling of B"s zone,
A can safely expand its own zone and notify its expanded
zone to its neighbors via GPSR beaconing messages.
3.5.2 Preventing Data Loss from Node Failure
The algorithms described above are robust in terms of
zone formation, but node failure can erase data. To avoid
this, DIMs can employ two kinds of replication: local
replication to be resilient to random node failures, and mirror
replication for resilience to concurrent failure of
geographically contiguous nodes.
Mirror replication is conceptually easy. Suppose an event
E has a zone code code(E). Then, the node that inserts
E would store two copies of E; one at the zone denoted
by code(E), and the other at the zone corresponding to the
one"s complement of code(E). This technique essentially
creates a mirror DIM. A querier would need, in parallel, to
query both the original DIM and its mirror since there is no
way of knowing if a collection of nodes has failed. Clearly,
the trade-off here is an approximate doubling of both
insertion and query costs.
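The mirror location is straightforward to derive from the event's code; a one-line sketch:

```python
def mirror_code(code):
    """Zone code of the mirror copy: the one's complement of the event's code (sketch)."""
    return ''.join('1' if b == '0' else '0' for b in code)

assert mirror_code("01110") == "10001"
```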
There exists a far cheaper technique to ensure resilience
to random node failures. Our local replication technique
rests on the observation that, for each node A, there exists
a unique node which will take over its zone when A fails.
This node is defined as the node responsible for A"s zone"s
backup zone (see Section 3.1). The basic idea is that A
replicates each data item it has in this node. We call this
node A"s local replica. Let A"s local replica be B. Often
B will be a radio neighbor of A and can be detected from
GPSR beacons. Sometimes, however, this is not the case,
and B will have to be explicitly discovered.
We use an explicit message for discovering the local replica.
Discovering the local replica is data-driven, and uses a
mechanism similar to that of event insertion. Node A sends a
message whose geographic destination is a random nearby
location chosen by A. The location is close enough to A such
that GPSR will guarantee that the message will be delivered
back to A. In addition, the message has three fields, one for
the zone code of A, code(A), one for the owner owner(A) of
zone(A) which is set to be empty, and one for the geographic
location of owner(A). Then the packet will be delivered in
GPSR perimeter mode. Each node that receives this
message will compare its zone code and code(A) in the message,
and if it is more eligible to be the owner of zone(A) than
the current owner(A) recorded in the message, it will
update the field owner(A) and the corresponding geographic
location. Once the packet comes back to A, it will know the
location of its local replica and can start to send replicas.
In a dense sensor network, the local replica of a node
is usually very near to the node, either its direct neighbor
or 1-2 hops away, so the cost of sending replicas to the local
replica will not dominate the network traffic. However,
a node"s local replica itself may fail. There are two ways to
deal with this situation; periodic refreshes, or repeated
datadriven discovery of local replicas. The former has higher
overhead, but more quickly discovers failed replicas.
3.5.3 Robustness to Packet Loss
Finally, the mechanisms for querying and event insertion
can be easily made resilient to packet loss. For event
insertion, a simple ACK scheme suffices.
Of course, queries and responses can be lost as well. In
this case, there exists an efficient approach for error
recovery. This rests on the observation that the querier knows
which zones fall within its query and should have responded
(we assume that a node that has no data matching a query,
but whose zone falls within the query, responds with a
negative acknowledgment). After a conservative timeout, the
querier can re-issue the queries selectively to these zones.
If DIM cannot get any answers (positive or negative) from
Figure 9: An example of range query splitting. The query ⟨0.3−0.8, 0.6−0.9⟩ is first split into ⟨0.3−0.5, 0.6−0.9⟩ and ⟨0.5−0.8, 0.6−0.9⟩; the former is further split into ⟨0.3−0.5, 0.6−0.75⟩ and ⟨0.3−0.5, 0.75−0.9⟩.
Resolve-Range-Query(Q)
1 Qsub ← nil
2 q0, Qsub ← Split-Query(Q)
3 if q0 = nil
4 then c ← Encode(Q)
5 if Contain(c, code(A)) = true
6 then go to step 12
7 else Send-Message(c, q0)
8 else Resolve(q0)
9 if is Internal() = true
10 then Absorb (q0)
11 else Append q0 to Qsub
12 if Qsub = nil
13 then for each subquery q ∈ Qsub
14 do c ← Encode(q)
15 Send-Message(c, q)
Figure 10: Query resolving algorithm
certain zones after repeated timeouts, it can at least return
the partial query results to the application together with the
information about the zones from which data is missing.
4. DIMS: AN ANALYSIS
In this section, we present a simple analytic performance
evaluation of DIMs, and compare their performance against
other possible approaches for implementing multi-dimensional
range queries in sensor networks. In the next section, we
validate these analyses using detailed packet-level simulations.
Our primary metrics for the performance of a DIM are:
Average Insertion Cost measures the average number of
messages required to insert an event into the network.
Average Query Delivery Cost measures the average
number of messages required to route a query message to
all the relevant nodes in the network.
It does not measure the number of messages required to
transmit responses to the querier; this latter number
depends upon the precise data distribution and is the same
for many of the schemes we compare DIMs against.
In DIMs, event insertion essentially uses geographic
routing. In a dense N-node network where the likelihood of
traversing perimeters is small, the average event insertion
cost is proportional to √N [23].
On the other hand, the query delivery cost depends upon
the size of ranges specified in the query. Recall that our
query delivery mechanism is careful about splitting a query
into sub-queries, doing so only when the query nears the
zone that covers the query range. Thus, when the querier is
far from the queried zone, there are two components to the
query delivery cost. The first, which is proportional to √N,
is the cost to deliver the query near the covering zone. If
within this covering zone, there are M nodes, the message
delivery cost of splitting the query is proportional to M.
The average cost of query delivery depends upon the
distribution of query range sizes. Now, suppose that query sizes
follow some density function f(x); then the average cost of
resolving a query can be approximated by $\int_1^N x f(x)\,dx$. To
give some intuition for the performance of DIMs, we
consider four different forms for f(x): the uniform distribution
where a query range encompassing the entire network is as
likely as a point query; a bounded uniform distribution where
all sizes up to a bound B are equally likely; an algebraic
distribution in which most queries are small, but large queries
are somewhat likely; and an exponential distribution where
most queries are small and large queries are unlikely. In all
our analyses, we make the simplifying assumption that the
size of a query is proportional to the number of nodes that
can answer that query.
For the uniform distribution P(x) ∝ c for some constant c.
If each query size from 1 . . . N is equally likely, the average
query delivery cost of uniformly distributed queries is O(N).
Thus, for uniformly distributed queries, the performance of
DIMs is comparable to that of flooding. However, for the
applications we envision, where nodes within the network
are trying to correlate events, the uniform distribution is
highly unrealistic.
Somewhat more realistic is a situation where all query
sizes are bounded by a constant B. In this case, the average
cost for resolving such a query is approximately
$\int_1^B x f(x)\,dx = O(B)$. Recall now that all queries have to pay an
approximate cost of O(√N) to deliver the query near the covering
zone. Thus, if DIM limited queries to a size proportional to
√N, the average query cost would be O(√N).
The algebraic distribution, where f(x) ∝ x^{-k} for some
constant k between 1 and 2, has an average query resolution
cost given by $\int_1^N x f(x)\,dx = O(N^{2-k})$. In this case, if k >
1.5, the average cost of query delivery is dominated by the
cost to deliver the query to near the covering zone, given by
O(√N).
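As a quick check of this figure, a worked step under the stated assumption that f(x) = c·x^{-k} on [1, N] with 1 < k < 2 (c is a normalization constant):

```latex
\int_{1}^{N} x f(x)\,dx
  = c \int_{1}^{N} x^{1-k}\,dx
  = \frac{c}{2-k}\left(N^{2-k} - 1\right)
  = O\!\left(N^{2-k}\right), \qquad 1 < k < 2 .
```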
Finally, for the exponential distribution, f(x) = c·e^{-cx} for
some constant c, and the average cost is just the mean of the
corresponding distribution, i.e., O(1) for large N.
Asymptotically, then, the cost of the query for the exponential
distribution is dominated by the cost to deliver the query
near the covering zone (O(√N)).
Thus, we see that if queries follow either the bounded
uniform distribution, the algebraic distribution, or the
exponential distribution, the query cost scales as the insertion
cost (for appropriate choice of constants for the bounded
uniform and the algebraic distributions).
How well does the performance of DIMs compare against
alternative choices for implementing multi-dimensional queries?
A simple alternative is called external storage [23], where all
events are stored centrally in a node outside the sensor
network. This scheme incurs an insertion cost of O(√N), and
a zero query cost. However, as [23] points out, such systems
may be impractical in sensor networks since the access link
to the external node becomes a hotspot.
A second alternative implementation would store events
at the node where they are generated. Queries are flooded
throughout the network, and nodes that have matching data
respond. Examples of systems that can be used for this
(although, to our knowledge, these systems do not implement
multi-dimensional range queries) are Directed Diffusion [15]
and TinyDB [17]. The flooding scheme incurs a zero
insertion cost, but an O(N) query cost. It is easy to show that
DIMs outperform flooding as long as the ratio of the number
of insertions to the number of queries is less than √N.
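A back-of-the-envelope version of this break-even point, ignoring constant factors and assuming I insertions and Q queries:

```latex
\underbrace{(I + Q)\sqrt{N}}_{\text{DIM}} \;<\; \underbrace{Q\,N}_{\text{flooding}}
\;\Longleftrightarrow\;
\frac{I}{Q} \;<\; \sqrt{N} - 1 \;\approx\; \sqrt{N}.
```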
A final alternative would be to use a geographic hash table
(GHT [20]). In this approach, attribute values are assumed
to be integers (this is actually quite a reasonable
assumption since attribute values are often quantized), and events
are hashed on some (say, the first) attribute. A range query
is sub-divided into several sub-queries, one for each integer
in the range of the first attribute. Each sub-query is then
hashed to the appropriate location. The nodes that receive a
sub-query only return events that match all other attribute
ranges. In this approach, which we call GHT-R (GHTs for
range queries), the insertion cost is O(√N). Suppose that
the range of the first attribute contains r discrete values.
Then the cost to deliver queries is O(r·√N). Thus,
asymptotically, GHT-Rs perform similarly to DIMs. In practice,
however, the proportionality constants are significantly
different, and DIMs outperform GHT-Rs, as we shall show
using detailed simulations.
5. DIMS: SIMULATION RESULTS
Our analysis gives us some insight into the asymptotic
behavior of various approaches for multi-dimensional range
queries. In this section, we use simulation to compare DIMs
against flooding and GHT-R; this comparison gives us a
more detailed understanding of these approaches for
moderate size networks, and gives us a nuanced view of the
mechanistic differences between some of these approaches.
5.1 Simulation Methodology
We use ns-2 for our simulations. Since DIMs are
implemented on top of GPSR, we first ported an earlier GPSR
implementation to the latest version of ns-2. We modified
the GPSR module to call our DIM implementation when
it receives any data message in transit or when it is about
to drop a message because that message traversed the entire
perimeter. This allows a DIM to modify message zone codes
in flight (Section 3), and determine the actual owner of an
event or query.
In addition to this, we implemented in ns-2 most of the
DIM mechanisms described in Section 3. Of those
mechanisms, the only one we did not implement is mirror
replication. We have implemented selective query retransmission
for resiliency to packet loss, but have left the evaluation of
this mechanism to future work. Our DIM implementation
in ns-2 is 2800 lines of code.
Finally, we implemented GHT-R, our GHT-based
multidimensional range query mechanism in ns-2. This
implementation was relatively straightforward, given that we had
ported GPSR, and modified GPSR to detect the completion
of perimeter mode traversals.
Using this implementation, we conducted a fairly
extensive evaluation of DIM and two alternatives (flooding, and
our GHT-R). For all our experiments, we use uniformly
placed sensor nodes with network sizes ranging from 50
nodes to 300 nodes. Each node has a radio range of 40m.
For the results presented here, each node has on average 20
nodes within its nominal radio range. We have conducted
experiments at other node densities; they are in agreement
with the results presented here.
In all our experiments, each node first generates 3 events
on average (more precisely, for a topology of size N, we have
3N events, and each node is equally likely to generate an
event). We have conducted experiments for three different
event value distributions. Our uniform event distribution
generates 2-dimensional events and, for each dimension,
every attribute value is equally likely. Our normal event
distribution generates 2-dimensional events and, for each
dimension, the attribute value is normally distributed with a
mean corresponding to the mid-point of the attribute value
range. The normal event distribution represents a skewed
data set. Finally, our trace event distribution is a collection
of 4-dimensional events obtained from a habitat monitoring
network. As we shall see, this represents a fairly skewed
data set.
Having generated events, for each simulation we
generate queries such that, on average, each node generates 2
queries. The query sizes are determined using the four size
distributions we discussed in Section 4: uniform,
bounded-uniform, algebraic, and exponential. Once a query size has
been determined, the location of the query (i.e., the actual
boundaries of the zone) are uniformly distributed. For our
GHT-R experiments, the dynamic range of the attributes
had 100 discrete values, but we restricted the query range
for any one attribute to 50 discrete values to allow those
simulations to complete in reasonable time.
Finally, using one set of simulations we evaluate the
efficacy of local replication by turning off random fractions of
nodes and measuring the fidelity of the returned results.
The primary metrics for our simulations are the average
query and insertion costs, as defined in Section 4.
5.2 Results
Although we have examined almost all the combinations
of factors described above, we discuss only the most salient
ones here, for lack of space.
Figure 11 plots the average insertion costs for DIM and
GHT-R (for flooding, of course, the insertion costs are zero).
DIM incurs less per event overhead in inserting events
(regardless of the actual event distribution; Figure 11 shows the
cost for uniformly distributed events). The reason for this is
interesting. In GHT-R, storing almost every event incurs a
perimeter traversal, and storing some events requires
traversing the outer perimeter of the network [20]. By contrast, in
DIM, storing an event incurs a perimeter traversal only when
a node's boundaries are undecided. Furthermore, an
insertion or a query in a DIM can traverse the outer perimeter
(Section 3.3), but less frequently than in GHTs.
Figure 13 plots the average query cost for a bounded
uniform query size distribution. For this graph (and the next)
we use a uniform event distribution, since the event
distribution does not affect the query delivery cost. For this
simulation, our bound was 1/4th the size of the largest possible
query (e.g., a query of the form 0 − 0.5, 0 − 0.5). Even for
this generous query size, DIMs perform quite well (almost
a third the cost of flooding). Notice, however, that
GHT-Rs incur high query cost since almost any query requires as
many subqueries as the width of the first attribute's range.
5
Our metrics are chosen so that the exact number of events
and queries is unimportant for our discussion. Of course,
the overall performance of the system will depend on the
relative frequency of events and queries, as we discuss in
Section 4. Since we don't have realistic ratios for these, we
focus on the microscopic costs, rather than on the overall
system costs.
Figure 11: Average insertion cost for DIM and GHT-R. (Plot of average cost per insertion vs. network size, 50-300 nodes.)
Figure 12: Local replication performance. (Plot of the fraction of replies compared with the non-failure case vs. the fraction of failed nodes (%), with and without local replication.)
Figure 14 plots the average query cost for the exponential
distribution (the average query size for this distribution was
set to be 1/16th the largest possible query). The superior
scaling of DIMs is evident in these graphs. Clearly, this is
the regime in which one might expect DIMs to perform best,
when most of the queries are small and large queries are
relatively rare. This is also the regime in which one would
expect to use multi-dimensional range queries: to perform
relatively tight correlations. As with the bounded uniform
distribution, GHT query cost is dominated by the cost of
sending sub-queries; for DIMs, the query splitting strategy
works quite well in keeping overall query delivery costs low.
Figure 12 describes the efficacy of local replication. To
obtain this figure, we conducted the following experiment.
On a 100-node network, we inserted a number of events
uniformly distributed throughout the network, then issued
a query covering the entire network and recorded the
answers. Knowing the expected answers for this query, we
then successively removed a fraction f of nodes randomly,
and re-issued the same query. The figure plots the fraction
of expected responses actually received, with and without
replication. As the graph shows, local replication performs
well for random failures, returning almost 90% of the
responses when up to 30% of the nodes have failed
simultaneously.6 In the absence of local replication, of course, when
30% of the nodes fail, the response rate is only 70%, as one
would expect.
6
In practice, the performance of local replication is likely to
be much better than this. Assuming a node and its replica
don't simultaneously fail often, a node will almost always
detect a replica failure and re-replicate, leading to near 100%
response rates.
Figure 13: Average query cost with a bounded uniform query distribution. (Plot of average cost per query vs. network size, 50-300 nodes, for DIM, flooding, and GHT-R.)
Figure 14: Average query cost with an exponential query distribution. (Plot of average cost per query vs. network size, 50-300 nodes, for DIM, flooding, and GHT-R.)
We note that DIMs (as currently designed) are not
perfect. When the data is highly skewed (as it was for our trace
data set from the habitat monitoring application, where most
of the event values fell within 10% of the attribute's
range), a few DIM nodes will clearly become the bottleneck.
This is depicted in Figure 15, which shows that for DIMs,
and GHT-Rs, the maximum number of transmissions at any
network node (the hotspots) is rather high. (For less skewed
data distributions, and reasonable query size distributions,
the hotspot curves for all three schemes are comparable.)
This is a standard problem that database indices have
dealt with by tree re-balancing. In our case, simpler
solutions might be possible (and we discuss this in Section 7).
However, our use of the trace data demonstrates that
DIMs work for events which have more than two dimensions.
Increasing the number of dimensions does not noticeably
degrade DIMs' query cost (omitted for lack of space).
Also omitted are experiments examining the impact of
several other factors, as they do not affect our conclusions
in any way. As we expected, DIMs are comparable in
performance to flooding when all sizes of queries are equally
likely. For an algebraic distribution of query sizes, the
relative performance is close to that for the exponential
distribution. For normally distributed events, the insertion costs
are comparable to those for the uniform distribution.
Figure 15: Hotspot usage. (Plot of the maximum hotspot on the trace data set vs. network size, 50-300 nodes, for DIM, flooding, and GHT-R.)
Figure 16: Software architecture of DIM over GPSR. (Block diagram: the DIM subsystem, consisting of a Zone Manager, Query Router, Query Processor, Event Manager, and Event Router, sits on an event-driven/thread-based GPSR interface; the GPSR daemon provides Greedy Forwarding, Perimeter Forwarding, Beaconing, and a Neighbor List Manager over a MoteNIC (Mica radio) or IP socket (802.11b/Ethernet) lower interface.)
Finally, we note that in all our evaluations we have only
used list queries (those that request all events matching the
specified range). We expect that for summary queries (those
that expect an aggregate over matching events), the overall
cost of DIMs could be lower because the matching data are
likely to be found in one or a small number of zones. We
leave an understanding of this to future work. Also left to
future work is a detailed understanding of the impact of
location error on DIM's mechanisms. Recent work [22] has
examined the impact of imprecise location information on
other data-centric storage mechanisms such as GHTs, and
found that there exist relatively simple fixes to GPSR that
ameliorate the effects of location error.
6. IMPLEMENTATION
We have implemented DIMs on a Linux platform suitable
for experimentation on PDAs and PC-104 class machines.
To implement DIMs, we had to develop and test an
independent implementation of GPSR. Our GPSR implementation
is full-featured, while our DIM implementation has most of
the algorithms discussed in Section 3; some of the robustness
extensions have only simpler variants implemented.
The software architecture of the DIM/GPSR system is shown
in Figure 16. The entire system (about 5000 lines of code)
is event-driven and multi-threaded. The DIM subsystem
consists of six logical components: zone management, event
maintenance, event routing, query routing, query
processing, and GPSR interactions. The GPSR system is
implemented as a user-level daemon process. Applications are
executed as clients. For the DIM subsystem, the GPSR module
provides several extensions: it exports information about
neighbors, and provides callbacks during packet forwarding
and perimeter-mode termination.
Figure 17: Number of events received for different query sizes. (Plot of the average number of received responses per query for query sizes 0.25x0.25, 0.50x0.50, 0.75x0.75, and 1.0x1.0.)
Figure 18: Query distribution cost. (Plot of the total number of messages for sending the query, for query sizes 0.25x0.25, 0.50x0.50, 0.75x0.75, and 1.0x1.0.)
We tested our implementation on a testbed consisting of 8
PC-104 class machines. Each of these boxes runs Linux and
uses a Mica mote (attached through a serial cable) for
communication. These boxes are laid out in an office building
with a total spatial separation of over a hundred feet. We
manually measured the locations of these nodes relative to
some coordinate system and configured the nodes with their
location. The network topology is approximately a chain.
On this testbed, we inserted queries and events from a
single designated node. Our events have two attributes which
span all combinations of the four values [0, 0.25, 0.75, 1]
(sixteen events in all). Our queries span four sizes, returning 1,
4, 9 and 16 events respectively.
Figure 17 plots the number of events received for different
sized queries. It might appear that we received fewer events
than expected, but this graph doesn't count the events that
were already stored at the querier. With that adjustment,
the number of responses matches our expectation. Finally,
Figure 18 shows the total number of messages required for
different query sizes on our testbed.
While these experiments do not reveal as much about the
performance range of DIMs as our simulations do, they
nevertheless serve as proof-of-concept for DIMs. Our next step
in the implementation is to port DIMs to the Mica motes,
and integrate them into the TinyDB [17] sensor database
engine on motes.
7. CONCLUSIONS
In this paper, we have discussed the design and evaluation
of a distributed data structure called DIM for efficiently
resolving multi-dimensional range queries in sensor networks.
Our design of DIMs relies upon a novel locality-preserving
hash inspired by early work in database indexing, and is
built upon GPSR. We have a working prototype, both of
GPSR and DIM, and plan to conduct larger scale
experiments in the future.
There are several interesting future directions that we
intend to pursue. One is adaptation to skewed data
distributions, since these can cause storage and transmission
hotspots. Unlike traditional database indices that re-balance
trees upon data insertion, in sensor networks it might be
feasible to re-structure the zones on a much larger timescale
after obtaining a rough global estimate of the data
distribution. Another direction is support for node heterogeneity
in the zone construction process; nodes with larger storage
space assert larger-sized zones for themselves. A third is
support for efficient resolution of existential queries: whether
there exists an event matching a multi-dimensional range.
Acknowledgments
This work benefited greatly from discussions with Fang Bian,
Hui Zhang and other ENL lab members, as well as from
comments provided by the reviewers and our shepherd Feng
Zhao.
8. REFERENCES
[1] J. Aspnes and G. Shah. Skip Graphs. In Proceedings of
14th Annual ACM-SIAM Symposium on Discrete
Algorithms (SODA), Baltimore, MD, January 2003.
[2] J. L. Bentley. Multidimensional Binary Search Trees Used
for Associative Searching. Communications of the ACM,
18(9):475-484, 1975.
[3] P. Bonnet, J. E. Gerhke, and P. Seshadri. Towards Sensor
Database Systems. In Proceedings of the Second
International Conference on Mobile Data Management,
Hong Kong, January 2001.
[4] I. Clarke, O. Sandberg, B. Wiley, and T. W. Hong. Freenet:
A Distributed Anonymous Information Storage and
Retrieval System. In Designing Privacy Enhancing
Technologies: International Workshop on Design Issues in
Anonymity and Unobservability. Springer, New York, 2001.
[5] D. Comer. The Ubiquitous B-tree. ACM Computing
Surveys, 11(2):121-137, 1979.
[6] R. A. Finkel and J. L. Bentley. Quad Trees: A Data
Structure for Retrieval on Composite Keys. Acta
Informatica, 4:1-9, 1974.
[7] D. Ganesan, D. Estrin, and J. Heidemann. DIMENSIONS:
Why do we need a new Data Handling architecture for
Sensor Networks? In Proceedings of the First Workshop on
Hot Topics In Networks (HotNets-I), Princeton, NJ,
October 2002.
[8] A. Gionis, P. Indyk, and R. Motwani. Similarity Search in
High Dimensions via Hashing. In Proceedings of the 25th
VLDB conference, Edinburgh, Scotland, September 1999.
[9] R. Govindan, J. Hellerstein, W. Hong, S. Madden,
M. Franklin, and S. Shenker. The Sensor Network as a
Database. Technical Report 02-771, Computer Science
Department, University of Southern California, September
2002.
[10] B. Greenstein, D. Estrin, R. Govindan, S. Ratnasamy, and
S. Shenker. DIFS: A Distributed Index for Features in
Sensor Networks. In Proceedings of 1st IEEE International
Workshop on Sensor Network Protocols and Applications,
Anchorage, AK, May 2003.
[11] A. Guttman. R-trees: A Dynamic Index Structure for
Spatial Searching. In Proceedings of the ACM SIGMOD,
Boston, MA, June 1984.
[12] M. Harren, J. M. Hellerstein, R. Huebsch, B. T. Loo,
S. Shenker, and I. Stoica. Complex Queries in DHT-based
Peer-to-Peer Networks. In P. Druschel, F. Kaashoek, and
A. Rowstron, editors, Proceedings of 1st International
Workshop on Peer-to-Peer Systems (IPTPS'02), volume
2429 of LNCS, page 242, Cambridge, MA, March 2002.
Springer-Verlag.
[13] P. Indyk and R. Motwani. Approximate Nearest Neighbors:
Towards Removing the Curse of Dimensionality. In
Proceedings of the 30th Annual ACM Symposium on
Theory of Computing, Dallas, Texas, May 1998.
[14] P. Indyk, R. Motwani, P. Raghavan, and S. Vempala.
Locality-preserving Hashing in Multidimensional Spaces. In
Proceedings of the 29th Annual ACM symposium on
Theory of Computing, pages 618 - 625, El Paso, Texas,
May 1997. ACM Press.
[15] C. Intanagonwiwat, R. Govindan, and D. Estrin. Directed
Diffusion: A Scalable and Robust Communication
Paradigm for Sensor Networks. In Proceedings of the Sixth
Annual ACM/IEEE International Conference on Mobile
Computing and Networking (Mobicom 2000), Boston, MA,
August 2000.
[16] B. Karp and H. T. Kung. GPSR: Greedy Perimeter
Stateless Routing for Wireless Networks. In Proceedings of
the Sixth Annual ACM/IEEE International Conference on
Mobile Computing and Networking (Mobicom 2000),
Boston, MA, August 2000.
[17] S. Madden, M. Franklin, J. Hellerstein, and W. Hong. The
Design of an Acquisitional Query Processor for Sensor
Networks. In Proceedings of ACM SIGMOD, San Diego,
CA, June 2003.
[18] S. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong.
TAG: a Tiny AGgregation Service for ad-hoc Sensor
Networks. In Proceedings of 5th Annual Symposium on
Operating Systems Design and Implementation (OSDI),
Boston, MA, December 2002.
[19] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and
S. Shenker. A Scalable Content-Addressable Network. In
Proceedings of the ACM SIGCOMM, San Diego, CA,
August 2001.
[20] S. Ratnasamy, B. Karp, L. Yin, F. Yu, D. Estrin,
R. Govindan, and S. Shenker. GHT: A Geographic Hash
Table for Data-Centric Storage. In Proceedings of the First
ACM International Workshop on Wireless Sensor
Networks and Applications, Atlanta, GA, September 2002.
[21] H. Samet. Spatial Data Structures. In W. Kim, editor,
Modern Database Systems: The Object Model,
Interoperability and Beyond, pages 361-385. Addison
Wesley/ACM, 1995.
[22] K. Sead, A. Helmy, and R. Govindan. On the Effect of
Localization Errors on Geographic Face Routing in Sensor
Networks. In Under submission, 2003.
[23] S. Shenker, S. Ratnasamy, B. Karp, R. Govindan, and
D. Estrin. Data-Centric Storage in Sensornets. In Proc.
ACM SIGCOMM Workshop on Hot Topics In Networks,
Princeton, NJ, 2002.
[24] I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and
H. Balakrishnan. Chord: A Scalable Peer-To-Peer Lookup
Service for Internet Applications. In Proceedings of the
ACM SIGCOMM, San Diego, CA, August 2001.
[25] F. Ye, H. Luo, J. Cheng, S. Lu, and L. Zhang. A Two-Tier
Data Dissemination Model for Large-scale Wireless Sensor
Networks. In Proceedings of the Eighth Annual ACM/IEEE
International Conference on Mobile Computing and
Networking (Mobicom'02), Atlanta, GA, September 2002.
75 | datacentric storage system;multi-dimensional range query;multidimensional range query;event insertion;querying cost;query flooding;indexing technique;dim;distributed datum structure;asymptotic behavior;locality-preserving geographic hash;sensor network;geographic routing;centralized index;normal event distribution;efficient correlation;distributed index |
train_C-49 | Evaluating Opportunistic Routing Protocols with Large Realistic Contact Traces | Traditional mobile ad-hoc network (MANET) routing protocols assume that contemporaneous end-to-end communication paths exist between data senders and receivers. In some mobile ad-hoc networks with a sparse node population, an end-to-end communication path may break frequently or may not exist at any time. Many routing protocols have been proposed in the literature to address the problem, but few were evaluated in a realistic opportunistic network setting. We use simulation and contact traces (derived from logs in a production network) to evaluate and compare five existing protocols: direct-delivery, epidemic, random, PRoPHET, and Link-State, as well as our own proposed routing protocol. We show that the direct delivery and epidemic routing protocols suffer either low delivery ratio or high resource usage, and other protocols make tradeoffs between delivery ratio and resource usage. | 1. INTRODUCTION
Mobile opportunistic networks are one kind of delay-tolerant
network (DTN) [6]. Delay-tolerant networks provide service
despite long link delays or frequent link breaks. Long link delays
happen in networks with communication between nodes at a great
distance, such as interplanetary networks [2]. Link breaks are caused
by nodes moving out of range, environmental changes, interference
from other moving objects, radio power-offs, or failed nodes. For
us, mobile opportunistic networks are those DTNs with sparse node
population and frequent link breaks caused by power-offs and the
mobility of the nodes.
Mobile opportunistic networks have received increasing interest
from researchers. In the literature, these networks include mobile
sensor networks [25], wild-animal tracking networks [11],
pocket-switched networks [8], and transportation networks [1, 14]. We
expect to see more opportunistic networks when the
one-laptop-per-child (OLPC) project [18] starts rolling out inexpensive
laptops with wireless networking capability for children in developing
countries, where often no infrastructure exists. Opportunistic
networking is one promising approach for those children to exchange
information.
One fundamental problem in opportunistic networks is how to
route messages from their source to their destination. Mobile
opportunistic networks differ from the Internet in that disconnections
are the norm instead of the exception. In mobile opportunistic
networks, communication devices can be carried by people [4],
vehicles [1] or animals [11]. Some devices can form a small mobile
ad-hoc network when the nodes move close to each other. But a
node may frequently be isolated from other nodes. Note that
traditional Internet routing protocols and ad-hoc routing protocols, such
as AODV [20] or DSDV [19], assume that a contemporaneous
end-to-end path exists, and thus fail in mobile opportunistic networks.
Indeed, there may never exist an end-to-end path between two given
devices.
In this paper, we study protocols for routing messages between
wireless networking devices carried by people. We assume that
people send messages to other people occasionally, using their
devices; when no direct link exists between the source and the
destination of the message, other nodes may relay the message to the
destination. Each device represents a unique person (it is out of the
scope of this paper when a device may be carried by multiple
people). Each message is destined for a specific person and thus for
a specific node carried by that person. Although one person may
carry multiple devices, we assume that the sender knows which
device is the best to receive the message. We do not consider
multicast or geocast in this paper.
Many routing protocols have been proposed in the literature.
Few of them were evaluated in realistic network settings, or even in
realistic simulations, due to the lack of any realistic people
mobility model. Random walk or random way-point mobility models are
often used to evaluate the performance of those routing protocols.
Although these synthetic mobility models have received extensive
interest by mobile ad-hoc network researchers [3], they do not
reflect people"s mobility patterns [9]. Realising the limitations of
using random mobility models in simulations, a few researchers have
studied routing protocols in mobile opportunistic networks with
realistic mobility traces. Chaintreau et al. [5] theoretically analyzed
the impact of routing algorithms over a model derived from a
realistic mobility data set. Su et al. [22] simulated a set of routing
protocols in a small experimental network. Those studies help
researchers better understand the theoretical limits of opportunistic
networks, and the routing protocol performance in a small network
(20-30 nodes).
Deploying and experimenting with large-scale mobile opportunistic
networks is difficult, we too resort to simulation. Instead of
using a complex mobility model to mimic people"s mobility patterns,
we used mobility traces collected in a production wireless
network at Dartmouth College to drive our simulation. Our
messagegeneration model, however, was synthetic.
To the best of our knowledge, we are the first to simulate the
effect of routing protocols in a large-scale mobile opportunistic
network, using realistic contact traces derived from real traces of
a production network with more than 5, 000 users.
Using realistic contact traces, we evaluate the performance of
three naive routing protocols (direct-delivery, epidemic, and
random) and two prediction-based routing protocols, PRoPHET [16]
and Link-State [22]. We also propose a new prediction-based
routing protocol, and compare it to the above in our evaluation.
2. ROUTING PROTOCOL
A routing protocol is designed for forwarding messages from one
node (source) to another node (destination). Any node may
generate messages for any other node, and may carry messages destined
for other nodes. In this paper, we consider only messages that are
unicast (single destination).
DTN routing protocols could be described in part by their
transfer probability and replication probability; that is, when one node
meets another node, what is the probability that a message should
be transferred and, if so, whether the sender should retain its copy.
Two extremes are the direct-delivery protocol and the epidemic
protocol. The former transfers with probability 1 when the node
meets the destination, 0 for others, and no replication. The latter
uses transfer probability 1 for all nodes and unlimited replication.
Both these protocols have their advantages and disadvantages. All
other protocols are between the two extremes.
First, we define the notion of contact between two nodes. Then
we describe five existing protocols before presenting our own
proposal.
A contact is defined as a period of time during which two nodes
have the opportunity to communicate. Although we are aware that
wireless technologies differ, we assume that a node can reliably
detect the beginning and end time of a contact with nearby nodes.
A node may be in contact with several other nodes at the same time.
The contact history of a node is a sequence of contacts with other
nodes. Node i has a contact history Hi(j), for each other node j,
which denotes the historical contacts between node i and node j.
We record the start and end time for each contact; however, the last
contacts in the node's contact history may not have ended.
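As an illustration of this bookkeeping, the minimal sketch below (class and method names are ours, not from the paper) stores H_i(j) as a list of [start, end] intervals and leaves the end open for a contact that has not yet ended.

```python
from collections import defaultdict

class ContactHistory:
    """Per-node record of contacts with each peer: H_i(j) is a list of
    [start_time, end_time] pairs; end_time is None while the contact is ongoing."""
    def __init__(self):
        self.history = defaultdict(list)   # peer id -> list of [start, end]

    def contact_up(self, peer, now):
        self.history[peer].append([now, None])

    def contact_down(self, peer, now):
        if self.history[peer] and self.history[peer][-1][1] is None:
            self.history[peer][-1][1] = now

    def contacts(self, peer):
        return self.history[peer]

# Example: node i sees node j from t=10 to t=40, then again at t=90 (still up).
h = ContactHistory()
h.contact_up("j", 10); h.contact_down("j", 40); h.contact_up("j", 90)
print(h.contacts("j"))   # [[10, 40], [90, None]]
```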
2.1 Direct Delivery Protocol
In this simple protocol, a message is transmitted only when the
source node can directly communicate with the destination node
of the message. In mobile opportunistic networks, however, the
probability for the sender to meet the destination may be low, or
even zero.
2.2 Epidemic Routing Protocol
The epidemic routing protocol [23] floods messages into the
network. The source node sends a copy of the message to every node
that it meets. The nodes that receive a copy of the message also
send a copy of the message to every node that they meet.
Eventually, a copy of the message arrives at the destination of the message.
This protocol is simple, but may use significant resources;
excessive communication may drain each node"s battery quickly.
Moreover, since each node keeps a copy of each message, storage is not
used efficiently, and the capacity of the network is limited.
At a minimum, each node must expire messages after some amount
of time or stop forwarding them after a certain number of hops.
After a message expires, the message will not be transmitted and will
be deleted from the storage of any node that holds the message.
An optimization to reduce the communication cost is to
transfer index messages before transferring any data message. The
index messages contain IDs of messages that a node currently holds.
Thus, by examining the index messages, a node only transfers
messages that are not yet contained on the other nodes.
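A small sketch of the index-message optimization, under the simplifying assumption that a message store is a dictionary keyed by message ID; the function names are ours.

```python
def index_of(store):
    """Index message: the IDs of the messages a node currently holds."""
    return set(store.keys())

def epidemic_exchange(store_a, store_b):
    """Symmetric exchange driven by index messages: only missing messages move."""
    missing_at_b = index_of(store_a) - index_of(store_b)
    missing_at_a = index_of(store_b) - index_of(store_a)
    for mid in missing_at_b:
        store_b[mid] = store_a[mid]      # transfer only what B lacks
    for mid in missing_at_a:
        store_a[mid] = store_b[mid]      # and only what A lacks
    return len(missing_at_a) + len(missing_at_b)   # data messages actually sent

a = {"m1": "payload1", "m2": "payload2"}
b = {"m2": "payload2", "m3": "payload3"}
print(epidemic_exchange(a, b), sorted(a), sorted(b))
```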
2.3 Random Routing
An obvious approach between the above two extremes is to
select a transfer probability between 0 and 1 to forward messages at
each contact. We use a simple replication strategy that allows only
the source node to make replicas, and limits the replication to a
specific number of copies. The message has some chance of
being transferred to a highly mobile node, and thus may have a better
chance to reach its destination before the message expires.
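A possible reading of this protocol in code; the transfer probability, the replica budget, and the choice that relays hand off without keeping a copy are illustrative assumptions, not values from the paper.

```python
import random

def random_forward(msg, sender, receiver, p_transfer=0.5, max_copies=4):
    """Forward msg with a fixed probability; only the source may replicate,
    and only up to max_copies replicas."""
    if receiver == msg["dst"]:
        return "deliver"
    if random.random() >= p_transfer:
        return "skip"
    if sender == msg["src"]:
        if msg["copies_made"] < max_copies:
            msg["copies_made"] += 1
            return "replicate"          # source keeps its own copy
        return "skip"
    return "hand_off"                    # relays pass the copy on (one plausible reading)

msg = {"src": "A", "dst": "Z", "copies_made": 0}
print(random_forward(msg, "A", "B"))
```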
2.4 PRoPHET Protocol
PRoPHET [16] is a Probabilistic Routing Protocol using History
of past Encounters and Transitivity to estimate each node's delivery
probability for each other node. When node i meets node j, the
delivery probability of node i for j is updated by
pij = (1 − pij)p0 + pij, (1)
where p0 is an initial probability, a design parameter for a given
network. Lindgren et al. [16] chose 0.75, as did we in our
evaluation. When node i does not meet j for some time, the delivery
probability decreases by
pij = α^k pij, (2)
where α is the aging factor (α < 1), and k is the number of time
units since the last update.
The PRoPHET protocol exchanges index messages as well as
delivery probabilities. When node i receives node j"s delivery
probabilities, node i may compute the transitive delivery probability
through j to z with
piz = piz + (1 − piz)pijpjzβ, (3)
where β is a design parameter for the impact of transitivity; we
used β = 0.25 as did Lindgren [16].
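The three update rules translate directly into code. The sketch below uses the parameter values reported here and in Table 1 (p0 = 0.75, β = 0.25, α = 0.98 for PRoPHET); the function names and the nested-dictionary representation are ours.

```python
P0, ALPHA, BETA = 0.75, 0.98, 0.25   # values used in this evaluation

def on_encounter(p, i, j):
    """Equation (1): reinforce i's delivery probability for j on contact."""
    p[i][j] = (1 - p[i].get(j, 0.0)) * P0 + p[i].get(j, 0.0)

def age(p, i, j, k):
    """Equation (2): decay when i has not met j for k time units."""
    p[i][j] = (ALPHA ** k) * p[i].get(j, 0.0)

def transitive(p, i, j, z):
    """Equation (3): learn about z through j's delivery probabilities."""
    piz = p[i].get(z, 0.0)
    p[i][z] = piz + (1 - piz) * p[i].get(j, 0.0) * p[j].get(z, 0.0) * BETA

p = {"i": {}, "j": {"z": 0.6}}
on_encounter(p, "i", "j")          # i meets j: p_ij rises to 0.75
transitive(p, "i", "j", "z")       # i learns about z through j
print(round(p["i"]["j"], 3), round(p["i"]["z"], 3))
```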
2.5 Link-State Protocol
Su et al. [22] use a link-state approach to estimate the weight of
each path from the source of a message to the destination. They
use the median inter-contact duration or exponentially aged
intercontact duration as the weight on links. The exponentially aged
inter-contact duration of node i and j is computed by
wij = αwij + (1 − α)I, (4)
where I is the new inter-contact duration and α is the aging factor.
Nodes share their link-state weights when they can communicate
with each other, and messages are forwarded to the node that have
the path with the lowest link-state weight.
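A sketch of the link-state bookkeeping: Equation (4) ages the inter-contact duration, and a standard Dijkstra computes the lowest-weight path. Treating the path weight as the sum of link weights, and the α = 0.9 aging factor from Table 1, are assumptions of this sketch.

```python
import heapq

ALPHA = 0.9   # aging factor (Table 1)

def update_weight(w, i, j, new_intercontact):
    """Equation (4): exponentially aged inter-contact duration as the link weight.
    The first observation simply initializes the weight (our assumption)."""
    old = w.get((i, j), new_intercontact)
    w[(i, j)] = ALPHA * old + (1 - ALPHA) * new_intercontact
    w[(j, i)] = w[(i, j)]

def path_weight(w, src, dst):
    """Dijkstra over the (assumed additive) link-state weights."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for (a, b), cost in w.items():
            if a == u and d + cost < dist.get(b, float("inf")):
                dist[b] = d + cost
                heapq.heappush(pq, (dist[b], b))
    return float("inf")

w = {}
update_weight(w, "A", "B", 30.0); update_weight(w, "B", "C", 10.0)
print(path_weight(w, "A", "C"))   # 40.0 on these first observations
```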
3. TIMELY-CONTACT PROBABILITY
We also use historical contact information to estimate the
probability of meeting other nodes in the future. But our method differs
in that we estimate the contact probability within a period of time.
For example, what is the contact probability in the next hour?
Neither PRoPHET nor Link-State considers time in this way.
One way to estimate the timely-contact probability is to use the
ratio of the total contact duration to the total time. However, this
approach does not capture the frequency of contacts. For example,
one node may have a long contact with another node, followed by
a long non-contact period. A third node may have a short contact
with the first node, followed by a short non-contact period. Using
the above estimation approach, both examples would have similar
contact probability. In the second example, however, the two nodes
have more frequent contacts.
We design a method to capture the contact frequency of mobile
nodes. For this purpose, we assume that even short contacts are
sufficient to exchange messages.1
The probability for node i to meet node j is computed by the
following procedure. We divide the contact history Hi(j) into a
sequence of n periods of ΔT starting from the start time (t0) of the
first contact in history Hi(j) to the current time. We number each
of the n periods from 0 to n − 1, then check each period. If node
i had any contact with node j during a given period m, which is
[t0 + mΔT, t0 + (m + 1)ΔT), we set the contact status Im to be
1; otherwise, the contact status Im is 0. The probability p^(0)_ij that
node i meets node j in the next ΔT can be estimated as the average
of the contact status in prior intervals:
p^(0)_ij = (1/n) Σ_{m=0}^{n−1} Im. (5)
To adapt to the change of contact patterns, and reduce the storage
space for contact histories, a node may discard old history contacts;
in this situation, the estimate would be based on only the retained
history.
The above probability is the direct contact probability of two
nodes. We are also interested in the probability that we may be
able to pass a message through a sequence of k nodes. We define
the k-order probability inductively,
p^(k)_ij = p^(0)_ij + Σ_{α} p^(0)_iα p^(k−1)_αj, (6)
where α is any node other than i or j.
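Both estimates follow mechanically from the contact history; the sketch below buckets contacts into ΔT-wide periods for Equation (5) and applies the inductive definition of Equation (6). Function names and the toy probability table are ours.

```python
def direct_probability(contacts, t0, now, dT):
    """Equation (5): fraction of dT-wide periods since t0 that contain a contact."""
    n = max(1, int((now - t0) // dT))
    hit = [0] * n
    for start, end in contacts:                 # intervals from H_i(j)
        end = now if end is None else end       # still-open contact
        first = max(0, int((start - t0) // dT))
        last = min(n - 1, int((end - t0) // dT))
        for m in range(first, last + 1):
            hit[m] = 1
    return sum(hit) / n

def k_order_probability(p0, i, j, k, nodes):
    """Equation (6): p^(k)_ij = p^(0)_ij + sum over relays a of p^(0)_ia * p^(k-1)_aj."""
    if k == 0:
        return p0[i][j]
    total = p0[i][j]
    for a in nodes:
        if a not in (i, j):
            total += p0[i][a] * k_order_probability(p0, a, j, k - 1, nodes)
    return total

p0 = {"i": {"i": 0, "j": 0.1, "a": 0.5},
      "a": {"i": 0.5, "j": 0.4, "a": 0},
      "j": {"i": 0.1, "a": 0.4, "j": 0}}
print(k_order_probability(p0, "i", "j", 1, ["i", "a", "j"]))  # 0.1 + 0.5*0.4 = 0.3
```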
3.1 Our Routing Protocol
We first consider the case of a two-hop path, that is, with only
one relay node. We consider two approaches: either the receiving
neighbor decides whether to act as a relay, or the source decides
which neighbors to use as relay.
3.1.1 Receiver Decision
Whenever a node meets other nodes, they exchange all their
messages (or as above, index messages). If the destination of a
message is the receiver itself, the message is delivered. Otherwise, if
the probability of delivering the message to its destination through
this receiver node within ΔT is greater than or equal to a certain
threshold, the message is stored in the receiver's storage to forward
1
In our simulation, however, we accurately model the
communication costs and some short contacts will not succeed in transfer of
all messages.
to the destination. If the probability is less than the threshold, the
receiver discards the message. Notice that our protocol replicates
the message whenever a good-looking relay comes along.
3.1.2 Sender Decision
To make decisions, a sender must have the information about its
neighbors" contact probability with a message"s destination.
Therefore, meta-data exchange is necessary.
When two nodes meet, they exchange a meta-message,
containing an unordered list of node IDs for which the sender of the
meta-message has a contact probability greater than the threshold.
After receiving a meta-message, a node checks whether it has
any message that is destined to its neighbor, or to a node in the node
list of the neighbor's meta-message. If it has, it sends a copy of the
message.
When a node receives a message, if the destination of the
message is the receiver itself, the message is delivered. Otherwise, the
message is stored in the receiver's storage for forwarding to the
destination.
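A compact sketch of the sender-decision variant: the meta-message is the set of destinations whose contact probability clears the threshold (0.01 in Table 1), and the sender copies any message destined to the neighbor itself or to a node in that set. Names are illustrative.

```python
THRESHOLD = 0.01   # probability threshold (Table 1)

def meta_message(probs):
    """Unordered set of node IDs reachable with probability >= threshold."""
    return {dst for dst, p in probs.items() if p >= THRESHOLD}

def sender_decide(store, neighbor, neighbor_meta):
    """Return the messages that should be copied to this neighbor."""
    out = []
    for msg in store:
        if msg["dst"] == neighbor or msg["dst"] in neighbor_meta:
            out.append(msg)              # sender keeps its own copy (replication)
    return out

store = [{"id": 1, "dst": "C"}, {"id": 2, "dst": "B"}]
meta_from_B = meta_message({"C": 0.2, "D": 0.002})
print([m["id"] for m in sender_decide(store, "B", meta_from_B)])   # [1, 2]
```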
3.1.3 Multi-node Relay
When we use more than two hops to relay a message, each node
needs to know the contact probabilities along all possible paths to
the message destination.
Every node keeps a contact probability matrix, in which each cell
pij is a contact probability between two nodes i and j. Each node
i computes its own contact probabilities (row i) with other nodes
using Equation (5) whenever the node ends a contact with other
nodes. Each row of the contact probability matrix has a version
number; the version number for row i is only increased when node i
updates the matrix entries in row i. Other matrix entries are updated
through exchange with other nodes when they meet.
When two nodes i and j meet, they first exchange their contact
probability matrices. Node i compares its own contact matrix with
node j's matrix. If node j's matrix has a row l with a higher version
number, then node i replaces its own row l with node j's row l.
Likewise node j updates its matrix. After the exchange, the two
nodes will have identical contact probability matrices.
Next, if a node has a message to forward, the node estimates
its neighboring node"s order-k contact probability to contact the
destination of the message using Equation (6). If p
(k)
ij is above a
threshold, or if j is the destination of the message, node i will send
a copy of the message to node j.
All the above effort serves to determine the transfer probability
when two nodes meet. The replication decision is orthogonal to
the transfer decision. In our implementation, we always replicate.
Although PRoPHET [16] and Link-State [22] do no replication, as
described, we added replication to those protocols for better
comparison to our protocol.
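The multi-node relay bookkeeping can be sketched as a versioned row-per-node matrix that is merged on contact, with forwarding gated by the neighbor's order-k probability of reaching the destination; the class layout below and the order-1 shortcut are our simplifications.

```python
class ProbMatrix:
    """Contact-probability matrix: one row per node; each row carries a version."""
    def __init__(self, me):
        self.me = me
        self.rows = {me: {}}          # node -> {peer: probability}
        self.versions = {me: 0}

    def update_own_row(self, probs):
        self.rows[self.me] = dict(probs)
        self.versions[self.me] += 1   # only the owner bumps its row's version

    def merge(self, other):
        """Adopt any row with a strictly higher version number."""
        for node, ver in other.versions.items():
            if ver > self.versions.get(node, -1):
                self.rows[node] = dict(other.rows[node])
                self.versions[node] = ver

def order1(rows, i, j):
    """Order-1 estimate per Equation (6), using the locally known rows."""
    total = rows.get(i, {}).get(j, 0.0)
    for a, pa in rows.get(i, {}).items():
        if a not in (i, j):
            total += pa * rows.get(a, {}).get(j, 0.0)
    return total

def should_forward(matrix, neighbor, dst, threshold=0.01):
    if neighbor == dst:
        return True
    return order1(matrix.rows, neighbor, dst) >= threshold

a, b = ProbMatrix("A"), ProbMatrix("B")
b.update_own_row({"C": 0.3})
a.merge(b)
print(should_forward(a, "B", "C"))   # True: B's direct probability to C is 0.3
```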
4. EVALUATION RESULTS
We evaluate and compare the results of direct delivery, epidemic,
random, PRoPHET, Link-State, and timely-contact routing
protocols.
4.1 Mobility traces
We use real mobility data collected at Dartmouth College.
Dartmouth College has collected association and disassociation
messages from devices on its wireless network since
spring 2001 [13]. Each message records the wireless card MAC
address, the time of association/disassociation, and the name of the
access point. We treat each unique MAC address as a node. For
more information about Dartmouth's network and the data
collection, see previous studies [7, 12].
Our data are not contacts in a mobile ad-hoc network. We can
approximate contact traces by assuming that two users can
communicate with each other whenever they are associated with the same
access point. Chaintreau et al. [5] used Dartmouth data traces and
made the same assumption to theoretically analyze the impact of
human mobility on opportunistic forwarding algorithms. This
assumption may not be accurate,2
but it is a good first approximation.
In our simulation, we imagine the same clients and same mobility
in a network with no access points. Since our campus has full WiFi
coverage, we assume that the location of access points had little
impact on users' mobility.
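This approximation can be derived mechanically from the association log: two MAC addresses are in contact whenever their association intervals at the same access point overlap in time. The record format below is a simplified stand-in for the real data.

```python
def contacts_from_associations(records):
    """records: (mac, ap, t_assoc, t_disassoc) tuples.
    Returns (mac1, mac2, start, end) contacts where two devices' intervals
    overlap at the same access point."""
    by_ap = {}
    for mac, ap, t0, t1 in records:
        by_ap.setdefault(ap, []).append((mac, t0, t1))
    contacts = []
    for ap, sessions in by_ap.items():
        for i in range(len(sessions)):
            for j in range(i + 1, len(sessions)):
                (m1, a0, a1), (m2, b0, b1) = sessions[i], sessions[j]
                start, end = max(a0, b0), min(a1, b1)
                if m1 != m2 and start < end:
                    contacts.append((m1, m2, start, end))
    return contacts

log = [("mac1", "ap7", 0, 100), ("mac2", "ap7", 50, 200), ("mac3", "ap9", 0, 10)]
print(contacts_from_associations(log))   # [('mac1', 'mac2', 50, 100)]
```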
We simulated one full month of trace data (November 2003)
taken from CRAWDAD [13], with 5, 142 users. Although
predictionbased protocols require prior contact history to estimate each node"s
delivery probability, our preliminary results show that the
performance improvement of warming-up over one month of trace was
marginal. Therefore, for simplicity, we show the results of all
protocols without warming-up.
4.2 Simulator
We developed a custom simulator.3
Since we used contact traces
derived from real mobility data, we did not need a mobility model
and omitted physical and link-layer details for node discovery. We
were aware that the time for neighbor discovery in different
wireless technologies varies from less than one second to several
seconds. Furthermore, connection establishment also takes time, such
as DHCP. In our simulation, we assumed the nodes could discover
and connect to each other instantly when they were associated with the
same AP. To accurately model communication costs, however, we
simulated some MAC-layer behaviors, such as collision.
The default settings of the network of our simulator are listed in
Table 1, using the values recommended by other papers [22, 16].
The message probability was the probability of generating
messages, as described in Section 4.3. The default transmission
bandwidth was 11 Mb/s. When one node tried to transmit a message, it
first checked whether any nearby node was transmitting. If it was,
the node backed off a random number of slots. Each slot was 1
millisecond, and the maximum number of backoff slots was 30. The
size of messages was uniformly distributed between 80 bytes and
1024 bytes. The hop count limit (HCL) was the maximum number
of hops before a message should stop forwarding. The time to live
(TTL) was the maximum duration that a message may exist before
expiring. The storage capacity was the maximum space that a node
can use for storing messages. For our routing method, we used a
default prediction window ΔT of 10 hours and a probability
threshold of 0.01. The replication factor r was not limited by default, so
the source of a message transferred the messages to any other node
that had a contact probability with the message destination higher
than the probability threshold.
2
Two nodes may not have been able to directly communicate while
they were at two far sides of an access point, or two nodes may
have been able to directly communicate if they were between two
adjacent access points.
3
We tried to use a general network simulator (ns2), which was
extremely slow when simulating a large number of mobile nodes (in
our case, more than 5000 nodes), and provided unnecessary detail
in modeling lower-level network protocols.
Table 1: Default Settings of the Simulation
Parameter Default value
message probability 0.001
bandwidth 11 Mb/s
transmission slot 1 millisecond
max backoff slots 30
message size 80-1024 bytes
hop count limit (HCL) unlimited
time to live (TTL) unlimited
storage capacity unlimited
prediction window ΔT 10 hours
probability threshold 0.01
contact history length 20
replication always
aging factor α 0.9 (0.98 PRoPHET)
initial probability p0 0.75 (PRoPHET)
transitivity impact β 0.25 (PRoPHET)
Figure 1: Movements and contacts during each hour of the day. (Plot of the number of occurrences per hour for movements and contacts.)
4.3 Message generation
After each contact event in the contact trace, we generated a
message with a given probability; we choose a source node and a
destination node randomly using a uniform distribution across nodes
seen in the contact trace up to the current time. When there were
more contacts during a certain period, there was a higher likelihood
that a new message was generated in that period. This correlation
is not unreasonable, since there were more movements during the
day than during the night, and so the number of contacts. Figure 1
shows the statistics of the numbers of movements and the numbers
of contacts during each hour of the day, summed across all users
and all days. The plot shows a clear diurnal activity pattern. The
activities reached lowest around 5am and peaked between 4pm and
5pm. We assume that in some applications, network traffic exhibits
similar patterns, that is, people send more messages during the day,
too.
Messages expire after a TTL. We did not use proactive methods
to notify nodes of the delivery of messages, so that the messages can
be removed from storage.
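The message workload can be reproduced with a few lines: after each contact event, a message is generated with the default probability of 0.001, its source and destination are drawn uniformly from the nodes seen so far, and its expiry follows from the TTL. Names and the example loop are illustrative.

```python
import random

MSG_PROB = 0.001

def maybe_generate(contact_time, seen_nodes, ttl_hours, msg_prob=MSG_PROB):
    """Called after each contact event; seen_nodes are nodes observed so far."""
    if len(seen_nodes) < 2 or random.random() >= msg_prob:
        return None
    src, dst = random.sample(sorted(seen_nodes), 2)
    expires = None if ttl_hours is None else contact_time + ttl_hours * 3600
    return {"src": src, "dst": dst, "created": contact_time, "expires": expires}

def expired(msg, now):
    return msg["expires"] is not None and now > msg["expires"]

random.seed(1)
seen = {"A", "B", "C"}
msgs = [m for t in range(5000) if (m := maybe_generate(t, seen, ttl_hours=24))]
print(len(msgs), msgs[:1])
```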
4.4 Metrics
We define a set of metrics that we use in evaluating routing
protocols in opportunistic networks:
• delivery ratio, the ratio of the number of messages delivered
to the number of total messages generated.
• message transmissions, the total number of messages
transmitted during the simulation across all nodes.
• meta-data transmissions, the total number of meta-data units
transmitted during the simulation across all nodes.
• message duplications, the number of times a message copy
occurred, due to replication.
• delay, the duration between a message"s generation time and
the message"s delivery time.
• storage usage, the max and mean of maximum storage (bytes)
used across all nodes.
4.5 Results
Here we compare simulation results of the six routing protocols.
Figure 2: Delivery ratio (log scale), by message time-to-live (TTL), for the direct, random, prediction, state, prophet, and epidemic protocols. The direct and random protocols for one-hour TTL had delivery ratios that were too low to be visible in the plot.
Figure 2 shows the delivery ratio of all the protocols, with
different TTLs. (In all the plots in the paper, prediction stands for our
method, state stands for the Link-State protocol, and prophet
represents PRoPHET.) Although we had 5,142 users in the
network, the direct-delivery and random protocols had low delivery
ratios (note the log scale). Even for messages with an unlimited
lifetime, only 59 out of 2077 messages were delivered during this
one-month simulation. The delivery ratio of epidemic routing was
the best. The three prediction-based approaches had low delivery
ratio, compared to epidemic routing. Although our method was
slightly better than the other two, the advantage was marginal.
The high delivery ratio of epidemic routing came with a price:
excessive transmissions. Figure 3 shows the number of message
data transmissions. The number of message transmissions of
epidemic routing was more than 10 times higher than for the
predictionbased routing protocols. Obviously, the direct delivery protocol
had the lowest number of message transmissions - the number of
message delivered. Among the three prediction-based methods,
the PRoPHET transmitted fewer messages, but had comparable
delivery-ratio as seen in Figure 2.
Figure 4 shows that epidemic and all prediction-based methods
had substantial meta-data transmissions, though epidemic routing
had relatively more, with shorter TTLs. Because epidemic
protocol transmitted messages at every contact, in turn, more nodes had
messages that required meta-data transmission during contact. The
direct-delivery and random protocols had no meta-data
transmissions.
In addition to its message transmissions and meta-data
transmissions, the epidemic routing protocol also had excessive message
duplications, spreading replicas of messages over the network.
Figure 3: Message transmissions (log scale), by message time-to-live (TTL), for the direct, random, prediction, state, prophet, and epidemic protocols.
Figure 4: Meta-data transmissions (log scale), by message time-to-live (TTL). Direct and random protocols had no meta-data transmissions.
Figure 5 shows that epidemic routing had one or two orders more
duplication than the prediction-based protocols. Recall that the
direct-delivery and random protocols did not replicate, thus had no data
duplications.
Figure 6 shows both the median and mean delivery delays. All
protocols show similar delivery delays in both mean and median
measures for medium TTLs, but differ for long and short TTLs.
With a 100-hour TTL, or unlimited TTL, epidemic routing had the
shortest delays. The direct-delivery had the longest delay for
unlimited TTL, but it had the shortest delay for the one-hour TTL.
The results seem contrary to our intuition: the epidemic routing
protocol should be the fastest routing protocol since it spreads
messages all over the network. Indeed, the figures show only the delay
time for delivered messages. For direct delivery, random, and the
probability-based routing protocols, relatively few messages were
delivered for short TTLs, so many messages expired before they
could reach their destination; those messages had infinite delivery
delay and were not included in the median or mean measurements.
For longer TTLs, more messages were delivered even for the
direct-delivery protocol. The statistics of longer TTLs for comparison are
more meaningful than those of short TTLs.
Since our message generation rate was low, the storage usage
was also low in our simulation. Figure 7 shows the maximum
and average of maximum volume (in KBytes) of messages stored
in each node. The epidemic routing had the most storage usage.
Figure 5: Message duplications (log scale), by message time-to-live (TTL). Direct and random protocols had no message duplications.
Figure 6: Median and mean delays in minutes (log scale), by message time-to-live (TTL).
The message time-to-live parameter was the big factor affecting the
storage usage for epidemic and prediction-based routing protocols.
We studied the impact of different parameters of our
prediction-based routing protocol. Our prediction-based protocol was
sensitive to several parameters, such as the probability threshold and the
prediction window ΔT. Figure 8 shows the delivery ratios when
we used different probability thresholds. (The leftmost value 0.01
is the value used for the other plots.) A higher probability threshold
limited the transfer probability, so fewer messages were delivered.
It also required fewer transmissions as shown in Figure 9. With
a larger prediction window, we got a higher contact probability.
Thus, for the same probability threshold, we had a slightly higher
delivery ratio as shown in Figure 10, and a few more transmissions
as shown in Figure 11.
5. RELATED WORK
In addition to the protocols that we evaluated in our simulation,
several other opportunistic network routing protocols have been
proposed in the literature. We did not implement and evaluate these
routing protocols, because either they require domain-specific
information (location information) [14, 15], assume certain mobility
patterns [17], or present orthogonal approaches [10, 24] to other
routing protocols.
Figure 7: Max and mean of maximum storage usage (KB) across all nodes (log scale), by message time-to-live (TTL).
Figure 8: Probability threshold impact on delivery ratio of timely-contact routing.
LeBrun et al. [14] propose a location-based delay-tolerant
network routing protocol. Their algorithm assumes that every node
knows its own position, and the destination is stationary at a known
location. A node forwards data to a neighbor only if the
neighbor is closer to the destination than its own position. Our protocol
does not require knowledge of the nodes" locations, and learns their
contact patterns.
Leguay et al. [15] use a high-dimensional space to represent a
mobility pattern, then routes messages to nodes that are closer to
the destination node in the mobility pattern space. Location
information of nodes is required to construct mobility patterns.
Musolesi et al. [17] propose an adaptive routing protocol for
intermittently connected mobile ad-hoc networks. They use a Kalman
filter to compute the probability that a node delivers messages. This
protocol assumes group mobility and cloud connectivity, that is,
nodes move as a group, and among this group of nodes a
contemporaneous end-to-end connection exists for every pair of nodes. When
two nodes are in the same connected cloud, DSDV [19] routing is
used.
Network coding also draws much interest from DTN research.
Erasure-coding [10, 24] explores coding algorithms to reduce
message replicas. The source node replicates a message m times, then
uses a coding scheme to encode them in one big message.
After replicas are encoded, the source divides the big message into k
Figure 9: Probability threshold impact on message transmission of timely-contact routing. (Plot of the number of messages transmitted, in millions, vs. probability threshold.)
Figure 10: Prediction window impact on delivery ratio of timely-contact routing (semi-log scale).
blocks of the same size, and transmits a block to each of the first k
encountered nodes. If m of the blocks are received at the
destination, the message can be restored, where m < k. In a uniformly
distributed mobility scenario, the delivery probability increases
because the probability that the destination node meets m relays is
greater than it meets k relays, given m < k.
6. SUMMARY
We propose a prediction-based routing protocol for
opportunistic networks. We evaluate the performance of our protocol using
realistic contact traces, and compare to five existing routing
protocols.
Our simulation results show that direct delivery had the
lowest delivery ratio, the fewest data transmissions, and no meta-data
transmission or data duplication. Direct delivery is suitable for
devices that require an extremely low power consumption. The
random protocol increased the chance of delivery for messages
otherwise stuck at some low mobility nodes. Epidemic routing delivered
the most messages. The excessive transmissions, and data
duplication, however, consume more resources than portable devices may
be able to provide.
None of these protocols (direct-delivery, random and epidemic
routing) are practical for real deployment of opportunistic networks,
Figure 11: Prediction window impact on message transmission of timely-contact routing (semi-log scale). (Plot of the number of messages transmitted, in millions, vs. prediction window in hours.)
because they either had an extremely low delivery ratio, or had an
extremely high resource consumption. The prediction-based
routing protocols had a delivery ratio more than 10 times better than
that for direct-delivery and random routing, and fewer
transmissions and less storage usage than epidemic routing. They also had
fewer data duplications than epidemic routing.
All the prediction-based routing protocols that we have
evaluated had similar performance. Our method had a slightly higher
delivery ratio, but more transmissions and higher storage usage.
There are many parameters for prediction-based routing protocols,
however, and different parameters may produce different results.
Indeed, there is an opportunity for some adaptation; for example,
high priority messages may be given higher transfer and
replication probabilities to increase the chance of delivery and reduce the
delay, or a node with infrequent contact may choose to raise its
transfer probability.
We only studied the impact of predicting peer-to-peer contact
probability for routing in unicast messages. In some applications,
context information (such as location) may be available for the
peers. One may also consider other messaging models, for
example, where messages are sent to a location, such that every node at
that location will receive a copy of the message. Location
prediction [21] may be used to predict nodes" mobility, and to choose as
relays those nodes moving toward the destined location.
Research on routing in opportunistic networks is still in its early
stage. Many other issues of opportunistic networks, such as
security and privacy, are mainly left open. We anticipate studying these
issues in future work.
7. ACKNOWLEDGEMENT
This research is a project of the Center for Mobile
Computing and the Institute for Security Technology Studies at Dartmouth
College. It was supported by DoCoMo Labs USA, the
CRAWDAD archive at Dartmouth College (funded by NSF CRI Award
0454062), NSF Infrastructure Award EIA-9802068, and by Grant
number 2005-DD-BX-1091 awarded by the Bureau of Justice
Assistance. Points of view or opinions in this document are those of
the authors and do not represent the official position or policies of
any sponsor.
8. REFERENCES
[1] John Burgess, Brian Gallagher, David Jensen, and Brian Neil
Levine. MaxProp: routing for vehicle-based
disruption-tolerant networks. In Proceedings of the 25th
IEEE International Conference on Computer
Communications (INFOCOM), April 2006.
[2] Scott Burleigh, Adrian Hooke, Leigh Torgerson, Kevin Fall,
Vint Cerf, Bob Durst, Keith Scott, and Howard Weiss.
Delay-tolerant networking: An approach to interplanetary
Internet. IEEE Communications Magazine, 41(6):128-136,
June 2003.
[3] Tracy Camp, Jeff Boleng, and Vanessa Davies. A survey of
mobility models for ad-hoc network research. Wireless
Communication & Mobile Computing (WCMC): Special
issue on Mobile ad-hoc Networking: Research, Trends and
Applications, 2(5):483-502, 2002.
[4] Andrew Campbell, Shane Eisenman, Nicholas Lane,
Emiliano Miluzzo, and Ronald Peterson. People-centric
urban sensing. In IEEE Wireless Internet Conference, August
2006.
[5] Augustin Chaintreau, Pan Hui, Jon Crowcroft, Christophe
Diot, Richard Gass, and James Scott. Impact of human
mobility on the design of opportunistic forwarding
algorithms. In Proceedings of the 25th IEEE International
Conference on Computer Communications (INFOCOM),
April 2006.
[6] Kevin Fall. A delay-tolerant network architecture for
challenged internets. In Proceedings of the 2003 Conference
on Applications, Technologies, Architectures, and Protocols
for Computer Communications (SIGCOMM), August 2003.
[7] Tristan Henderson, David Kotz, and Ilya Abyzov. The
changing usage of a mature campus-wide wireless network.
In Proceedings of the 10th Annual International Conference
on Mobile Computing and Networking (MobiCom), pages
187-201, September 2004.
[8] Pan Hui, Augustin Chaintreau, James Scott, Richard Gass,
Jon Crowcroft, and Christophe Diot. Pocket switched
networks and human mobility in conference environments.
In ACM SIGCOMM Workshop on Delay Tolerant
Networking, pages 244-251, August 2005.
[9] Ravi Jain, Dan Lelescu, and Mahadevan Balakrishnan.
Model T: an empirical model for user registration patterns in
a campus wireless LAN. In Proceedings of the 11th Annual
International Conference on Mobile Computing and
Networking (MobiCom), pages 170-184, 2005.
[10] Sushant Jain, Mike Demmer, Rabin Patra, and Kevin Fall.
Using redundancy to cope with failures in a delay tolerant
network. In Proceedings of the 2005 Conference on
Applications, Technologies, Architectures, and Protocols for
Computer Communications (SIGCOMM), pages 109-120,
August 2005.
[11] Philo Juang, Hidekazu Oki, Yong Wang, Margaret
Martonosi, Li-Shiuan Peh, and Daniel Rubenstein.
Energy-efficient computing for wildlife tracking: Design
tradeoffs and early experiences with ZebraNet. In the Tenth
International Conference on Architectural Support for
Programming Languages and Operating Systems, October
2002.
[12] David Kotz and Kobby Essien. Analysis of a campus-wide
wireless network. Wireless Networks, 11:115-133, 2005.
[13] David Kotz, Tristan Henderson, and Ilya Abyzov.
CRAWDAD data set dartmouth/campus.
http://crawdad.cs.dartmouth.edu/dartmouth/campus,
December 2004.
[14] Jason LeBrun, Chen-Nee Chuah, Dipak Ghosal, and Michael
Zhang. Knowledge-based opportunistic forwarding in
vehicular wireless ad-hoc networks. In IEEE Vehicular
Technology Conference, pages 2289-2293, May 2005.
[15] Jeremie Leguay, Timur Friedman, and Vania Conan.
Evaluating mobility pattern space routing for DTNs. In
Proceedings of the 25th IEEE International Conference on
Computer Communications (INFOCOM), April 2006.
[16] Anders Lindgren, Avri Doria, and Olov Schelen.
Probabilistic routing in intermittently connected networks. In
Workshop on Service Assurance with Partial and Intermittent
Resources (SAPIR), pages 239-254, 2004.
[17] Mirco Musolesi, Stephen Hailes, and Cecilia Mascolo.
Adaptive routing for intermittently connected mobile ad-hoc
networks. In IEEE International Symposium on a World of
Wireless Mobile and Multimedia Networks, pages 183-189,
June 2005. extended version.
[18] OLPC. One laptop per child project. http://laptop.org.
[19] C. E. Perkins and P. Bhagwat. Highly dynamic
destination-sequenced distance-vector routing (DSDV) for
mobile computers. Computer Communication Review, pages
234-244, October 1994.
[20] C. E. Perkins and E. M. Royer. ad-hoc on-demand distance
vector routing. In IEEE Workshop on Mobile Computing
Systems and Applications, pages 90-100, February 1999.
[21] Libo Song, David Kotz, Ravi Jain, and Xiaoning He.
Evaluating next-cell predictors with extensive Wi-Fi mobility
data. IEEE Transactions on Mobile Computing,
5(12):1633-1649, December 2006.
[22] Jing Su, Ashvin Goel, and Eyal de Lara. An empirical
evaluation of the student-net delay tolerant network. In
International Conference on Mobile and Ubiquitous Systems
(MobiQuitous), July 2006.
[23] Amin Vahdat and David Becker. Epidemic routing for
partially-connected ad-hoc networks. Technical Report
CS-2000-06, Duke University, July 2000.
[24] Yong Wang, Sushant Jain, Margaret Martonosia, and Kevin
Fall. Erasure-coding based routing for opportunistic
networks. In ACM SIGCOMM Workshop on Delay Tolerant
Networking, pages 229-236, August 2005.
[25] Yu Wang and Hongyi Wu. DFT-MSN: the delay fault tolerant
mobile sensor network for pervasive information gathering.
In Proceedings of the 25th IEEE International Conference on
Computer Communications (INFOCOM), April 2006.
42 | contact trace;opportunistic network;route;epidemic protocol;frequent link break;end-to-end path;prophet;transfer probability;replication strategy;mobile opportunistic network;delay-tolerant network;random mobility model;direct-delivery protocol;simulation;realistic mobility trace;past encounter and transitivity history;history of past encounter and transitivity;unicast;routing protocol |
train_C-50 | CenWits: A Sensor-Based Loosely Coupled Search and Rescue System Using Witnesses | This paper describes the design, implementation and evaluation of a search and rescue system called CenWits. CenWits uses several small, commonly-available RF-based sensors, and a small number of storage and processing devices. It is designed for search and rescue of people in emergency situations in wilderness areas. A key feature of CenWits is that it does not require a continuously connected sensor network for its operation. It is designed for an intermittently connected network that provides only occasional connectivity. It makes a judicious use of the combined storage capability of sensors to filter, organize and store important information, combined battery power of sensors to ensure that the system remains operational for longer time periods, and intermittent network connectivity to propagate information to a processing center. A prototype of CenWits has been implemented using Berkeley Mica2 motes. The paper describes this implementation and reports on the performance measured from it. | 1. INTRODUCTION
Search and rescue of people in emergency situations in a timely manner is an extremely important service. It has
been difficult to provide such a service due to lack of timely
information needed to determine the current location of a
person who may be in an emergency situation. With the
emergence of pervasive computing, several systems [12, 19,
1, 5, 6, 4, 11] have been developed over the last few years
that make use of small devices such as cell phones,
sensors, etc. All these systems require a connected network
via satellites, GSM base stations, or mobile devices. This
requirement severely limits their applicability, particularly
in remote wilderness areas where maintaining a connected
network is very difficult.
For example, a GSM transmitter has to be in the range
of a base station to transmit. As a result, it cannot operate
in most wilderness areas. While a satellite transmitter is
the only viable solution in wilderness areas, it is typically
expensive and cumbersome. Furthermore, a line of sight is required to transmit to a satellite, and that makes it
infeasible to stay connected in narrow canyons, large cities with
skyscrapers, rain forests, or even when there is a roof or some
other obstruction above the transmitter, e.g. in a car. An
RF transmitter has a relatively smaller range of
transmission. So, while an in-situ sensor is cheap as a single unit, it is
expensive to build a large network that can provide
connectivity over a large wilderness area. In a mobile environment
where sensors are carried by moving people, power-efficient
routing is difficult to implement and maintain over a large
wilderness area. In fact, building an adhoc sensor network
using only the sensors worn by hikers is nearly impossible
due to a relatively small number of sensors spread over a
large wilderness area.
In this paper, we describe the design, implementation
and evaluation of a search and rescue system called
CenWits (Connection-less Sensor-Based Tracking System
Using Witnesses). CenWits is comprised of mobile, in-situ
sensors that are worn by subjects (people, wild animals, or
inanimate objects), access points (AP) that collect
information from these sensors, and GPS receivers and location
points (LP) that provide location information to the
sensors. A subject uses GPS receivers (when it can connect to
a satellite) and LPs to determine its current location. The
key idea of CenWits is that it uses a concept of witnesses
to convey a subject"s movement and location information
to the outside world. This averts a need for maintaining a
connected network to transmit location information to the
outside world. In particular, there is no need for
expensive GSM or satellite transmitters, or maintaining an adhoc
network of in-situ sensors in CenWits.
CenWits employs several important mechanisms to
address the key problem of resource constraints (low signal
strength, low power and limited memory) in sensors. In
particular, it makes a judicious use of the combined
storage capability of sensors to filter, organize and store
important information, combined battery power of sensors to
ensure that the system remains operational for longer time
periods, and intermittent network connectivity to propagate
information to a processing center.
The problem of low signal strengths (short range RF
communication) is addressed by avoiding a need for maintaining
a connected network. Instead, CenWits propagates the
location information of sensors using the concept of witnesses
through an intermittently connected network. As a result,
this system can be deployed in remote wilderness areas, as
well as in large urban areas with skyscrapers and other tall
structures. Also, this makes CenWits cost-effective. A
subject only needs to wear light-weight and low-cost sensors
that have GPS receivers but no expensive GSM or satellite
transmitters. Furthermore, since there is no need for a
connected sensor network, there is no need to deploy sensors in
very large numbers.
The problem of limited battery life and limited memory of
a sensor is addressed by incorporating the concepts of groups
and partitions. Groups and partitions allow sensors to stay
in sleep or receive modes most of the time. Using groups and
partitions, the location information collected by a sensor can
be distributed among several sensors, thereby reducing the
amount of memory needed in one sensor to store that
information. In fact, CenWits provides an adaptive tradeoff
between memory and power consumption of sensors. Each
sensor can dynamically adjust its power and memory
consumption based on its remaining power or available memory.
It has amply been noted that the strength of sensor
networks comes from the fact that several sensor nodes can
be distributed over a relatively large area to construct a
multihop network. This paper demonstrates that important
large-scale applications can be built using sensors by
judiciously integrating the storage, communication and
computation capabilities of sensors. The paper describes
important techniques to combine memory, transmission and
battery power of many sensors to address resource constraints
in the context of a search and rescue application. However,
these techniques are quite general. We discuss several other
sensor-based applications that can employ these techniques.
While CenWits addresses the general location tracking
and reporting problem in a wide-area network, there are
two important differences from the earlier work done in this
area. First, unlike earlier location tracking solutions,
CenWits does not require a connected network. Second, unlike
earlier location tracking solutions, CenWits does not aim for
a very high accuracy of localization. Instead, the main goal
is to provide an approximate, small area where search and
rescue efforts can be concentrated.
The rest of this paper is organized as follows. In Section
2, we overview some of the recent projects and technologies
related to movement and location tracking, and search and
rescue systems. In Section 3, we describe the overall
architecture of CenWits, and provide a high-level description of
its functionality. In the next section, Section 4, we discuss
power and memory management in CenWits. To simplify
our presentation, we will focus on a specific application of
tracking lost/injured hikers in all these sections. In Section 5, we provide a preliminary estimate of sensor battery lifetime. In Section 6, we describe a prototype implementation of CenWits
and present performance measured from this
implementation. We discuss how the ideas of CenWits can be used
to build several other applications in Section 7. Finally, in
Section 8, we discuss some related issues and conclude the
paper.
2. RELATED WORK
A survey of location systems for ubiquitous computing
is provided in [11]. A location tracking system for adhoc
sensor networks, which uses anchor sensors as a reference to gain location information and spread it out to outer nodes, is proposed in [17]. Most location tracking systems in adhoc sensor networks are designed to benefit geographic-aware routing; they do not fit our purposes well. The well-known active badge system [19] lets a user carry a badge around. An infrared sensor in the room can detect the presence of a badge and determine the location and identification of the person. This is a useful system for indoor environments, where GPS does not work. Localization using 802.11 devices is probably the cheapest solution for indoor position tracking [8]. Because of the popularity and low cost of 802.11 devices, several business solutions based on this technology have been developed [1].
A system that combines two mature technologies and is viable in suburban areas, where a user can see a clear sky and has GSM cellular reception at the same time, is currently available [5]. This system receives GPS signals from a satellite to locate itself, draws the location on a map, and sends the location information through the GSM network to others who are interested in the user's location. A very simple system to monitor children consists of an RF transmitter and a receiver. The system alerts the holder of the receiver when the transmitter is about to run out of range [6].
Personal Locator Beacons (PLBs) have been used for avalanche rescue for years. A skier carries an RF transmitter that emits beacons periodically, so that a rescue team can find his/her location based on the strength of the RF signal. A luxury version of the PLB combines a GPS receiver and a COSPAS-SARSAT satellite transmitter that can transmit the user's location in latitude and longitude to the rescue team whenever an accident happens [4]. However, the device either is turned on all the time, resulting in fast battery drain, or must be turned on after the accident to function.
Another related technology in widespread use today is the
ONSTAR system [3], typically used in several luxury cars.
In this system, a GPS unit provides position information,
and a powerful transmitter relays that information via
satellite to a customer service center. Designed for emergencies,
the system can be triggered either by the user with the push
of a button, or by a catastrophic accident. Once the system
has been triggered, a human representative attempts to gain
communication with the user via a cell phone built as an in-car device. If contact cannot be made, emergency services
are dispatched to the location provided by GPS. Like PLBs,
this system has several limitations. First, it is heavy and
expensive. It requires a satellite transmitter and a connected
network. If connectivity with either the GPS network or a
communication satellite cannot be maintained, the system
fails. Unfortunately, these are common obstacles
encountered in deep canyons, narrow streets in large cities, parking
garages, and a number of other places.
The Lifetch system uses a GPS receiver board combined with a GSM/GPRS transmitter and an RF transmitter in one wireless sensor node called an Intelligent Communication Unit (ICU). An ICU first attempts to transmit its location to a control center through the GSM/GPRS network. If that
fails, it connects with other ICUs (adhoc network) to
forward its location information until the information reaches
an ICU that has GSM/GPRS reception. This ICU then
transmits the location information of the original ICU via
the GSM/GPRS network.
ZebraNet is a system designed to study the moving
patterns of zebras [13]. It utilizes two protocols: a history-based protocol and a flooding protocol. The history-based protocol is used when the zebras are grazing and not moving around too much. While this might be useful for tracking zebras, it is not suitable for tracking hikers, because two hikers are most likely to meet each other only once on a trail. In the flooding protocol, a node dumps its data to a neighbor whenever it finds one and does not delete its own copy until it finds a base station. Without considering routing loops, packet filtering and grouping, the size of the data on a node will grow exponentially and drain the power and memory of a sensor node within a short time. Instead, CenWits uses a four-phase handshake protocol to ensure that a node transmits only as much information as the other node is willing to receive. While ZebraNet is designed for a big group of sensors moving together in the same direction at the same speed, CenWits is designed to be used in scenarios where sensors move in different directions at different speeds.
Delay tolerant network architecture addresses some
important problems in challenged (resource-constrained)
networks [9]. While this work is mainly concerned with
interoperability of challenged networks, some problems related
to occasionally-connected networks are similar to the ones
we have addressed in CenWits.
Among all these systems, luxury PLB and Lifetch are
designed for location tracking in wilderness areas. However,
both of these systems require a connected network. Luxury
PLB requires the user to transmit a signal to a satellite,
while Lifetch requires connection to GSM/GPRS network.
Luxury PLB transmits location information, only when an
accident happens. However, if the user is buried in the snow
or falls into a deep canyon, there is almost no chance for the
signal to go through and be relayed to the rescue team. This
is because satellite transmission needs line of sight.
Furthermore, since there is no known history of the user's location, it is
not possible for the rescue team to infer the current location
of the user. Another disadvantage of luxury PLB is that a
satellite transmitter is very expensive, costing in the range
of $750. Lifetch attempts to transmit the location
information by GSM/GPRS and adhoc sensor network that uses
AODV as the routing protocol. However, having cellular reception in remote wilderness areas, e.g. American national parks, is unlikely. Furthermore, it is extremely
unlikely that ICUs worn by hikers will be able to form an
adhoc network in a large wilderness area. This is because
the hikers are mobile and it is very unlikely to have several
ICUs placed dense enough to forward packets even on a very
popular hike route.
CenWits is designed to address the limitations of systems
such as luxury PLB and Lifetch. It is designed to
provide hikers, skiers, and climbers who have their activities
mainly in wilderness areas a much higher chance to convey
their location information to a control center. It is not
reliant upon constant connectivity with any communication
medium. Rather, it passes information along from user to user, finally arriving at a control center. Unlike several of the systems discussed so far, it does not require that a user's unit is constantly turned on. In fact, it can discover a victim's location even if the victim's sensor was off at the time of the accident and has remained off since then. CenWits
solves one of the greatest problems plaguing modern search
and rescue systems: it has an inherent on-site storage
capability. This means someone within the network will have
access to the last-known-location information of a victim,
and perhaps his bearing and speed information as well.
Figure 1: Hiker A and Hiker B are not in the
range of each other
3. CENWITS
We describe CenWits in the context of locating lost/injured
hikers in wilderness areas. Each hiker wears a sensor (MICA2
motes in our prototype) equipped with a GPS receiver and
an RF transmitter. Each sensor is assigned a unique ID and
maintains its current location based on the signal received by
its GPS receiver. It also emits beacons periodically. When
any two sensors are in the range of one another, they record
the presence of each other (witness information), and also
exchange the witness information they recorded earlier. The
key idea here is that if two sensors come within range of each other at any time, they become each other's witnesses.
Later on, if the hiker wearing one of these sensors is lost, the
other sensor can convey the last known (witnessed) location
of the lost hiker. Furthermore, by exchanging the witness
information that each sensor recorded earlier, the witness
information is propagated beyond a direct contact between
two sensors.
To convey witness information to a processing center or to
a rescue team, access points are established at well-known
locations that the hikers are expected to pass through, e.g.
at the trail heads, trail ends, intersections of different trails,
scenic view points, resting areas, and so on. Whenever a
sensor node is in the vicinity of an access point, all witness
information stored in that sensor is automatically dumped
to the access point. Access points are connected to a processing center via satellite or some other network (a connection is needed only between access points and the processing center; there is no need for any connection between different access points). The witness information is downloaded to the processing center from various access points at regular intervals. In case the connection to an access point is lost, the information from that
access point can be downloaded manually, e.g. by UAVs.
To estimate the speed, location and direction of a hiker at
any point in time, all witness information of that hiker that
has been collected from various access points is processed.
Figure 2: Hiker A and Hiker B are in the range
of each other. A records the presence of B and B
records the presence of A. A and B become each
other"s witnesses.
Figure 3: Hiker A is in the range of an access
point. It uploads its recorded witness information
and clears its memory.
An example of how CenWits operates is illustrated in
Figures 1, 2 and 3. First, hikers A and B are on two close
trails, but out of range of each other (Figure 1). This is
a very common scenario during a hike. For example, on a
popular four-hour hike, a hiker might run into as many as
20 other hikers. This accounts for one encounter every 12
minutes on average. A slow hiker can go 1 mile (5,280 feet)
per hour. Thus in 12 minutes a slow hiker can go as far as
1056 feet. This implies that if we were to put 20 hikers on a
4-hour, one-way hike evenly, the range of each sensor node
should be at least 1056 feet for them to communicate with
one another continuously. The signal strength starts
dropping rapidly for two Mica2 nodes to communicate with each
other when they are 180 feet away, and is completely lost
when they are 230 feet away from each other [7]. So, for the
sensors to form a sensor network on a 4-hour hiking trail,
there should be at least 120 hikers scattered evenly. Clearly,
this is extremely unlikely. In fact, in a 4-hour, less-popular
hiking trail, one might only run into say five other hikers.
CenWits takes advantage of the fact that sensors can
communicate with one another and record their presence. Given
a walking speed of one mile per hour (88 feet per minute)
and Mica2 range of about 150 feet for non-line-of-sight radio
transmission, two hikers have about 150/88 = 1.7 minutes to
discover the presence of each other and exchange their
witness information. We therefore design our system to have
each sensor emit a beacon every one-and-a-half minutes. In Figure 2, hiker B's sensor emits a beacon when A is in range; this triggers A to exchange data with B. A communicates the following information to B: "My ID is A; I saw C at 1:23 PM at (39° 49.3277655', 105° 39.1126776'); I saw E at 3:09 PM at (40° 49.2234879', 105° 20.3290168')." B then replies with "My ID is B; I saw K at 11:20 AM at (39° 51.4531655', 105° 41.6776223')." In addition, A records "I saw B at 4:17 PM at (41° 29.3177354', 105° 04.9106211')" and B records "I saw A at 4:17 PM at (41° 29.3177354', 105° 04.9106211')". B goes on his way to overnight camping while A heads back to the trail head where there is an AP, which emits a beacon every 5 seconds to avoid missing any hiker. A dumps all the witness information it has collected to the access point. This
is shown in Figure 3.
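To make the timing argument above concrete, the following small C program (our illustration, not part of the CenWits prototype; the constants are taken from the numbers quoted above) computes the contact window between two walking hikers and shows why a beacon period of about one and a half minutes is safe.

    #include <stdio.h>

    /* Constants taken from the text above. */
    #define RADIO_RANGE_FT     150.0   /* non-line-of-sight Mica2 range (feet) */
    #define WALKING_SPEED_FPM   88.0   /* 1 mile/hour expressed in feet/minute */

    int main(void)
    {
        /* Two hikers stay in range for roughly range / speed minutes
           (the single-speed estimate used in the text).              */
        double contact_window_min = RADIO_RANGE_FT / WALKING_SPEED_FPM;

        /* Pick a beacon period comfortably smaller than the contact
           window so that at least one beacon is heard per encounter. */
        double beacon_period_min = 1.5;

        printf("contact window : %.1f minutes\n", contact_window_min); /* ~1.7 */
        printf("beacon period  : %.1f minutes\n", beacon_period_min);
        return 0;
    }

With a larger radio range or a slower walking speed, the contact window grows and the beacon period could be relaxed accordingly.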
3.1 Witness Information: Storage
A critical concern is that there is a limited amount of
memory available on motes (4 KB SDRAM memory, 128 KB
flash memory, and 4-512 KB EEPROM). So, it is important
to organize witness information efficiently. CenWits stores witness information at each node as a set of witness records (the format is shown in Figure 4).
Figure 4: Format of a witness record: Node ID (1 B), Record Time (3 B), X,Y Location (8 B), Location Time (3 B), Hop Count (1 B).
When two nodes i and j encounter each other, each node
generates a new witness record. In the witness record
generated by i, Node ID is j, Record Time is the current time in i's clock, (X,Y) are the coordinates of the location of i that i recorded most recently (either from a satellite or an LP), Location Time is the time when this location was recorded, and Hop Count is 0.
Each node is assigned a unique Node Id when it enters a
trail. In our current prototype, we have allocated one byte
for Node Id, although this can be increased to two or more
bytes if a large number of hikers are expected to be present
at the same time. We can represent time in 17 bits to a
second precision. So, we have allocated 3 bytes each for Record
Time and Location Time. The circumference of the Earth
is approximately 40,075 km. If we use a 32-bit number to represent both longitude and latitude, the precision we get is 40,075,000/2^32 = 0.0093 meter = 0.37 inches, which is quite precise for our needs. So, we have allocated 4 bytes
each for X and Y coordinates of the location of a node. In
fact, a foot precision can be achieved by using only 27 bits.
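The byte layout just described can be captured in a packed C structure. The sketch below is our own illustration of that layout (the type and field names are ours, not taken from the CenWits source code); it reproduces the 16-byte record of Figure 4.

    #include <stdint.h>

    /* 16-byte witness record, mirroring the layout in Figure 4:
       Node ID (1 B), Record Time (3 B), X,Y Location (4 B + 4 B),
       Location Time (3 B), Hop Count (1 B).                        */
    #pragma pack(push, 1)
    typedef struct {
        uint8_t  node_id;           /* ID of the witnessed node            */
        uint8_t  record_time[3];    /* seconds, 24-bit encoding            */
        uint32_t x;                 /* X coordinate, 32-bit fixed point    */
        uint32_t y;                 /* Y coordinate, 32-bit fixed point    */
        uint8_t  location_time[3];  /* when (x,y) was last recorded        */
        uint8_t  hop_count;         /* number of times record was relayed  */
    } witness_record_t;
    #pragma pack(pop)

    /* Compile-time check that the record really occupies 16 bytes. */
    typedef char record_is_16_bytes[(sizeof(witness_record_t) == 16) ? 1 : -1];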
3.2 Location Point and Location Inference
Although a GPS receiver provides accurate location information, it has its limitations. In canyons and rain forests, a GPS receiver does not work. When there is heavy cloud cover, GPS users have experienced inaccuracy in the reported location as well. Unfortunately, a lot of hiking trails are in dense forests and canyons, and it is not uncommon for it to rain after hikers start hiking. To address this, CenWits incorporates the idea of location points (LPs). A location point can update a sensor node with its current location whenever the node is near that LP. LPs are placed at different locations in a wilderness area where GPS receivers do not work. An LP is a very simple device that emits
prerecorded location information at some regular time interval.
It can be placed in difficult-to-reach places such as deep
canyons and dense rain forests by simply dropping them
from an airplane. LPs allow a sensor node to determine
its current location more accurately. However, they are not an essential requirement of CenWits. If an LP runs out of power, CenWits will continue to work correctly.
Figure 5: GPS receiver not working correctly.
Sensors then have to rely on an LP to provide coordinates
In Figure 5, B cannot get GPS reception due to bad weather. It then runs into A on the trail, who does not have GPS reception either. Their sensors record the presence of each other. After 10 minutes, A is in range of an LP that provides accurate location information to A. When A returns to the trail head and uploads its data (Figure 6), the system can draw a circle centered at the LP from which A fetched location information to bound the possible encounter location of A and B. By overlapping this circle with the trail map, two or three possible locations of the encounter can be inferred. Thus, when a rescue is required, the possible location of B can be better inferred (see Figures 7 and 8).
Figure 6: A is back at the trail head. It reports the time of its encounter with B to the AP, but has no location information for the encounter to report.
Figure 7: B is still missing after sunset. CenWits
infers the last contact point and draws the circle of
possible current locations based on average hiking
speed
CenWits requires that the clocks of different sensor nodes
be loosely synchronized with one another. Such a
synchronization is trivial when GPS coverage is available. In
addition, sensor nodes in CenWits synchronize their clocks
whenever they are in the range of an AP or an LP. The
Figure 8: Based on the overlapping landscape, B might have hiked to the wrong branch and fallen off a cliff. Hot
rescue areas can thus be determined
synchronization accuracy CenWits needs is of the order of a second or so. As long as the clocks are synchronized to within a one-second range, whether A met B at 12:37:45 or at 12:37:46 does not matter for ordering witness events and inferring the path.
4. MEMORY AND POWER MANAGEMENT
CenWits employs several important mechanisms to
conserve power and memory. It is important to note that while current sensor nodes have a limited amount of memory, future sensor nodes are expected to have much more memory. With this in mind, the main focus in our design is to provide a tradeoff between the amount of memory available and the amount of power consumed.
4.1 Memory Management
The size of witness information stored at a node can get
very large. This is because the node may come across several
other nodes during a hike, and may end up accumulating a
large amount of witness information over time. To address
this problem, CenWits allows a node to pro-actively free up
some parts of its memory periodically. This raises an interesting question: when should witness records be deleted from the memory of a node, and which ones? CenWits uses three
criteria to determine this: record count, hop count, and record
gap.
Record count refers to the number of witness records with
same node id that a node has stored in its memory. A node
maintains an integer parameter MAX RECORD COUNT.
It stores at most MAX RECORD COUNT witness records
of any node.
Every witness record has a hop count field that stores the
number of times (hops) this record has been transferred since
being created. Initially this field is set to 0. Whenever a
node receives a witness record from another node, it
increments the hop count of that record by 1. A node maintains
an integer parameter called MAX HOP COUNT. It keeps
only those witness records in its memory, whose hop count
is less than MAX HOP COUNT. The MAX HOP COUNT
parameter provides a balance between two conflicting goals:
(1) To ensure that a witness record has been propagated to
and thus stored at as many nodes as possible, so that it has
a high probability of being dumped at some AP as quickly
as possible; and (2) To ensure that a witness record is stored
only at a few nodes, so that it does not clog up too much of
the combined memory of all sensor nodes. We chose to use
hop count instead of time-to-live to decide when to drop a
packet. The main reason for this is that the probability of
a packet reaching an AP goes up as the hop count adds up.
For example, when the hop count is 5 for a specific record, the record is present in at least 5 sensor nodes. On the other hand,
if we discard old records, without considering hop count,
there is no guarantee that the record is present in any other
sensor node.
Record gap refers to the time difference between the record
times of two witness records with the same node id. To
save memory, a node n ensures that the record gap between any two witness records with the same node id is at least MIN RECORD GAP. For each node id i, n stores the witness record with the most recent record time rti, the witness record with the most recent record time that is at least MIN RECORD GAP
time units before rti, and so on until the record count limit
(MAX RECORD COUNT) is reached.
When a node is running low on memory, it adjusts the three
parameters, MAX RECORD COUNT, MAX HOP COUNT and
MIN RECORD GAP to free up some memory. It
decrements MAX RECORD COUNT and MAX HOP COUNT,
and increments MIN RECORD GAP. It then first erases
all witness records whose hop count exceeds the reduced
MAX HOP COUNT value, and then erases witness records
to satisfy the record gap criteria. Also, when a node has
extra memory space available, e.g. after dumping its witness
information at an access point, it resets MAX RECORD COUNT,
MAX HOP COUNT and MIN RECORD GAP to some
predefined values.
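A minimal sketch of this pruning step is shown below in C (our illustration; the parameter names follow the text, but the data structures, the sort, and the eviction order are our assumptions rather than the prototype's actual code). It drops records whose hop count has reached the limit and then enforces the record count and record gap rules per node id.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    typedef struct {                 /* simplified in-memory witness record */
        uint8_t  node_id;
        uint32_t record_time;        /* seconds                             */
        uint8_t  hop_count;
    } record_t;

    static int max_record_count = 5; /* tunable parameters from Section 4.1 */
    static int min_record_gap   = 5;
    static int max_hop_count    = 5;

    /* Sort by node id, then by record time, most recent first. */
    static int cmp(const void *a, const void *b)
    {
        const record_t *x = a, *y = b;
        if (x->node_id != y->node_id) return (int)x->node_id - (int)y->node_id;
        return (y->record_time > x->record_time) - (y->record_time < x->record_time);
    }

    /* Apply the three criteria in place; returns the number of records kept. */
    static int prune(record_t *buf, int n)
    {
        qsort(buf, (size_t)n, sizeof buf[0], cmp);
        int kept = 0, per_node = 0, last_id = -1;
        uint32_t last_kept_time = 0;
        for (int i = 0; i < n; i++) {
            if (buf[i].hop_count >= max_hop_count) continue;   /* hop count rule */
            if (buf[i].node_id != last_id) {                   /* new node id    */
                last_id = buf[i].node_id;
                per_node = 0;
            } else {
                if (per_node >= max_record_count) continue;    /* record count   */
                if (last_kept_time - buf[i].record_time < (uint32_t)min_record_gap)
                    continue;                                  /* record gap     */
            }
            last_kept_time = buf[i].record_time;
            per_node++;
            buf[kept++] = buf[i];
        }
        return kept;
    }

    int main(void)
    {
        record_t buf[] = {   /* three records from one contact, plus two others */
            {3, 76, 0}, {3, 78, 0}, {3, 79, 0}, {2, 94, 0}, {1, 16, 6},
        };
        int n = prune(buf, 5);
        printf("records kept: %d\n", n);  /* the gap rule keeps only one of 76/78/79,
                                             and the hop-count-6 record is dropped   */
        return 0;
    }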
4.2 Power Management
An important advantage of using sensors for tracking
purposes is that we can regulate the behavior of a sensor node
based on current conditions. For example, we mentioned
earlier that a sensor should emit a beacon every 1.7 minutes, given a hiking speed of 1 mile/hour. However, if a user is moving at 10 feet/sec, a beacon should be emitted every 10 seconds. If a user is not moving at all, a beacon can be emitted every 10 minutes. At night, when a user is not likely to move at all for a relatively long period of time, a sensor can be put into sleep mode to save energy. If a user is active for only eight hours in a day, we can put the sensor into sleep mode for the other 16 hours and thus save two-thirds of the energy.
In addition, a sensor node can choose to not send any
beacons during some time intervals. For example, suppose
hiker A has communicated its witness information to three
other hikers in the last five minutes. If it is running low
on power, it can go to receive mode or sleep mode for the
next ten minutes. It goes to receive mode if it is still willing
to receive additional witness information from hikers that it
encounters in the next ten minutes. It goes to sleep mode if
it is extremely low on power.
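The kind of policy sketched in this paragraph can be expressed as a small decision function. The code below is our own illustration (the thresholds and names are assumptions; the paper does not give concrete values): it picks a radio state from the remaining battery fraction and the number of recent exchanges.

    #include <stdio.h>

    typedef enum { MODE_BEACON, MODE_RECEIVE_ONLY, MODE_SLEEP } radio_mode_t;

    /* Illustrative thresholds; not taken from the CenWits prototype. */
    static radio_mode_t choose_mode(double battery_fraction, int exchanges_last_5min)
    {
        if (battery_fraction < 0.05)
            return MODE_SLEEP;            /* extremely low on power          */
        if (battery_fraction < 0.20 && exchanges_last_5min >= 3)
            return MODE_RECEIVE_ONLY;     /* low power, recently very active */
        return MODE_BEACON;               /* normal operation: keep beaconing */
    }

    int main(void)
    {
        printf("%d\n", choose_mode(0.15, 3));   /* prints 1: receive-only    */
        printf("%d\n", choose_mode(0.80, 0));   /* prints 0: keep beaconing  */
        return 0;
    }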
The bandwidth and energy limitations of sensor nodes
require that the amount of data transferred among the nodes
be reduced to a minimum. It has been observed that in some
scenarios 3000 instructions could be executed for the same
energy cost of sending a bit 100m by radio [15]. To reduce
the amount of data transfer, CenWits employs a handshake
protocol that two nodes execute when they encounter one
another. The goal of this protocol is to ensure that a node
transmits only as much witness information as the other
node is willing to receive. This protocol is initiated when
a node i receives a beacon containing the node ID of the
sender node j and i has not exchanged witness information
with j in the last δ time units. Assume that i < j. The
protocol consists of four phases (See Figure 9):
1. Phase I: Node i sends its receive constraints and the
number of witness records it has in its memory.
2. Phase II: On receiving this message from i, j sends its
receive constraints and the number of witness records
it has in its memory.
3. Phase III: On receiving the above message from j, i
sends its witness information (filtered based on receive
constraints received in phase II).
4. Phase IV: After receiving the witness records from
i, j sends its witness information (filtered based on
receive constraints received in phase I).
Figure 9: Four-Phase Handshake Protocol (i < j). In phases I and II, i and j exchange <Constraints, Witness info size> messages; in phases III and IV, they exchange <Filtered Witness info> messages.
Receive constraints are a function of memory and power.
In the most general case, they are comprised of the three
parameters (record count, hop count and record gap) used
for memory management. If i is low on memory, it specifies
the maximum number of records it is willing to accept from
j. Similarly, i can ask j to send only those records that
have hop count value less than MAX HOP COUNT − 1.
Finally, i can include its MIN RECORD GAP value in its
receive constraints. Note that the handshake protocol is
beneficial to both i and j. They save memory by receiving
only as much information as they are willing to accept and
conserve energy by sending only as many witness records as
needed.
It turns out that filtering witness records based on
MIN RECORD GAP is complex. It requires that the
witness records of any given node be arranged in an order sorted
by their record time values. Maintaining this sorted order is
complex in memory, because new witness records with the
same node id can arrive later and may have to be inserted
in between to preserve the sorted order. For this reason, the
receive constraints in the current CenWits prototype do not
include record gap.
Suppose i specifies a hop count value of 3. In this case,
j checks the hop count field of every witness record before
sending them. If the hop count value is greater than 3, the
record is not transmitted.
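The following C sketch is our in-memory illustration of the four-phase handshake (the data structures and the driver are ours; on real nodes the constraints and records would be radio messages, and duplicate suppression is omitted). It shows how the receive constraints exchanged in phases I and II filter what is transmitted in phases III and IV.

    #include <stdio.h>
    #include <stdint.h>

    /* Receive constraints exchanged in phases I and II
       (record gap omitted, as in the current prototype).                 */
    typedef struct {
        uint8_t max_records;    /* at most this many records are accepted  */
        uint8_t max_hop_count;  /* only records with a smaller hop count   */
    } constraints_t;

    typedef struct { uint8_t node_id, hop_count; } record_t;

    typedef struct {
        uint8_t       id;
        constraints_t wants;
        record_t      store[8];
        int           count;
    } node_t;

    /* Phases III/IV: send only records the peer is willing to accept. */
    static void send_filtered(const node_t *from, node_t *to)
    {
        int sent = 0;
        for (int i = 0; i < from->count && sent < to->wants.max_records; i++) {
            if (from->store[i].hop_count >= to->wants.max_hop_count)
                continue;                     /* filtered out by constraints */
            record_t copy = from->store[i];
            copy.hop_count++;                 /* one more hop taken          */
            to->store[to->count++] = copy;
            sent++;
        }
    }

    static void handshake(node_t *i, node_t *j)   /* assumes i->id < j->id */
    {
        /* Phases I and II: the nodes exchange constraints and record
           counts; here those "messages" are simply the fields read below. */
        send_filtered(i, j);                  /* Phase III: i -> j          */
        send_filtered(j, i);                  /* Phase IV : j -> i          */
    }

    int main(void)
    {
        node_t a = { 1, {2, 3}, {{5, 0}, {6, 4}}, 2 };
        node_t b = { 2, {8, 5}, {{7, 1}},         1 };
        handshake(&a, &b);
        printf("a holds %d records, b holds %d records\n", a.count, b.count);
        return 0;
    }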
4.3 Groups and Partitions
To further reduce communication and increase the
lifetime of our system, we introduce the notion of groups. The
idea is based on the concept of abstract regions presented
in [20]. A group is a set of n nodes that can be defined
in terms of radio connectivity, geographic location, or other
properties of nodes. All nodes within a group can
communicate directly with one another and they share information
to maintain their view of the external world. At any point
in time, a group has exactly one leader that communicates with external nodes on behalf of the entire group. A group can be static, meaning that the group membership does not change over time, or it can be dynamic, in which case nodes can leave or join the group. To make our
analysis simple and to explain the advantages of group, we
first discuss static groups.
A static group is formed at the start of a hiking trail or ski
slope. Suppose there are five family members who want to
go for a hike in the Rocky Mountain National Park. Before
these members start their hike, each one of them is given
a sensor node and the information is entered in the system
that the five nodes form a group. Each group member is
given a unique id and every group member knows about
other members of the group. The group, as a whole, is also
assigned an id to distinguish it from other groups in the
system.
Figure 10: A group of five people. Node 2 is the
group leader and it is communicating on behalf of
the group with an external node 17. All other nodes (shown in a lighter shade) are in sleep mode.
As the group moves through the trail, it exchanges
information with other nodes or groups that it comes across. At
any point in time, only one group member, called the leader,
sends and receives information on behalf of the group and
all other n − 1 group members are put in the sleep mode
(See Figure 10). It is this property of groups that saves
us energy. The group leadership is time-multiplexed among
the group members. This is done to make sure that a single
node does not run out of battery due to continuous exchange
of information. Thus after every t seconds, the leadership
is passed on to another node, called the successor, and the
leader (now an ordinary member) is put to sleep. Since
energy is dear, we do not implement an extensive election
algorithm for choosing the successor. Instead, we choose the
successor on the basis of node id. The node with the next
highest id in the group is chosen as the successor. The last
node, of course, chooses the node with the lowest id as its
successor.
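Because the successor is chosen purely by node id, the election reduces to a scan of the member list. The short C sketch below (ours, for illustration only) picks the next-highest id and wraps around to the lowest id for the last node.

    #include <stdio.h>

    /* Return the id that takes over leadership after `current`:
       the next-highest id in the group, or the lowest id if
       `current` is already the highest (wrap-around).          */
    static int next_leader(const int ids[], int n, int current)
    {
        int best = -1, lowest = ids[0];
        for (int i = 0; i < n; i++) {
            if (ids[i] < lowest) lowest = ids[i];
            if (ids[i] > current && (best == -1 || ids[i] < best))
                best = ids[i];
        }
        return (best == -1) ? lowest : best;
    }

    int main(void)
    {
        int group[] = { 2, 5, 9, 11, 17 };
        printf("%d\n", next_leader(group, 5, 9));    /* prints 11              */
        printf("%d\n", next_leader(group, 5, 17));   /* prints 2 (wrap-around) */
        return 0;
    }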
We now discuss the data storage schemes for groups.
Memory is a scarce resource in sensor nodes and it is therefore
important that witness information be stored efficiently among
group members. Efficient data storage is not a trivial task
when it comes to groups. The tradeoff is between simplicity
of the scheme and memory savings. A simpler scheme incurs a lower energy cost compared to a more sophisticated scheme, but offers smaller memory savings as well. This is because in a more complicated scheme, the group members have to coordinate to update and store information. After considering a number of different schemes, we have come to the conclusion that there is no single optimal storage scheme for groups. The system should be able to adapt according to its requirements. If group members are low on battery, then the group can adopt a scheme that is more energy efficient. Similarly, if the group members are running out of memory, they can adopt a scheme that is more memory efficient. We
first present a simple scheme that is very energy efficient but
does not offer significant memory savings. We then present
an alternate scheme that is much more memory efficient.
As already mentioned a group can receive information
only through the group leader. Whenever the leader comes
across an external node e, it receives information from that
node and saves it. In our first scheme, when the timeslot for
the leader expires, the leader passes this new information it
received from e to its successor. This is important because
during the next time slot, if the new leader comes across
another external node, it should be able to pass
information about all the external nodes this group has witnessed
so far. Thus the information is fully replicated on all nodes
to maintain the correct view of the world.
Our first scheme does not offer any memory savings but is
highly energy efficient and may be a good choice when the
group members are running low on battery. Except for the
time when the leadership is switched, all n − 1 members are
asleep at any given time. This means that a single member
is up for t seconds once every n∗t seconds and therefore has
to spend approximately only 1/nth
of its energy. Thus, if
there are 5 members in a group, we save 80% energy, which
is huge. More energy can be saved by increasing the group
size.
We now present an alternate data storage scheme that
aims at saving memory at the cost of energy. In this scheme
we divide the group into what we call partitions. Partitions
can be thought of as subgroups within a group. Each
partition must have at least two nodes in it. The nodes within a
partition are called peers. Each partition has one peer
designated as partition leader. The partition leader stays in
receive mode at all times, while all other peers in the partition stay in sleep mode. Partition leadership is time-multiplexed
among the peers to make sure that a single node does not
run out of battery. Like before, a group has exactly one
leader and the leadership is time-multiplexed among
partitions. The group leader also serves as the partition leader
for the partition it belongs to (See Figure 11).
In this scheme, all partition leaders participate in
information exchange. Whenever a group comes across an external
node e, every partition leader receives all witness
information, but it only stores a subset of that information after
filtering. Information is filtered in such a way that each
partition leader has to store only B/K bytes of data, where
K is the number of partitions and B is the total number
of bytes received from e. Similarly when a group wants to
send witness information to e, each partition leader sends
only B/K bytes that are stored in the partition it belongs
to. However, before a partition leader can send information,
it must switch from receive mode to send mode. Also,
partition leaders must coordinate with one another to ensure
that they do not send their witness information at the same
time, i.e. their messages do not collide. All this is achieved
by having the group leader send a signal to every partition
leader in turn.
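The paper does not spell out the exact filter each partition leader applies. One simple realization, sketched below purely as our own assumption, is to assign each incoming witness record to a partition by taking the witnessed node id modulo K, so that the K partition leaders each end up storing roughly B/K of the received bytes.

    #include <stdio.h>
    #include <stdint.h>

    /* Decide which of the K partitions stores a given witness record.
       Any deterministic, roughly uniform function of the record works;
       a modulo over the witnessed node id is the simplest choice.       */
    static int partition_for(uint8_t witnessed_node_id, int k)
    {
        return witnessed_node_id % k;
    }

    int main(void)
    {
        int k = 4;                       /* four partitions, as in Figure 11 */
        for (uint8_t id = 1; id <= 8; id++)
            printf("record about node %u -> partition %d\n",
                   id, partition_for(id, k));
        return 0;
    }

The same function is used on the sending side, so each partition leader knows which B/K share of the stored records it is responsible for transmitting.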
Figure 11: The figure shows a group of eight nodes
divided into four partitions of 2 nodes each. Node
1 is the group leader whereas nodes 2, 9, and 7 are
partition leaders. All other nodes are in the sleep
mode.
Since the partition leadership is time-multiplexed, it is
important that any information received by the partition
leader, p1, be passed on to the next leader, p2. This has
to be done to make sure that p2 has all the information
that it might need to send when it comes across another
external node during its timeslot. One way of achieving this
is to wake p2 up just before p1's timeslot expires and then have p1 transfer information only to p2. An alternative is to
wake all the peers up at the time of leadership change, and
then have p1 broadcast the information to all peers. Each
peer saves the information sent by p1 and then goes back
to sleep. In both cases, the peers send acknowledgement to
the partition leader after receiving the information. In the
former method, only one node needs to wake up at the time
of leadership change, but the amount of information that
has to be transmitted between the nodes increases as time
passes. In the latter case, all nodes have to be woken up at
the time of leadership change, but only a small piece of information has to be transmitted each time among the peers. Since
communication is much more expensive than bringing the
nodes up, we prefer the second method over the first one.
A group can be divided into partitions in more than one
way. For example, suppose we have a group of six members.
We can divide this group into three partitions of two peers
each, or two partitions with three peers each. The choice
once again depends on the requirements of the system. A
few big partitions will make the system more energy efficient.
This is because in this configuration, a greater number of
nodes will stay in sleep mode at any given point in time.
On the other hand, several small partitions will make the
system memory efficient, since each node will have to store
less information (see Figure 12).
A group that is divided into partitions must be able to
readjust itself when a node leaves or runs out of battery.
This is crucial because a partition must have at least two
nodes at any point in time to tolerate the failure of one node. For example, in Figure 12(a), if node 2 or node 5 dies, the
partition is left with only one node. Later on, if that single
node in the partition dies, all witness information stored in
that partition will be lost. We have devised a very simple
protocol to solve this problem. We first explain how partitions are adjusted when a peer dies, and then explain what happens if a partition leader dies.
Figure 12: The figure shows two different ways of partitioning a group of six nodes. In (a), a group is divided into three partitions of two nodes. Node 1 is the group leader, nodes 9 and 5 are partition leaders, and nodes 2, 3, and 6 are in sleep mode. In (b), the group is divided into two partitions of three nodes. Node 1 is the group leader, node 9 is the partition leader, and nodes 2, 3, 5, and 6 are in sleep mode.
Suppose node 2 in Figure 12(a) dies. When node 5, the partition leader, sends information to node 2, it does not receive an acknowledgement from it and concludes that node 2 has died (the algorithm to conclude that a node has died can be made more rigorous by having the partition leader query the suspected node a few times). At this point, node 5 contacts other partition leaders (nodes 1 ... 9) using a broadcast message and informs them that one of its peers has died. Upon hearing
this, each partition leader informs node 5 (i) the number of
nodes in its partition, (ii) a candidate node that node 5 can
take if the number of nodes in its partition is greater than
2, and (iii) the amount of witness information stored in its
partition. Upon hearing from every leader, node 5 chooses
the candidate node from the partition with maximum
number (must be greater than 2) of peers, and sends a message
back to all leaders. Node 5 then sends data to its new peer
to make sure that the information is replicated within the
partition.
However, if all partitions have exactly two nodes, then
node 5 must join another partition. It chooses the partition
that has the least amount of witness information to join. It
sends its witness information to the new partition leader.
Witness information and membership update is propagated
to all peers during the next partition leadership change.
We now consider the case where the partition leader dies.
If this happens, then we wait for the partition leadership to
change and for the new partition leader to eventually find
out that a peer has died. Once the new partition leader finds
out that it needs more peers, it proceeds with the protocol
explained above. However, in this case, we do lose
information that the previous partition leader might have received
just before it died. This problem can be solved by
implementing a more rigorous protocol, but we have decided to
give up on accuracy to save energy.
Our current design uses time-division multiplexing to schedule wakeup and sleep modes in the sensor nodes. However, recent work on radio wakeup sensors [10] can be used to do this scheduling more efficiently. We plan to incorporate radio wakeup sensors in CenWits when the hardware is mature.
5. SYSTEM EVALUATION
A sensor is constrained in the amount of memory and
power. In general, the amount of memory needed and power
consumption depend on a variety of factors such as node
density, number of hiker encounters, and the number of
access points. In this section, we provide an estimate of how long the power of a MICA2 mote will last under certain assumptions.
First, we assume that each sensor node carries about 100
witness records. On encountering another hiker, a sensor
node transmits 50 witness records and receives 50 new
witness records. Since each record is 16 bytes long, it will take
0.34 seconds to transmit 50 records and another 0.34
seconds to receive 50 records over a 19200 bps link. The current draw of a MICA2 due to CPU processing, transmission and reception is approximately 8.0 mA, 8.5 mA and 7.0 mA respectively [18], and the capacity of an alkaline battery is 2500 mAh. Since the radio module of the Mica2 is half-duplex, and assuming that the CPU is always active when a node is awake, the current draw during transmission is 8 + 8.5 = 16.5 mA and during reception is 8 + 7 = 15 mA. So, the average current draw due to transmission and reception is (16.5 + 15)/2 = 15.75 mA.
Given that the capacity of an alkaline battery is 2500
mAh, a battery should last for 2500/15.75 = 159 hours of
transmission and reception. An encounter between two
hikers results in exchange of about 50 witness records that takes
about 0.68 seconds as calculated above. Thus, a single
alkaline battery can last for (159 ∗ 60 ∗ 60)/0.68 = 841764 hiker
encounters.
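The arithmetic in the preceding paragraphs can be reproduced with a few lines of C (our illustration; the constants come directly from the text, and the slightly different encounter count is only due to rounding, since the text rounds the exchange time to 0.68 seconds).

    #include <stdio.h>

    int main(void)
    {
        const double record_bytes   = 16.0;      /* size of a witness record        */
        const double records        = 50.0;      /* sent and received per encounter */
        const double link_bps       = 19200.0;   /* Mica2 radio link speed          */
        const double avg_current_ma = 15.75;     /* (16.5 + 15) / 2 from the text   */
        const double battery_mah    = 2500.0;    /* one alkaline battery            */

        double tx_seconds    = records * record_bytes * 8.0 / link_bps;  /* ~0.33 s */
        double per_encounter = 2.0 * tx_seconds;                         /* ~0.67 s */
        double radio_hours   = battery_mah / avg_current_ma;             /* ~159 h  */
        double encounters    = radio_hours * 3600.0 / per_encounter;

        printf("per-encounter exchange time: %.2f s\n", per_encounter);
        printf("radio-active battery life  : %.0f hours\n", radio_hours);
        printf("supported hiker encounters : %.0f\n", encounters);       /* ~857,000 */
        return 0;
    }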
Assuming that a node emits a beacon every 90 seconds and a hiker encounter occurs every time a beacon is emitted (worst-case scenario), a single alkaline battery will last for (841764 ∗ 90)/(30 ∗ 24 ∗ 60 ∗ 60) = 29 days. Since a Mica2 is equipped with two batteries, a Mica2 sensor can remain operational for about two months. Notice that this calculation is preliminary, because it assumes that hikers are active 24 hours a day and a hiker encounter occurs every 90
seconds. In a more realistic scenario, power is expected to
last for a much longer time period. Also, this time period
will significantly increase when groups of hikers are moving
together.
Finally, the lifetime of a sensor running on two batteries can be increased significantly by using energy scavenging and energy harvesting techniques [16, 14].
6. PROTOTYPE IMPLEMENTATION
We have implemented a prototype of CenWits on 900 MHz MICA2 sensors running Mantis OS 0.9.1b. One of the sensors is equipped with an MTS420CA GPS module, which is capable of barometric pressure and two-axis acceleration sensing in addition to GPS location tracking. We use SiRF, the serial communication protocol, to control the GPS module. SiRF has a rich command set, but we record only X and Y coordinates. A witness record is 16 bytes long. When a node starts up, it stores its current location and emits a beacon periodically; in the prototype, a node emits a beacon every minute.
We have conducted a number of experiments with this
prototype. A detailed report on these experiments with the
raw data collected and photographs of hikers, access points
etc. is available at http://csel.cs.colorado.edu/~huangjh/Cenwits.index.htm. Here we report results from three of
them. In all these experiments, there are three access points
(A, B and C) where nodes dump their witness information.
These access points also provide location information to the
nodes that come within their range. We first show how
CenWits can be used to determine the hiking trail a hiker is most
likely on and the speed at which he is hiking, and identify
hot search areas in case he is reported missing. Next, we
show the results of power and memory management
techniques of CenWits in conserving power and memory of a
sensor node in one of our experiments.
6.1 Locating Lost Hikers
The first experiment is called Direct Contact. It is a very
simple experiment in which a single hiker starts from A,
goes to B and then C, and finally returns to A (See Figure
13). The goal of this experiment is to illustrate that
CenWits can deduce the trail a hiker takes by processing witness
information.
Figure 13: Direct Contact Experiment
Node Id   Record Time   (X,Y)   Location Time   Hop Count
1 15 (12,7) 15 0
1 33 (31,17) 33 0
1 46 (12,23) 46 0
1 10 (12,7) 10 0
1 48 (12,23) 48 0
1 16 (12,7) 16 0
1 34 (31,17) 34 0
Table 1: Witness information collected in the direct
contact experiment.
The witness information dumped at the three access points
was then collected and processed at a control center. Part
of the witness information collected at the control center is
shown in Table 1. The X,Y locations in this table
correspond to the location information provided by access points
A, B, and C. A is located at (12,7), B is located at (31,17)
and C is located at (12,23). Three encounter points
(between hiker 1 and the three access points) extracted from
this witness information are shown in Figure 13 (shown in
rectangular boxes). For example, A,1 at 16 means 1 came in
contact with A at time 16. Using this information, we can
infer the direction in which hiker 1 was moving and speed at
which he was moving. Furthermore, given a map of hiking
trails in this area, it is clearly possible to identify the hiking
trail that hiker 1 took.
The second experiment is called Indirect Inference. This
experiment is designed to illustrate that the location,
direction and speed of a hiker can be inferred by CenWits, even
if the hiker never comes in the range of any access point. It
illustrates the importance of witness information in search
and rescue applications. In this experiment, there are three
hikers, 1, 2 and 3. Hiker 1 takes a trail that goes along
access points A and B, while hiker 3 takes a trail that goes along
access points C and B. Hiker 2 takes a trail that does not
come in the range of any access points. However, this hiker
meets hiker 1 and 3 during his hike. This is illustrated in
Figure 14.
Figure 14: Indirect Inference Experiment
Node Id   Record Time   (X,Y)   Location Time   Hop Count
2 16 (12,7) 6 0
2 15 (12,7) 6 0
1 4 (12,7) 4 0
1 6 (12,7) 6 0
1 29 (31,17) 29 0
1 31 (31,17) 31 0
Table 2: Witness information collected from hiker 1
in indirect inference experiment.
Part of the witness information collected at the control
center from access points A, B and C is shown in Tables
2 and 3. There are some interesting data in these tables.
For example, the location time in some witness records is
not the same as the record time. This means that the node
that generated that record did not have its most up-to-date
location at the encounter time. For example, when hikers
1 and 2 meet at time 16, the last recorded location time of
Node Id   Record Time   (X,Y)   Location Time   Hop Count
3 78 (12,23) 78 0
3 107 (31,17) 107 0
3 106 (31,17) 106 0
3 76 (12,23) 76 0
3 79 (12,23) 79 0
2 94 (12,23) 79 0
1 16 (?,?) ? 1
1 15 (?,?) ? 1
Table 3: Witness information collected from hiker 3
in indirect inference experiment.
hiker 1 is (12,7) recorded at time 6. So, node 1 generates
a witness record with record time 16, location (12,7) and
location time 6. In fact, the last two records in Table 3
have (?,?) as their location. This has happened because
these witness records were generated by hiker 2 during his encounters with 1 at times 15 and 16. Until this time, hiker 2 had not come in contact with any location points. Interestingly, a more accurate location of the encounter between 1 and 2, or of the encounter between 2 and 3, can be computed by processing the witness information at the control center. It took 25 units of time for hiker 1 to go from A (12,7) to B (31,17). Assuming a constant hiking speed and a relatively straight-line hike, it can be computed that at time 16, hiker 1 must have been at approximately (18,10). Thus (18,10) is a more accurate location of the encounter between 1 and 2.
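The control center's inference in this example is a linear interpolation between two known (time, location) pairs. The sketch below (ours, not code from the prototype) shows the computation; the small difference from the (18,10) quoted above comes from rounding and from which contact times are taken as the endpoints.

    #include <stdio.h>

    /* Linearly interpolate a hiker's position at time t, given that the
       hiker was at (x0,y0) at time t0 and at (x1,y1) at time t1.        */
    static void interpolate(double t0, double x0, double y0,
                            double t1, double x1, double y1,
                            double t, double *x, double *y)
    {
        double f = (t - t0) / (t1 - t0);
        *x = x0 + f * (x1 - x0);
        *y = y0 + f * (y1 - y0);
    }

    int main(void)
    {
        double x, y;
        /* Hiker 1: last seen at A (12,7) at time 6, next seen at B (31,17)
           at time 31; estimate the position at time 16, when hiker 2 was met. */
        interpolate(6, 12, 7, 31, 31, 17, 16, &x, &y);
        printf("estimated encounter location: (%.1f, %.1f)\n", x, y);
        return 0;
    }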
Finally, our third experiment called Identifying Hot Search
Areas is designed to determine the trail a hiker has taken
and identify hot search areas for rescue after he is reported
missing. There are six hikers (1, 2, 3, 4, 5 and 6) in this
experiment. Figure 15 shows the trails that hikers 1, 2,
3, 4 and 5 took, along with the encounter points obtained
from witness records collected at the control center. For
brevity, we have not shown the entire witness information
collected at the control center. This information is available
at http://csel.cs.colorado.edu/~huangjh/Cenwits/index.htm.
Figure 15: Identifying Hot Search Area Experiment
(without hiker 6)
Now suppose hiker 6 is reported missing at time 260. To
determine the hot search areas, the witness records of hiker
6 are processed to determine the trail he is most likely on,
the speed at which he had been moving, the direction in which
he had been moving, and his last known location. Based on
this information and the hiking trail map, hot search areas
are identified. The hiking trail taken by hiker 6 as inferred
by CenWits is shown by a dotted line and the hot search
areas identified by CenWits are shown by dark lines inside
the dotted circle in Figure 16.
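One simple way to bound the hot search area, assumed here purely for illustration (the paper does not give an explicit formula), is to center a circle on the last witnessed location and give it a radius of the estimated hiking speed multiplied by the time elapsed since that witness record; the speed value and last-witness time below are hypothetical.

    #include <stdio.h>

    /* Radius (in map units) of the circle of possible current locations. */
    static double search_radius(double avg_speed, double last_seen_time, double now)
    {
        return avg_speed * (now - last_seen_time);
    }

    int main(void)
    {
        double avg_speed = 0.8;     /* map units per time unit (hypothetical)       */
        double last_seen = 180.0;   /* hypothetical time of hiker 6's last witness  */
        double now       = 260.0;   /* time at which hiker 6 is reported missing    */
        printf("search radius: %.1f map units\n",
               search_radius(avg_speed, last_seen, now));
        return 0;
    }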
Figure 16: Identifying Hot Search Area Experiment
(with hiker 6)
6.2 Results of Power and Memory Management
The witness information shown in Tables 1, 2 and 3 has
not been filtered using the three criteria described in
Section 4.1. For example, the witness records generated by 3 at record times 76, 78 and 79 (see Table 3) have all been generated due to a single contact between access point C and node 3. By applying the record gap criterion, two of these three records will be erased. Similarly, the witness records generated by 1 at record times 10, 15 and 16 (see Table 1) have all been generated due to a single contact between access point A and node 1. Again, by applying the record gap criterion, two of these three records will be erased. Our experiments did not generate enough data to test the impact of the record count or hop count criteria.
To evaluate the impact of these criteria, we simulated
CenWits to generate a significantly large number of records for a
given number of hikers and access points. We generated
witness records by having the hikers walk randomly. We applied
the three criteria to measure the amount of memory savings
in a sensor node. The results are shown in Table 4. The
number of hikers in this simulation was 10 and the number
of access points was 5. The number of witness records
reported in this table is the average number of witness records a sensor node stored at the time of dumping to an access point.
These results show that the three memory management
criteria significantly reduce the memory consumption of
sensor nodes in CenWits. For example, they can reduce
MAX RECORD COUNT   MIN RECORD GAP   MAX HOP COUNT   # of Witness Records
5 5 5 628
4 5 5 421
3 5 5 316
5 10 5 311
5 20 5 207
5 5 4 462
5 5 3 341
3 20 3 161
Table 4: Impact of memory management techniques.
the memory consumption by up to 75%. However, these
results are premature at present for two reasons: (1) They
are generated via simulation of hikers walking at random;
and (2) It is not clear what impact the erasing of witness
records has on the accuracy of inferred location/hot search
areas of lost hikers. In our future work, we plan to undertake
a major study to address these two concerns.
7. OTHER APPLICATIONS
In addition to hiking in wilderness areas, CenWits can be used in several other applications, e.g. skiing, climbing, wildlife monitoring, and person tracking. Since CenWits relies only on intermittent connectivity, it can take advantage of existing cheap and mature technologies, and thereby make tracking cheaper and fairly accurate. Since CenWits does not rely on keeping track of a sensor holder at all times, but relies on maintaining witnesses, the system is relatively cheap and widely applicable. For example, there are some dangerous cliffs in most ski resorts, but it is too expensive for a ski resort to deploy a connected wireless sensor network throughout the mountain. Using CenWits, we can deploy some sensors at the cliff boundaries. These boundary sensors emit beacons quite frequently, e.g. every second, and so can record the presence of skiers who cross the boundary and
fall off the cliff. Ski patrols can cruise the mountains every
hour, and automatically query the boundary sensor when in
range using PDAs. If a PDA shows that a skier has been
close to the boundary sensor, the ski patrol can use a long
range walkie-talkie to query control center at the resort base
to check the witness record of the skier. If there is no
witness record after the recorded time in the boundary sensor,
there is a high chance that a rescue is needed.
In wildlife monitoring, a very popular method is to attach a GPS receiver to the animals. To collect data, either a satellite transmitter is used, or the data collector has to wait until the GPS receiver brace falls off (after a year or so) and then search for the GPS receiver. GPS transmitters are very expensive, e.g., the one used in geese tracking costs $3,000 each [2]. Also, it is not yet known whether continuous radio signals are harmful to the birds. Furthermore, a GPS transmitter is quite bulky and uncomfortable, and as a result, birds always try to get rid of it. Using CenWits, not only can we record the presence of wildlife, we can also record the behavior of wild animals, e.g., lions might follow the migration of deer. CenWits does not require any bulky and expensive satellite transmitters, nor is there a need to wait for a year and search for the braces. CenWits provides a very simple
and cost-effective solution in this case. Also, access points
can be strategically located, e.g. near a water source, to
increase chances of collecting up-to-date data. In fact, the
access points need not be statically located. They can be
placed in a low-altitude plane (e.g., a UAV) and be flown over
a wilderness area to collect data from wildlife.
In large cities, CenWits can be used to complement GPS, since GPS does not work indoors and near skyscrapers. If a person A is reported missing, and from the witness records we find that his last contacts were C and D, we can trace an approximate location quickly and quite efficiently.
8. DISCUSSION AND FUTURE WORK
This paper presents a new search and rescue system called
CenWits that has several advantages over the current search
and rescue systems. These advantages include a
loosely-coupled system that relies only on intermittent network
connectivity, power and storage efficiency, and low cost. It
solves one of the greatest problems plaguing modern search
and rescue systems: it has an inherent on-site storage
capability. This means someone within the network will have
access to the last-known-location information of a victim,
and perhaps his bearing and speed information as well. It
utilizes the concept of witnesses to propagate information,
infer current possible location and speed of a subject, and
identify hot search and rescue areas in case of emergencies.
A large part of CenWits design focuses on addressing the
power and memory limitations of current sensor nodes. In
fact, power and memory constraints depend on how much
weight (of sensor node) a hiker is willing to carry and the
cost of these sensors. An important goal of CenWits is to build
small chips that can be implanted in hiking boots or ski
jackets. This goal is similar to the avalanche beacons that
are currently implanted in ski jackets. We anticipate that
power and memory will continue to be constrained in such
an environment.
While the paper focuses on the development of a search
and rescue system, it also provides some innovative,
system-level ideas for information processing in a sensor network
system.
We have developed and experimented with a basic
prototype of CenWits at present. Future work includes
developing a more mature prototype addressing important issues
such as security, privacy, and high availability. There are
several pressing concerns regarding security, privacy, and
high availability in CenWits. For example, an adversary
can sniff the witness information to locate endangered
animals, females, children, etc. He may inject false information
in the system. An individual may not be comfortable with
providing his/her location and movement information, even
though he/she is definitely interested in being located in a
timely manner at the time of emergency. In general, people
in the hiking community are friendly and usually trustworthy. So, bullet-proof security is not really required. However,
when CenWits is used in the context of other applications,
security requirements may change. Since the sensor nodes
used in CenWits are fragile, they can fail. In fact, the nature
and level of security, privacy and high availability support
needed in CenWits strongly depends on the application for
which it is being used and the individual subjects involved.
Accordingly, we plan to design a multi-level support for
security, privacy and high availability in CenWits.
So far, we have experimented with CenWits in a very
restricted environment with a small number of sensors. Our
next goal is to deploy this system in a much larger and more
realistic environment. In particular, discussions are currently
underway to deploy CenWits in the Rocky Mountain and
Yosemite National Parks.
9. REFERENCES
[1] 802.11-based tracking system.
http://www.pangonetworks.com/locator.htm.
[2] Brent geese 2002. http://www.wwt.org.uk/brent/.
[3] The onstar system. http://www.onstar.com.
[4] Personal locator beacons with GPS receiver and
satellite transmitter. http://www.aeromedix.com/.
[5] Personal tracking using GPS and GSM system.
http://www.ulocate.com/trimtrac.html.
[6] Rf based kid tracking system.
http://www.ion-kids.com/.
[7] F. Alessio. Performance measurements with motes
technology. MSWiM '04, 2004.
[8] P. Bahl and V. N. Padmanabhan. RADAR: An
in-building RF-based user location and tracking
system. IEEE Infocom, 2000.
[9] K. Fall. A delay-tolerant network architecture for
challenged internets. In SIGCOMM, 2003.
[10] L. Gu and J. Stankovic. Radio triggered wake-up
capability for sensor networks. In Real-Time
Applications Symposium, 2004.
[11] J. Hightower and G. Borriello. Location systems for
ubiquitous computing. IEEE Computer, 2001.
[12] W. Jaskowski, K. Jedrzejek, B. Nyczkowski, and
S. Skowronek. Lifetch life saving system. CSIDC, 2004.
[13] P. Juang, H. Oki, Y. Wang, M. Martonosi, L. Peh,
and D. Rubenstein. Energy-efficient computing for
wildlife tracking: design tradeoffs and early
experiences with ZebraNet. In ASPLOS, 2002.
[14] K. Kansal and M. Srivastava. Energy harvesting aware
power management. In Wireless Sensor Networks: A
Systems Perspective, 2005.
[15] G. J. Pottie and W. J. Kaiser. Embedding the
internet: wireless integrated network sensors.
Communications of the ACM, 43(5), May 2000.
[16] S. Roundy, P. K. Wright, and J. Rabaey. A study of
low-level vibrations as a power source for wireless
sensor networks. Computer Communications, 26(11),
2003.
[17] C. Savarese, J. M. Rabaey, and J. Beutel. Locationing
in distributed ad-hoc wireless sensor networks.
ICASSP, 2001.
[18] V. Shnayder, M. Hempstead, B. Chen, G. Allen, and
M. Welsh. Simulating the power consumption of
large-scale sensor network applications. In Sensys,
2004.
[19] R. Want and A. Hopper. Active badges and personal
interactive computing objects. IEEE Transactions of
Consumer Electronics, 1992.
[20] M. Welsh and G. Mainland. Programming sensor
networks using abstract regions. First USENIX/ACM
Symposium on Networked Systems Design and
Implementation (NSDI '04), 2004.
191 | intermittent network connectivity;gp receiver;pervasive computing;connected network;satellite transmitter;emergency situation;search and rescue;witness;beacon;location tracking system;rf transmitter;group and partition;sensor network;hiker |
train_C-52 | Fairness in Dead-Reckoning based Distributed Multi-Player Games | In a distributed multi-player game that uses dead-reckoning vectors to exchange movement information among players, there is inaccuracy in rendering the objects at the receiver due to network delay between the sender and the receiver. The object is placed at the receiver at the position indicated by the dead-reckoning vector, but by that time, the real position could have changed considerably at the sender. This inaccuracy would be tolerable if it is consistent among all players; that is, at the same physical time, all players see inaccurate (with respect to the real position of the object) but the same position and trajectory for an object. But due to varying network delays between the sender and different receivers, the inaccuracy is different at different players as well. This leads to unfairness in game playing. In this paper, we first introduce an error measure for estimating this inaccuracy. Then we develop an algorithm for scheduling the sending of dead-reckoning vectors at a sender that strives to make this error equal at different receivers over time. This algorithm makes the game very fair at the expense of increasing the overall mean error of all players. To mitigate this effect, we propose a budget based algorithm that provides improved fairness without increasing the mean error thereby maintaining the accuracy of game playing. We have implemented both the scheduling algorithm and the budget based algorithm as part of BZFlag, a popular distributed multi-player game. We show through experiments that these algorithms provide fairness among players in spite of widely varying network delays. An additional property of the proposed algorithms is that they require less number of DRs to be exchanged (compared to the current implementation of BZflag) to achieve the same level of accuracy in game playing. | 1. INTRODUCTION
In a distributed multi-player game, players are normally
distributed across the Internet and have varying delays to each other
or to a central game server. Usually, in such games, the players are
part of the game and in addition they may control entities that make
up the game. During the course of the game, the players and the
entities move within the game space. A player sends information
about her movement as well as the movement of the entities she
controls to the other players using a Dead-Reckoning (DR) vector.
A DR vector contains information about the current position of the
player/entity in terms of x, y and z coordinates (at the time the DR
vector was sent) as well as the trajectory of the entity in terms of
the velocity component in each of the dimensions. Each of the
participating players receives such DR vectors from one another and
renders the other players/entities on the local consoles until a new
DR vector is received for that player/entity. In a peer-to-peer game,
players send DR vectors directly to each other; in a client-server
game, these DR vectors may be forwarded through a game server.
The idea of DR is used because it is almost impossible for
players/entities to exchange their current positions at every time unit.
DR vectors are quantization of the real trajectory (which we refer
to as real path) at a player. Normally, a new DR vector is computed
and sent whenever the real path deviates from the path extrapolated
using the previous DR vector (say, in terms of distance in the x, y,
z plane) by some amount specified by a threshold. We refer to the
trajectory that can be computed using the sequence of DR vectors
as the exported path. Therefore, at the sending player, there is a
deviation between the real path and the exported path. The error due
to this deviation can be removed if each movement of player/entity
is communicated to the other players at every time unit; that is a
DR vector is generated at every time unit thereby making the real
and exported paths the same. Given that it is not feasible to
satisfy this due to bandwidth limitations, this error is not of practical
interest. Therefore, the receiving players can, at best, follow the
exported path. Because of the network delay between the sending
and receiving players, when a DR vector is received and rendered
at a player, the original trajectory of the player/entity may have
already changed. Thus, in physical time, there is a deviation at the
receiving player between the exported path and the rendered
trajectory (which we refer to as placed path). We refer to this error
as the export error. Note that the export error, in turn, results in a
deviation between the real and the placed paths.
The export error manifests itself due to the deviation between the
exported path at the sender and the placed path at the receiver (i)
before the DR vector is received at the receiver (referred to as the
before export error), and (ii) after the DR vector is received at the
receiver (referred to as the after export error). In an earlier paper [1],
we showed that by synchronizing the clocks at all the players and
by using a technique based on time-stamping messages that carry
the DR vectors, we can guarantee that the after export error is made
zero. That is, the placed and the exported paths match after the DR
vector is received. We also showed that the before export error can
never be eliminated since there is always a non-zero network delay,
but can be significantly reduced using our technique [1].
Henceforth we assume that the players use such a technique which results
in unavoidable but small overall export error.
In this paper we consider the problem of different and varying
network delays between each sender-receiver pair of a DR vector,
and consequently, the different and varying export errors at the
receivers. Due to the difference in the export errors among the
receivers, the same entity is rendered at different physical time at
different receivers. This brings in unfairness in game playing. For
instance a player with a large delay would always see an entity
late in physical time compared to the other players and,
therefore, her action on the entity would be delayed (in physical time)
even if she reacted instantaneously after the entity was rendered.
Our goal in this paper is to improve the fairness of these games in
spite of the varying network delays by equalizing the export error
at the players. We explore whether the time-average of the export
errors (which is the cumulative export error over a period of time
averaged over the time period) at all the players can be made the
same by scheduling the sending of the DR vectors appropriately at
the sender. We propose two algorithms to achieve this.
Both the algorithms are based on delaying (or dropping) the
sending of DR vectors to some players on a continuous basis to
try and make the export error the same at all the players. At an
abstract level, the algorithm delays sending DR vectors to players
whose accumulated error so far in the game is smaller than others;
this would mean that the export error due to this DR vector at these
players will be larger than that of the other players, thereby making
them the same. The goal is to make this error at least approximately
equal at every DR vector with the deviation in the error becoming
smaller as time progresses.
The first algorithm (which we refer to as the scheduling
algorithm) is based on estimating the delay between players and
refining the sending of DR vectors by scheduling them to be sent
to different players at different times at every DR generation point.
Through an implementation of this algorithm using the open source
game BZflag, we show that this algorithm makes the game very fair
(we measure fairness in terms of the standard deviation of the
error). The drawback of this algorithm is that it tends to push the
error of all the players towards that of the player with the worst
error (which is the error at the farthest player, in terms of delay,
from the sender of the DR). To alleviate this effect, we propose
a budget based algorithm which budgets how the DRs are sent to
different players. At a high level, the algorithm is based on the
idea of sending more DRs to players who are farther away from
the sender compared to those who are closer. Experimental results
from BZflag illustrate that the budget based algorithm follows a
more balanced approach. It improves the fairness of the game but
at the same time does so without pushing up the mean error of the
players thereby maintaining the accuracy of the game. In addition,
the budget based algorithm is shown to achieve the same level of
accuracy of game playing as the current implementation of BZflag
using a much smaller number of DR vectors.
2. PREVIOUS WORK
Earlier work on network games to deal with network latency has
mostly focussed on compensation techniques for packet delay and
loss [2, 3, 4]. These methods are aimed at making large delays and
message loss tolerable for players but do not consider the
problems that may be introduced by varying delays from the server to
different players or from the players to one another. For example,
the concept of local lag has been used in [3] where each player
delays every local operation for a certain amount of time so that
remote players can receive information about the local operation and
execute the same operation at the about same time, thus reducing
state inconsistencies. The online multi-player game MiMaze [2, 5,
6], for example, takes a static bucket synchronization approach to
compensate for variable network delays. In MiMaze, each player
delays all events by 100 ms regardless of whether they are
generated locally or remotely. Players with a network delay larger
than 100 ms simply cannot participate in the game. In general,
techniques based on bucket synchronization depend on imposing a
worst case delay on all the players.
There have been a few papers which have studied the problem of
fairness in a distributed game by more sophisticated message
delivery mechanisms. But these works [7, 8] assume the existence of
a global view of the game where a game server maintains a view
(or state) of the game. Players can introduce objects into the game
or delete objects that are already part of the game (for example, in
a first-person shooter game, by shooting down the object). These
additions and deletions are communicated to the game server
using action messages. Based on these action messages, the state
of the game is changed at the game server and these changes are
communicated to the players using update messages. Fairness is
achieved by ordering the delivery of action and update messages at
the game server and players respectively based on the notion of a
fair-order which takes into account the delays between the game
server and the different players. Objects that are part of the game
may move but how this information is communicated to the players
seems to be beyond the scope of these works. In this sense, these
works are very limited in scope and may be applicable only to
first-person shooter games, and then only to games where players are
not part of the game.
DR vectors can be exchanged directly among the players
(peer-to-peer model) or using a central server as a relay (client-server
model). It has been shown in [9] that multi-player games that
use DR vectors together with bucket synchronization are not
cheat-proof unless additional mechanisms are put in place. Both the
scheduling algorithm and the budget-based algorithm described in
our paper use DR vectors and hence are not cheat-proof. For
example, a receiver could skew the delay estimate at the sender to
make the sender believe that the delay between the sender and the
receiver is high thereby gaining undue advantage. We emphasize
that the focus of this paper is on fairness without addressing the
issue of cheating.
In the next section, we describe the game model that we use
and illustrate how senders and receivers exchange DR vectors and
how entities are rendered at the receivers based on the time-stamp
augmented DR vector exchange as described in [1]. In Section 4,
we describe the DR vector scheduling algorithm that aims to make
the export error equal across the players with varying delays from
the sender of a DR vector, followed by experimental results
obtained from instrumentation of the scheduling algorithm on the
open source game BZFlag. Section 5 describes the budget based algorithm that achieves improved fairness without reducing the level of accuracy of game playing. Conclusions are presented in
Section 6.
3. GAME MODEL
The game architecture is based on players distributed across the
Internet and exchanging DR vectors to each other. The DR
vectors could either be sent directly from one player to another
(peerto-peer model) or could be sent through a game server which
receives the DR vector from a player and forwards it to other players
(client-server model). As mentioned before, we assume
synchronized clocks among the participating players.
Each DR vector sent from one player to another specifies the
trajectory of exactly one player/entity. We assume a linear DR vector
in that the information contained in the DR vector is only enough at
the receiving player to compute the trajectory and render the entity
in a straight line path. Such a DR vector contains information about
the starting position and velocity of the player/entity where the
velocity is constant (other types of DR vectors include quadratic DR vectors, which specify the acceleration of the entity, and cubic spline DR vectors, which consider the starting position and velocity and the ending position and velocity of the entity). Thus, a DR vector sent by a player specifies
the current time at the player when the DR vector is computed (not
the time at which this DR vector is sent to the other players as we
will explain later), the current position of the player/entity in terms
of the x, y, z coordinates and the velocity vector in the direction
of x, y and z coordinates. Specifically, the i-th DR vector sent by player j about the k-th entity is denoted by DR^j_ik and is represented by the following tuple (T^j_ik, x^j_ik, y^j_ik, z^j_ik, vx^j_ik, vy^j_ik, vz^j_ik).
Without loss of generality, in the rest of the discussion, we
consider a sequence of DR vectors sent by only one player and for
only one entity. For simplicity, we consider a two dimensional
game space rather than a three dimensional one. Hence we use
DRi to denote the ith
such DR vector represented as the tuple
(Ti, xi, yi, vxi, vyi). The receiving player computes the starting
position for the entity based on xi, yi and the time difference
between when the DR vector is received and the time Ti at which it
was computed. Note that the computation of time difference is
feasible since all the clocks are synchronized. The receiving player
then uses the velocity components to project and render the
trajectory of the entity. This trajectory is followed until a new DR vector
is received which changes the position and/or velocity of the entity.
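As a concrete illustration of this model, the sketch below (Python; the class and function names are ours and not taken from any game implementation) shows how a receiver whose clock is synchronized with the sender places and extrapolates an entity from the most recent linear DR vector.

```python
from dataclasses import dataclass

@dataclass
class DRVector:
    T: float    # synchronized time at which the DR vector was computed
    x: float    # position when the DR vector was computed
    y: float
    vx: float   # constant velocity components
    vy: float

def render_position(dr: DRVector, now: float):
    """Where the receiver renders the entity at (synchronized) time 'now',
    by extrapolating the latest DR vector along its velocity."""
    dt = now - dr.T
    return (dr.x + dr.vx * dt, dr.y + dr.vy * dt)

# A DR vector computed at time 10.0 that reaches the receiver 0.2 s later:
dr = DRVector(T=10.0, x=5.0, y=3.0, vx=1.0, vy=-0.5)
print(render_position(dr, now=10.2))   # initial placement on reception: (5.2, 2.9)
print(render_position(dr, now=11.0))   # rendered position until the next DR arrives
```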
Figure 1: Trajectories and deviations. [The figure shows the real, exported and placed paths over time; DR0 = (T0, x0, y0, vx0, vy0) is computed at time T0 and sent to the receiver, and DR1 = (T1, x1, y1, vx1, vy1) is computed at time T1 and sent to the receiver, arriving after delays dt0 and dt1, respectively.]
Based on this model, Figure 1 illustrates the sending and receiving of DR vectors and the different errors that are encountered. The
figure shows the reception of DR vectors at a player (henceforth
called the receiver). The horizontal axis shows the time which is
synchronized among all the players. The vertical axis tries to
conceptually capture the two-dimensional position of an entity.
Assume that at time T0 a DR vector DR0 is computed by the sender
and immediately sent to the receiver. Assume that DR0 is received
at the receiver after a delay of dt0 time units. The receiver
computes the initial position of the entity as (x0 + vx0 × dt0, y0 +
vy0 × dt0) (shown as point E). The thick line EBD represents the
projected and rendered trajectory at the receiver based on the
velocity components vx0 and vy0 (placed path). At time T1 a DR vector
DR1 is computed for the same entity and immediately sent to the
receiver (normally, DR vectors are computed not on a periodic basis but on demand, when the deviation between the real path and the path exported by the previous DR vector exceeds a threshold). Assume that DR1 is received at the receiver after a delay
of dt1 time units. When this DR vector is received, assume that the
entity is at point D. A new position for the entity is computed as
(x1 + vx1 × dt1, y1 + vy1 × dt1) and the entity is moved to this
position (point C). The velocity components vx1 and vy1 are used
to project and render this entity further.
Let us now consider the error due to network delay. Although
DR1 was computed at time T1 and sent to the receiver, it did not
reach the receiver until time T1 + dt1. This means, although the
exported path based on DR1 at the sender at time T1 is the
trajectory AC, until time T1 + dt1, at the receiver, this entity was being
rendered at trajectory BD based on DR0. Only at time T1 + dt1
did the entity get moved to point C from which point onwards the
exported and the placed paths are the same. The deviation between
the exported and placed paths creates an error component which we
refer to as the export error. A way to represent the export error is
to compute the integral of the distance between the two trajectories
over the time when they are out of sync. We represent the integral
of the distances between the placed and exported paths due to some
DR DRi over a time interval [t1, t2] as Err(DRi, t1, t2). In the
figure, the export error due to DR1 is computed as the integral of
the distance between the trajectories AC and BD over the time
interval [T1, T1 + dt1]. Note that there could be other ways of
representing this error as well, but in this paper, we use the integral of
the distance between the two trajectories as a measure of the export
error. Note that there would have been an export error created due
to the reception of DR0 at which time the placed path would have
been based on a previous DR vector. This is not shown in the figure
but it serves to remind the reader that the export error is cumulative
when a sequence of DR vectors are received. Starting from time
T1 onwards, there is a deviation between the real and the exported
paths. As we discussed earlier, this export error is unavoidable.
The above figure and example illustrates one receiver only. But
in reality, DR vectors DR0 and DR1 are sent by the sender to all
the participating players. Each of these players receives DR0 and
DR1 after varying delays thereby creating different export error
values at different players. The goal of the DR vector scheduling
algorithm to be described in the next section is to make this
(cumulative) export error equal at every player independently for each of
the entities that make up the game.
4. SCHEDULING ALGORITHM FOR SENDING DR VECTORS
In Section 3 we showed how delay from the sender of a new DR
vector to the receiver of the DR vector could lead to export error
because of the deviation of the placed path from the exported path
at the receiver until this new DR vector is received. We also
mentioned that the goal of the DR vector scheduling algorithm is to
make the export error equal at all receivers over a period of time.
Since the game is played in a distributed environment, it makes
sense for the sender of an entity to keep track of all the errors at the
receivers and try to make them equal. However, the sender cannot
know the actual error at a receiver till it gets some information
regarding the error back from the receiver. Our algorithm estimates
the error to compute a schedule to send DR vectors to the receivers
and corrects the error when it gets feedbacks from the receivers. In
this section we provide motivations for the algorithm and describe
the steps it goes through. Throughout this section, we will use the
following example to illustrate the algorithm.
Figure 2: DR vector flow between a sender and two receivers and the evolution of estimated and actual placed paths at the receivers. DR0 = (T0, T0, x0, y0, vx0, vy0), sent at time T0 to both receivers. DR1 = (T1, T^1_1, x1, y1, vx1, vy1) sent at time T^1_1 = T1 + δ1 to receiver 1, and DR1 = (T1, T^2_1, x1, y1, vx1, vy1) sent at time T^2_1 = T1 + δ2 to receiver 2. [The figure marks, for each receiver, the times at which DR1 is estimated to be received and actually received, together with the exported path and the placed paths at receivers 1 and 2.]
Consider the example in Figure 2. The figure shows a single
sender sending DR vectors for an entity to two different receivers
1 and 2. DR0 computed at T0 is sent and received by the receivers
sometime between T0 and T1 at which time they move the location
of the entity to match the exported path. Thus, the path of the
entity is shown only from the point where the placed path matches
the exported path for DR0. Now consider DR1. At time T1, DR1
is computed by the sender but assume that it is not immediately
sent to the receivers and is only sent after time δ1 to receiver 1
(at time T^1_1 = T1 + δ1) and after time δ2 to receiver 2 (at time T^2_1 = T1 + δ2). Note that the sender includes the sending
timestamp with the DR vector as shown in the figure. Assume that
the sender estimates (it will be clear shortly why the sender has to
estimate the delay) that after a delay of dt1, receiver 1 will receive
it, will use the coordinate and velocity parameters to compute the
entity's current location and move it there (point C), and from this time onwards the exported and the placed paths will become the same. However, in reality, receiver 1 receives DR1 after a delay of da1 (which is less than the sender's estimate dt1), and moves
the corresponding entity to point H. Similarly, the sender estimates
that after a delay of dt2, receiver 2 will receive DR1, will compute
the current location of the entity and move it to that point (point
E), while in reality it receives DR1 after a delay of da2 > dt2 and
moves the entity to point N. The other points shown on the placed
and exported paths will be used later in the discussion to describe
different error components.
4.1 Computation of Relative Export Error
Referring back to the discussion from Section 3, from the sender's
perspective, the export error at receiver 1 due to DR1 is given
by Err(DR1, T1, T1 + δ1 + dt1) (the integral of the distance
between the trajectories AC and DB over the time interval [T1, T1 +
δ1 + dt1]) of Figure 2. This is due to the fact that the sender uses
the estimated delay dt1 to compute this error. Similarly, the
export error from the sender's perspective at receiver 2 due to DR1
is given by Err(DR1, T1, T1 + δ2 + dt2) (the integral of the
distance between the trajectories AE and DF over the time interval
[T1, T1 + δ2 + dt2]). Note that the above errors from the sender's
perspective are only estimates. In reality, the export error will be
either smaller or larger than the estimated value, based on whether
the delay estimate was larger or smaller than the actual delay that
DR1 experienced. This difference between the estimated and the
actual export error is the relative export error (which could either
be positive or negative) which occurs for every DR vector that is
sent and is accumulated at the sender.
The concept of relative export error is illustrated in Figure 2.
Since the actual delay to receiver 1 is da1, the export error
induced by DR1 at receiver 1 is Err(DR1, T1, T1 + δ1 + da1).
This means, there is an error in the estimated export error and the
sender can compute this error only after it gets a feedback from the
receiver about the actual delay for the delivery of DR1, i.e., the
value of da1. We propose that once receiver 1 receives DR1, it
sends the value of da1 back to the sender. The receiver can
compute this information as it knows the time at which DR1 was sent
(T^1_1 = T1 + δ1, which is appended to the DR vector as shown in
Figure 2) and the local receiving time (which is synchronized with
the sender's clock). Therefore, the sender computes the relative
export error for receiver 1, represented using R1 as
R1 = Err(DR1, T1, T1 + δ1 + dt1)
− Err(DR1, T1, T1 + δ1 + da1)
= Err(DR1, T1 + δ1 + dt1, T1 + δ1 + da1)
Similarly the relative export error for receiver 2 is computed as
R2 = Err(DR1, T1, T1 + δ2 + dt2)
− Err(DR1, T1, T1 + δ2 + da2)
= Err(DR1, T1 + δ2 + dt2, T1 + δ2 + da2)
Note that R1 > 0 as da1 < dt1, and R2 < 0 as da2 > dt2.
Relative export errors are computed by the sender as and when it
receives the feedback from the receivers. This example shows the
relative export error values after DR1 is sent and the corresponding
feedbacks are received.
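The computation of a relative export error from a receiver's feedback can be sketched as follows (Python; the function name is ours, and err_fn stands for any routine that evaluates Err(DR1, t1, t2), such as the one detailed in Section 4.3).

```python
def relative_export_error(err_fn, T1, delta, dt_est, da_actual):
    """R for one receiver and one DR vector: estimated export error minus
    actual export error. Its magnitude is Err over the interval between the
    estimated and the actual arrival times; the sign is positive when the
    actual delay was shorter than the estimate (da < dt), negative otherwise."""
    t_est = T1 + delta + dt_est      # arrival time assumed when scheduling
    t_act = T1 + delta + da_actual   # arrival time reported by the receiver
    magnitude = err_fn(min(t_est, t_act), max(t_est, t_act))
    return magnitude if da_actual < dt_est else -magnitude
```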
4.2 Equalization of Error Among Receivers
We now explain what we mean by making the errors equal
at all the receivers and how this can be achieved. As stated
before, the sender keeps estimates of the delays to the receivers, dt1
and dt2 in the example of Figure 2. This says that at time T1
when DR1 is computed, the sender already knows how long it may
take messages carrying this DR vector to reach the receivers. The
sender uses this information to compute the export errors, which are
Err(DR1, T1, T1 + δ1 + dt1) and Err(DR1, T1, T1 + δ2 + dt2)
for receivers 1 and 2, respectively. Note that the areas of these error
components are a function of δ1 and δ2 as well as the network
delays dt1 and dt2. If we are to make the export errors due to DR1
the same at both receivers, the sender needs to choose δ1 and δ2
such that
Err(DR1, T1, T1 + δ1 + dt1) = Err(DR1, T1, T1 + δ2 + dt2).
But when DR1 was computed at time T1, there could already have been accumulated relative export errors due to previous DR vectors (DR0 and the ones before). Let us represent the accumulated relative error up to DRi for receiver j as R^i_j. To accommodate these accumulated relative errors, the sender should now choose δ1 and δ2 such that
R^0_1 + Err(DR1, T1, T1 + δ1 + dt1) = R^0_2 + Err(DR1, T1, T1 + δ2 + dt2).
The δi determines the scheduling instant of the DR vector at the sender for receiver i. This method of computation of the δ's ensures
that the accumulated export error (i.e., total actual error) for each
receiver equalizes at the transmission of each DR vector.
In order to establish this, assume that the feedback for DR vector Di from a receiver comes to the sender before the schedule for Di+1 is computed. Let S^i_m and A^i_m denote the estimated error for receiver m used for computing the schedule for Di and the accumulated error for receiver m computed after receiving the feedback for Di, respectively. Then R^i_m = A^i_m − S^i_m. In order to compute the schedule instances (i.e., the δ's) for Di, for any pair of receivers m and n, we set R^{i−1}_m + S^i_m = R^{i−1}_n + S^i_n. The following theorem establishes the fact that the accumulated export error is equalized at every scheduling instant.

THEOREM 4.1. When the schedule instances for sending Di are computed for any pair of receivers m and n, the following condition is satisfied:
Σ_{k=1}^{i−1} A^k_m + S^i_m = Σ_{k=1}^{i−1} A^k_n + S^i_n.

Proof: By induction. Assume that the premise holds for some i. We show that it holds for i+1. The base case for i = 1 holds since initially R^0_m = R^0_n = 0, and S^1_m = S^1_n is used to compute the scheduling instances.
In order to compute the schedule for Di+1, we first compute the relative errors as
R^i_m = A^i_m − S^i_m, and R^i_n = A^i_n − S^i_n.
Then to compute the δ's we set
R^i_m + S^{i+1}_m = R^i_n + S^{i+1}_n
A^i_m − S^i_m + S^{i+1}_m = A^i_n − S^i_n + S^{i+1}_n.
Adding the condition of the premise to both sides we get
Σ_{k=1}^{i} A^k_m + S^{i+1}_m = Σ_{k=1}^{i} A^k_n + S^{i+1}_n.
4.3 Computation of the Export Error
Let us now consider how the export errors can be computed.
From the previous section, to find δ1 and δ2 we need to find
Err(DR1, T1, T1 +δ1 +dt1) and Err(DR1, T1, T1 +δ2 +dt2).
Note that the values of R^0_1 and R^0_2 are already known at the sender. Consider the computation of Err(DR1, T1, T1 + δ1 + dt1). This is the integral of the distance between the trajectories AC due to DR1 and BD due to DR0. From DR0 and DR1, point A is (X1, Y1) = (x1, y1) and point B is (X0, Y0) = (x0 + (T1 − T0) × vx0, y0 + (T1 − T0) × vy0). The trajectory AC can be represented as a function of time as (X1(t), Y1(t)) = (X1 + vx1 × t, Y1 + vy1 × t) and the trajectory BD can be represented as (X0(t), Y0(t)) = (X0 + vx0 × t, Y0 + vy0 × t).
The distance between the two trajectories as a function of time then becomes
dist(t) = √((X1(t) − X0(t))^2 + (Y1(t) − Y0(t))^2)
        = √(((X1 − X0) + (vx1 − vx0)t)^2 + ((Y1 − Y0) + (vy1 − vy0)t)^2)
        = √(((vx1 − vx0)^2 + (vy1 − vy0)^2)t^2 + 2((X1 − X0)(vx1 − vx0) + (Y1 − Y0)(vy1 − vy0))t + (X1 − X0)^2 + (Y1 − Y0)^2).
Let
a = (vx1 − vx0)^2 + (vy1 − vy0)^2,
b = 2((X1 − X0)(vx1 − vx0) + (Y1 − Y0)(vy1 − vy0)),
c = (X1 − X0)^2 + (Y1 − Y0)^2.
Then dist(t) can be written as
dist(t) = √(a t^2 + b t + c).
Then Err(DR1, t1, t2) for some time interval [t1, t2] becomes
∫_{t1}^{t2} dist(t) dt = ∫_{t1}^{t2} √(a t^2 + b t + c) dt.
A closed form solution for the indefinite integral is
∫ √(a t^2 + b t + c) dt = ((2at + b)√(at^2 + bt + c)) / (4a)
  + (c / (2√a)) ln((b/2 + at)/√a + √(at^2 + bt + c))
  − (b^2 / (8a^(3/2))) ln((b/2 + at)/√a + √(at^2 + bt + c)).
Err(DR1, T1, T1 +δ1 +dt1) and Err(DR1, T1, T1 +δ2 +dt2)
can then be calculated by applying the appropriate limits to the
above solution. In the next section, we consider the computation
of the δ's for N receivers.
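The sketch below (Python) computes the coefficients a, b and c for a pair of DR vectors and evaluates Err(DR1, t1, t2). For robustness it uses a simple trapezoidal quadrature of √(a·t^2 + b·t + c) rather than the closed form above (which requires a > 0); the function names are ours, and the DR arguments are assumed to carry the fields T, x, y, vx, vy (as in the earlier DRVector sketch).

```python
import math

def err_coeffs(dr1, dr0, T1):
    """Coefficients of dist(t)^2 = a*t^2 + b*t + c, with t measured from T1:
    the exported trajectory of DR1 starts at (x1, y1), and the placed trajectory
    is DR0 extrapolated to time T1."""
    X1, Y1 = dr1.x, dr1.y
    X0 = dr0.x + (T1 - dr0.T) * dr0.vx
    Y0 = dr0.y + (T1 - dr0.T) * dr0.vy
    dvx, dvy = dr1.vx - dr0.vx, dr1.vy - dr0.vy
    dX, dY = X1 - X0, Y1 - Y0
    a = dvx * dvx + dvy * dvy
    b = 2.0 * (dX * dvx + dY * dvy)
    c = dX * dX + dY * dY
    return a, b, c

def err(dr1, dr0, T1, t1, t2, steps=1000):
    """Err(DR1, t1, t2): integral of dist(t) over [t1, t2] (absolute times),
    approximated with the trapezoidal rule."""
    a, b, c = err_coeffs(dr1, dr0, T1)
    def dist(t):
        s = t - T1
        return math.sqrt(max(a * s * s + b * s + c, 0.0))
    h = (t2 - t1) / steps
    total = 0.5 * (dist(t1) + dist(t2))
    total += sum(dist(t1 + k * h) for k in range(1, steps))
    return total * h
```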
4.4 Computation of Scheduling Instants
We again look at the computation of the δ's by referring to Figure 2. The sender chooses δ1 and δ2 such that R^0_1 + Err(DR1, T1, T1 + δ1 + dt1) = R^0_2 + Err(DR1, T1, T1 + δ2 + dt2). If R^0_1 and R^0_2 are both zero, then δ1 and δ2 should be chosen such that Err(DR1, T1, T1 + δ1 + dt1) = Err(DR1, T1, T1 + δ2 + dt2). This equality will hold if δ1 + dt1 = δ2 + dt2. Thus, if there is no accumulated relative export error, all that the sender needs to do is to choose the δ's in
such a way that they counteract the difference in the delay to the
two receivers, so that they receive the DR vector at the same time.
As discussed earlier, because the sender is not able to a priori learn
the delay, there will always be an accumulated relative export error
from a previous DR vector that does have to be taken into account.
To delve deeper into this, consider the computation of the export error as illustrated in the previous section. To compute the δ's we require that R^0_1 + Err(DR1, T1, T1 + δ1 + dt1) = R^0_2 + Err(DR1, T1, T1 + δ2 + dt2). That is,
R^0_1 + ∫_{T1}^{T1+δ1+dt1} dist(t) dt = R^0_2 + ∫_{T1}^{T1+δ2+dt2} dist(t) dt.
That is,
R^0_1 + ∫_{T1}^{T1+dt1} dist(t) dt + ∫_{T1+dt1}^{T1+dt1+δ1} dist(t) dt = R^0_2 + ∫_{T1}^{T1+dt2} dist(t) dt + ∫_{T1+dt2}^{T1+dt2+δ2} dist(t) dt.
The components R^0_1 and R^0_2 are already known to (or estimated by) the sender. Further, the error components ∫_{T1}^{T1+dt1} dist(t) dt and ∫_{T1}^{T1+dt2} dist(t) dt can be a priori computed by the sender using the estimated values of dt1 and dt2. Let us use E1 to denote R^0_1 + ∫_{T1}^{T1+dt1} dist(t) dt and E2 to denote R^0_2 + ∫_{T1}^{T1+dt2} dist(t) dt. Then, we require that
E1 + ∫_{T1+dt1}^{T1+dt1+δ1} dist(t) dt = E2 + ∫_{T1+dt2}^{T1+dt2+δ2} dist(t) dt.
Assume that E1 > E2. Then, for the above equation to hold, we require that
∫_{T1+dt1}^{T1+dt1+δ1} dist(t) dt < ∫_{T1+dt2}^{T1+dt2+δ2} dist(t) dt.
To make the game as fast as possible within this framework, the δ values should be made as small as possible so that DR vectors are sent to the receivers as soon as possible subject to the fairness requirement. Given this, we would choose δ1 to be zero and compute δ2 from the equation
E1 = E2 + ∫_{T1+dt2}^{T1+dt2+δ2} dist(t) dt.
In general, if there are N receivers 1, . . . , N, when a sender generates a DR vector and decides to schedule it to be sent, it first computes the Ei values for all of them from the accumulated relative export errors and the estimates of the delays. Then, it finds the largest of these values. Let Ek be the largest value. The sender makes δk zero and computes the rest of the δ's from the equality
Ei + ∫_{T1+dti}^{T1+dti+δi} dist(t) dt = Ek,  ∀i, 1 ≤ i ≤ N, i ≠ k.   (1)
The δ's thus obtained give the scheduling instants of the DR vector for the receivers.
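A minimal sketch of this computation follows (Python; the bisection, its tolerance and the delta_max cap are our choices rather than anything prescribed by the algorithm). It picks the receiver with the largest E and solves Equation (1) for every other receiver, using any routine extra_err(i, delta) that returns the integral of dist(t) over [T1 + dt_i, T1 + dt_i + delta], e.g., built from the err() sketch above.

```python
def solve_deltas(E, extra_err, tol=1e-6, delta_max=10.0):
    """E[i] = R_i + integral of dist(t) over [T1, T1 + dt_i] for receiver i.
    Returns the scheduling offsets delta_i: zero for the receiver k with the
    largest E, and for every other receiver the delta that makes
    E[i] + extra_err(i, delta) equal to E[k] (found by bisection, since
    extra_err is non-decreasing in delta)."""
    k = max(range(len(E)), key=lambda i: E[i])
    deltas = [0.0] * len(E)
    for i in range(len(E)):
        if i == k:
            continue
        lo, hi = 0.0, delta_max
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if E[i] + extra_err(i, mid) < E[k]:
                lo = mid
            else:
                hi = mid
        deltas[i] = 0.5 * (lo + hi)
    return deltas
```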
4.5 Steps of the Scheduling Algorithm
For the purpose of the discussion below, as before, let us denote the accumulated relative export error at a sender for receiver k up until DRi as R^i_k. Let us denote the scheduled delay at the sender before DRi is sent to receiver k as δ^i_k. Given the above discussion, the algorithm steps are as follows:
1. The sender computes DRi at (say) time Ti and then computes δ^i_k and R^{i−1}_k, ∀k, 1 ≤ k ≤ N, based on the estimates of the delays dtk, ∀k, 1 ≤ k ≤ N, as per Equation (1). It schedules DRi to be sent to receiver k at time Ti + δ^i_k.
2. The DR vectors are sent to the receivers at the scheduled times and are received after delays of dak, ∀k, 1 ≤ k ≤ N, where dak may be smaller or larger than dtk. The receivers send the value of dak back to the sender (a receiver can compute this value based on the time stamps on the DR vector as described earlier).
3. The sender computes R^i_k as described earlier and illustrated in Figure 2. The sender also recomputes (using an exponential averaging method similar to round-trip time estimation in TCP [10]) the estimate of the delay dtk from the new value of dak for receiver k.
4. Go back to Step 1 to compute DRi+1 when it is required and follow the steps of the algorithm to schedule and send this DR vector to the receivers (a sketch of this loop is given below).
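Putting the steps above together, the sketch below (Python; the class name and the 0.125 smoothing gain are our choices, the gain mirroring the TCP round-trip-time estimator cited in step 3) shows the per-receiver bookkeeping a sender maintains: a delay estimate dt_k, the pending relative export error R_k, and the schedule computed at each DR generation using solve_deltas() from the previous sketch.

```python
class DRScheduler:
    """Per-receiver bookkeeping for the DR vector scheduling algorithm."""

    def __init__(self, receivers, initial_delay, alpha=0.125):
        self.dt = {r: initial_delay for r in receivers}  # delay estimates dt_k
        self.R = {r: 0.0 for r in receivers}             # pending relative export errors R_k
        self.alpha = alpha                               # exponential-averaging gain

    def schedule(self, base_err, extra_err):
        """Step 1: base_err[r] is the integral of dist(t) over [T_i, T_i + dt_r];
        extra_err(r, delta) is the integral over [T_i + dt_r, T_i + dt_r + delta].
        The pending R_r is folded into E_r and consumed by this schedule.
        Returns a map receiver -> delta (send the DR vector at time T_i + delta)."""
        receivers = list(self.dt)
        E = [self.R[r] + base_err[r] for r in receivers]
        for r in receivers:
            self.R[r] = 0.0
        deltas = solve_deltas(E, lambda i, d: extra_err(receivers[i], d))
        return dict(zip(receivers, deltas))

    def feedback(self, r, relative_error, da):
        """Steps 2-3: fold the relative export error computed from the receiver's
        reported delay da into R_r (even if it arrives late, as in Section 4.6),
        and refine the delay estimate by exponential averaging."""
        self.R[r] += relative_error
        self.dt[r] = (1.0 - self.alpha) * self.dt[r] + self.alpha * da
```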
4.6 Handling Cases in Practice
So far we implicitly assumed that DRi is sent out to all receivers before a decision is made to compute the next DR vector DRi+1, and that the receivers send the value of dak corresponding to DRi and this information reaches the sender before it computes DRi+1, so that it can compute R^i_k and then use it in the computation of δ^{i+1}_k.
Two issues need consideration with respect to the above algorithm
when it is used in practice.
• It may so happen that a new DR vector is computed even
before the previous DR vector is sent out to all receivers.
How will this situation be handled?
• What happens if the feedback does not arrive before DRi+1
is computed and scheduled to be sent?
Let us consider the first scenario. We assume that DRi has been scheduled to be sent and the scheduling instants are such that δ^i_1 < δ^i_2 < · · · < δ^i_N. Assume that DRi+1 is to be computed (because the real path has deviated from the path exported by DRi by more than the threshold) at time Ti+1, where Ti + δ^i_k < Ti+1 < Ti + δ^i_{k+1}. This means DRi has been sent only to receivers up to k in the scheduled order. In our algorithm, in this case, the scheduled delay ordering queue is flushed, which means DRi is not sent to the receivers still queued to receive it, but a new scheduling order is computed for all the receivers to send DRi+1.
For those receivers who have been sent DRi, assume for now that daj, 1 ≤ j ≤ k, has been received from all of them (the scenario where daj has not been received will be considered as part of the second scenario later). For these receivers, E^i_j, 1 ≤ j ≤ k, can be computed. For those receivers j, k + 1 ≤ j ≤ N, to whom DRi was not sent, E^i_j does not apply; this case is illustrated in Figure 3.
Figure 3: Schedule computation when DRi is not sent to receiver j, k + 1 ≤ j ≤ N. [The figure shows the exported path, the placed path at receiver k+1, the point where DRi+1 is computed by the sender and DRi is removed from the queue for receivers k+1 to N, and the scheduling of DRi+1 for receiver k+1.]
Consider a receiver j, k + 1 ≤ j ≤ N, to whom DRi was not sent. For such a receiver j, when DRi+1 is to be scheduled and δ^{i+1}_j needs to be computed, the total export error is the accumulated relative export error at time Ti when the schedule for DRi was computed, plus the integral of the distance between the two trajectories AC and BD of Figure 3 over the time interval [Ti, Ti+1 + δ^{i+1}_j + dtj]. Note that this integral is given by Err(DRi, Ti, Ti+1) + Err(DRi+1, Ti+1, Ti+1 + δ^{i+1}_j + dtj). Therefore, instead of E^i_j in Equation (1), we use the value R^{i−1}_j + Err(DRi, Ti, Ti+1) + Err(DRi+1, Ti+1, Ti+1 + δ^{i+1}_j + dtj), where R^{i−1}_j is the relative export error used when the schedule for DRi was computed.
Now consider the second scenario. Here the feedback dak corresponding to DRi has not arrived before DRi+1 is computed and scheduled. In this case, R^i_k cannot be computed. Thus, in the computation of δk for DRi+1, it is assumed to be zero. We do assume that a reliable mechanism is used to send dak back to the sender. When this information arrives at a later time, R^i_k will be computed and accumulated into future relative export errors (for example, into R^{i+1}_k if dak is received before DRi+2 is computed) and used in the computation of δk when a future DR vector is to be scheduled (for example, DRi+2).
4.7 Experimental Results
In order to evaluate the effectiveness and quantify benefits
obtained through the use of the scheduling algorithm, we implemented
the proposed algorithm in BZFlag (Battle Zone Flag) [11] game.
It is a first-person shooter game where the players in teams drive
tanks and move within a battle field. The aim of the players is to
navigate and capture flags belonging to the other team and bring
them back to their own area. The players shoot each other"s tanks
using shooting bullets. The movement of the tanks as well as that
of the shots are exchanged among the players using DR vectors.
We have modified the implementation of BZFlag to
incorporate synchronized clocks among the players and the server and
exchange time-stamps with the DR vector. We set up a testbed with
four players running the instrumented version of BZFlag, with one
as a sender and the rest as receivers. The scheduling approach and
the base case where each DR vector was sent to all the receivers
concurrently at every trigger point were implemented in the same
run by tagging the DR vectors according to the type of approach
used to send the DR vector. NISTNet [12] was used to introduce
delays across the sender and the three receivers. Mean delays of
800ms, 500ms and 200ms were introduced between the sender and
first, second and the third receiver, respectively. We introduce a
variance of 100 msec (to the mean delay of each receiver) to model
variability in delay. The sender logged the errors of each receiver
every 100 milliseconds for both the scheduling approach and the
base case. The sender also calculated the standard deviation and
the mean of the accumulated export error of all the receivers every
100 milliseconds. Figure 4 plots the mean and standard deviation
of the accumulated export error of all the receivers in the
scheduling case against the base case. Note that the x-axis of these graphs
(and the other graphs that follow) represents the system time when
the snapshot of the game was taken.
Observe that the standard deviation of the error with scheduling
is much lower as compared to the base case. This implies that the
accumulated errors of the receivers in the scheduling case are closer
to one another. This shows that the scheduling approach achieves
fairness among the receivers even if they are at different distances
(i.e, latencies) from the sender.
Observe that the mean of the accumulated error increased
many-fold with scheduling in comparison to the base case. Further
exploration for the reason for the rise in the mean led to the conclusion
that every time the DR vectors are scheduled in a way to equalize
the total error, it pushes each receiver's total error higher. Also, as
the accumulated error has an estimated component, the schedule is
not accurate to equalize the errors for the receivers, leading to the
DR vector reaching earlier or later than the actual schedule. In
either case, the error is not equalized and if the DR vector reaches
late, it actually increases the error for a receiver beyond the highest
accumulated error. This means that at the next trigger, this receiver
will be the one with the highest error and every other receiver's error
will be pushed to this error value. This flip-flop effect leads to
the increase in the accumulated error for all the receivers.
The scheduling for fairness leads to the decrease in standard
deviation (i.e., increases the fairness among different players), but it
comes at the cost of higher mean error, which may not be a
desirable feature. This led us to explore different ways of equalizing the
accumulated errors. The approach discussed in the following
section is a heuristic approach based on the following idea. Using the
same amount of DR vectors over time as in the base case, instead
of sending the DR vectors to all the receivers at the same frequency
as in the base case, if we can increase the frequency of sending
the DR vectors to the receiver with higher accumulated error and
decrease the frequency of sending DR vectors to the receiver with
lower accumulated error, we can equalize the export error of all
receivers over time. At the same time we wish to decrease the
error of the receiver with the highest accumulated error in the base
case (of course, this receiver would be sent more DR vectors than
in the base case). We refer to such an algorithm as a budget based
algorithm.
5. BUDGET BASED ALGORITHM
In a game, the sender of an entity sends DR vectors to all the
receivers every time a threshold is crossed by the entity. The lower the threshold, the more DR vectors are generated during a given time
period. Since the DR vectors are sent to all the receivers and the
network delay between the sender-receiver pairs cannot be avoided,
the before export error (note that the after export error is eliminated by using synchronized clocks among the players) with the most distant player will always be higher than the rest.
Figure 4: Mean and standard deviation of error with scheduling and without (i.e., base case). [Two plots: mean accumulated error and standard deviation of accumulated error versus time in seconds, each showing the base case and Scheduling Algorithm #1.]
In order to mitigate the imbalance in the
error, we propose to send DR vectors selectively to different
players based on the accumulated errors of these players. The budget
based algorithm is based on this idea and there are two variations
of it. One is a probabilistic budget based scheme and the other, a
deterministic budget based scheme.
5.1 Probabilistic budget based scheme
The probabilistic budget based scheme has three main steps: a)
lower the dead reckoning threshold but at the same time keep the
total number of DRs sent the same as the base case, b) at every
trigger, probabilistically pick a player to send the DR vector to,
and c) send the DR vector to the chosen player. These steps are
described below.
The lowering of DR threshold is implemented as follows.
Lowering the threshold is equivalent to increasing the number of trigger
points where DR vectors are generated. Suppose the threshold is
such that the number of triggers caused by it in the base case is t and at each trigger n DR vectors are sent by the sender, which results in a total of nt DR vectors. Our goal is to keep the total number of DR vectors sent by the sender fixed at nt, but lower the number of DR vectors sent at each trigger (i.e., not send the DR vector to all the receivers). Let n' and t' be the number of DR vectors sent at each trigger and the number of triggers, respectively, in the modified case. We want to ensure n't' = nt. Since we want to increase the number of trigger points, i.e., t' > t, this would mean that n' < n. That is, not all receivers will be sent the DR vector at every trigger.
In the probabilistic budget based scheme, at each trigger, a probability is calculated for each receiver to be sent a DR vector and only one receiver is sent the DR (n' = 1). This probability is based on the relative weights of the receivers' accumulated errors. That is, a receiver with a higher accumulated error will have a higher probability of being sent the DR vector. Suppose the accumulated errors for three players are a1, a2 and a3, respectively. Then the probability of player 1 receiving the DR vector would be a1/(a1 + a2 + a3), and similarly for the other players. Once a player is picked, the DR vector is sent to that player.
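This selection step can be sketched as follows (Python; the use of random.choices for the weighted draw and the variable names are ours).

```python
import random

def pick_receiver(accumulated_error):
    """Pick exactly one receiver for this trigger (n' = 1), with probability
    proportional to its accumulated export error."""
    receivers = list(accumulated_error)
    weights = [accumulated_error[r] for r in receivers]
    if sum(weights) == 0:
        return random.choice(receivers)   # no error accumulated yet: pick uniformly
    return random.choices(receivers, weights=weights, k=1)[0]

# With accumulated errors a1 = 500, a2 = 300 and a3 = 200, player 1 is picked
# with probability 500 / (500 + 300 + 200) = 0.5.
print(pick_receiver({"player1": 500.0, "player2": 300.0, "player3": 200.0}))
```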
To compare the probabilistic budget based algorithm with the
base case, we needed to lower the threshold for the base case (for a fair comparison). As the dead reckoning threshold in the base case was already very fine, it was decided that, instead of lowering the threshold, the probabilistic budget based approach would be compared against a modified base case that uses the same (normal) threshold as the budget based algorithm, but in which only every third trigger is actually used to send out a DR vector to all three receivers used in our experiments. We call this the 1/3 base case, as it results in one third the number of DR vectors being sent compared to the base case.
The budget per trigger for the probability based approach was
calculated as one DR vector at each trigger as compared to three DR
vectors at every third trigger in the 1/3 base case; thus the two cases
lead to the same number of DR vectors being sent out over time.
In order to evaluate the effectiveness of the probabilistic budget
based algorithm, we instrumented the BZFlag game to use this
approach. We used the same testbed consisting of one sender and
three receivers with delays of 800ms, 500ms and 200ms from the
sender and with low delay variance (100ms) and moderate delay
variance (180ms). The results are shown in Figures 5 and 6. As
mentioned earlier, the x-axis of these graphs represents the system
time when the snapshot of the game was taken. Observe from the
figures that the standard deviation of the accumulated error among
the receivers with the probabilistic budget based algorithm is less
than the 1/3 base case and the mean is a little higher than the 1/3
base case. This implies that the game is fairer as compared to the
1/3 base case at the cost of increasing the mean error by a small
amount as compared to the 1/3 base case.
The increase in mean error in the probabilistic case compared to
the 1/3 base case can be attributed to the fact that even though the probabilistic approach on average sends the same number of DR vectors as the 1/3 base case, it sometimes sends DR vectors to a receiver less frequently and sometimes more frequently than the 1/3 base case due to its probabilistic nature. When a receiver does not receive a DR vector for a long time, the receiver's trajectory drifts farther and farther from the sender's trajectory and hence the rate
of buildup of the error at the receiver is higher. At times when
a receiver receives DR vectors more frequently, it builds up error
at a lower rate but there is no way of reversing the error that was
built up when it did not receive a DR vector for a long time. This
leads the receivers to build up more error in the probabilistic case
as compared to the 1/3 base case where the receivers receive a DR
vector almost periodically.
Figure 5: Mean and standard deviation of error for different algorithms (including budget based algorithms) for low delay variance. [Two plots: mean accumulated error and standard deviation of accumulated error versus time in seconds, each showing the 1/3 base case, the deterministic algorithm and the probabilistic algorithm.]
Figure 6: Mean and standard deviation of error for different algorithms (including budget based algorithms) for moderate delay variance. [Two plots: mean accumulated error and standard deviation of accumulated error versus time in seconds, each showing the 1/3 base case, the deterministic algorithm and the probabilistic algorithm.]
5.2 Deterministic budget based scheme
To bound the increase in mean error we decided to modify the
budget based algorithm to be deterministic. The first two steps
of the algorithm are the same as in the probabilistic algorithm; the
trigger points are increased to lower the threshold and accumulated
errors are used to compute the probability that a receiver will receive a DR vector. Once these steps are completed, a deterministic schedule for the receivers is computed as follows:
1. If there is any receiver(s) tagged to receive a DR vector at
the current trigger, the sender sends out the DR vector to the
respective receiver(s). If at least one receiver was sent a DR
vector, the sender calculates the probabilities of each receiver
receiving a DR vector as explained before and follows steps
2 to 6, else it does not do anything.
2. For each receiver, the probability value is multiplied with the
budget available at each trigger (which is set to 1 as explained
below) to give the frequency of sending the DR vector to each
receiver.
3. If any of the receiver"s frequency after multiplying with the
budget goes over 1, the receiver"s frequency is set as 1 and
the surplus amount is equally distributed to all the receivers
by adding the amount to their existing frequencies. This
process is repeated until all the receivers have a frequency of
less than or equal to 1. This is due to the fact that at a trigger
we cannot send more than one DR vector to the respective
receiver. That will be wastage of DR vectors by sending
redundant information.
4. (1/frequency) gives us the schedule at which the sender should
send DR vectors to the respective receiver. Credit obtained
previously (explained in step 5) if any is subtracted from the
schedule. Observe that the resulting value of the schedule
might not be an integer; hence, the value is rounded off by
taking the ceiling of the schedule. For example, if the
frequency is 1/3.5, this implies that we would like to have a DR
vector sent every 3.5 triggers. However, we are constrained
to send it at the 4th trigger giving us a credit of 0.5. When we
do send the DR vector next time, we would be able to send it
on the 3rd trigger because of the 0.5 credit.
5. The difference between the ceiling of the schedule and the schedule itself is the credit that the receiver has obtained; it is remembered and used the next time, as explained in step 4 (a code sketch of this computation appears after this list).
6. For each of those receivers who were sent a DR vector at
the current trigger, the receivers are tagged to receive the
next DR vector at the trigger that happens exactly schedule
(the ceiling of the schedule) number of times away from the
current trigger. Observe that no other receiver"s schedule is
modified at this point as they all are running a schedule
calculated at some previous point of time. Those schedules will
be automatically modified at the trigger when they are
scheduled to receive the next DR vector. At the first trigger, the
sender sends the DR vector to all the receivers and uses a
relative probability of 1/n for each receiver and follows the
steps 2 to 6 to calculate the next schedule for each receiver
in the same way as mentioned for other triggers. This
algorithm ensures that every receiver has a guaranteed schedule
of receiving DR vectors and hence there is no irregularity in
sending the DR vector to any receiver as was observed in the
budget based probabilistic algorithm.
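The schedule computation in steps 2 to 5 can be summarized by the following sketch. This is not the instrumented BZFlag code; it is a minimal Python illustration in which the redistribution of surplus frequency in step 3 is interpreted as being spread over the receivers that are still below 1, and the probability and credit lists are assumed to be supplied by the earlier steps of the algorithm:

import math

def deterministic_schedule(probabilities, credits, budget=1.0):
    # Step 2: frequency of DR vectors for each receiver = probability * budget.
    freq = [p * budget for p in probabilities]
    # Step 3: cap frequencies at 1 (at most one DR vector per trigger) and
    # redistribute the surplus equally until no frequency exceeds 1.
    while max(freq) > 1.0:
        surplus = sum(f - 1.0 for f in freq if f > 1.0)
        freq = [min(f, 1.0) for f in freq]
        under = [i for i, f in enumerate(freq) if f < 1.0]
        if not under:
            break
        for i in under:
            freq[i] += surplus / len(under)
    # Steps 4 and 5: the gap until the next DR vector is ceil(1/frequency - credit);
    # the amount by which the rounded-up gap overshoots becomes the new credit.
    gaps, new_credits = [], []
    for f, credit in zip(freq, credits):
        desired = 1.0 / f - credit
        gap = math.ceil(desired)
        gaps.append(gap)
        new_credits.append(gap - desired)
    return gaps, new_credits

For a receiver whose frequency works out to 1/3.5 and whose credit is 0, the desired gap of 3.5 triggers is rounded up to 4 and a credit of 0.5 is carried forward, so the following DR vector can go out after only 3 triggers, as in the example above.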
We used the testbed described earlier (three receivers with
varying delays) to evaluate the deterministic algorithm using the budget
of 1 DR vector per trigger so as to use the same number of DR
vectors as in the 1/3 base case. Results from our experiments are
shown in Figures 5 and 6. It can be observed that the standard deviation of error in the deterministic budget based algorithm is less than in the 1/3 base case, while the mean error is the same as in the 1/3 base case. This indicates that the deterministic algorithm is fairer than the 1/3 base case and at the same time does not increase the mean error, thereby leading to better game quality than the probabilistic algorithm.
In general, when comparing the deterministic approach to the
probabilistic approach, we found that the mean accumulated
error was always less in the deterministic approach. With respect to
standard deviation of the accumulated error, we found that in the
fixed or low variance cases, the deterministic approach was
generally lower, but in higher variance cases, it was harder to draw
conclusions as the probabilistic approach was sometimes better than
the deterministic approach.
6. CONCLUSIONS AND FUTURE WORK
In distributed multi-player games played across the Internet, object and player trajectories within the game space are exchanged in terms of DR vectors. Due to the variable delay between players, these DR vectors reach different players at different times. Receivers who are closer to the sender of a DR vector gain an unfair advantage, as they are able to render the sender's position more accurately in real time. In this paper, we first developed a model
for estimating the error in rendering player trajectories at the
receivers. We then presented an algorithm based on scheduling the
DR vectors to be sent to different players at different times thereby
equalizing the error at different players. This algorithm is aimed
at making the game fair to all players, but tends to increase the
mean error of the players. To counter this effect, we presented budget based algorithms in which the DR vectors are still scheduled to be sent to different players at different times, but the algorithm balances the need for fairness with the requirement that the error of the worst case players (who are furthest from the sender) is not increased compared to the base case (where all DR vectors are sent to all players every time a DR vector is generated). We presented two variations of the budget based algorithms and showed through experimentation that the algorithms reduce the standard deviation of the error, thereby making the game fairer, while keeping the mean error comparable to the base case.
7. REFERENCES
[1] S.Aggarwal, H. Banavar, A. Khandelwal, S. Mukherjee, and
S. Rangarajan, Accuracy in Dead-Reckoning based
Distributed Multi-Player Games, Proceedings of ACM
SIGCOMM 2004 Workshop on Network and System Support
for Games (NetGames 2004), Aug. 2004.
[2] L. Gautier and C. Diot, Design and Evaluation of MiMaze,
a Multiplayer Game on the Internet, in Proc. of IEEE
Multimedia (ICMCS"98), 1998.
[3] M. Mauve, Consistency in Replicated Continuous
Interactive Media, in Proc. of the ACM Conference on
Computer Supported Cooperative Work (CSCW"00), 2000,
pp. 181-190.
[4] S.K. Singhal and D.R. Cheriton, Exploiting Position
History for Efficient Remote Rendering in Networked
Virtual Reality, Presence: Teleoperators and Virtual
Environments, vol. 4, no. 2, pp. 169-193, 1995.
[5] C. Diot and L. Gautier, A Distributed Architecture for
Multiplayer Interactive Applications on the Internet, in
IEEE Network Magazine, 1999, vol. 13, pp. 6-15.
[6] L. Pantel and L.C. Wolf, On the Impact of Delay on
Real-Time Multiplayer Games, in Proc. of ACM
NOSSDAV"02, May 2002.
[7] Y. Lin, K. Guo, and S. Paul, Sync-MS: Synchronized
Messaging Service for Real-Time Multi-Player Distributed
Games, in Proc. of 10th IEEE International Conference on
Network Protocols (ICNP), Nov 2002.
[8] K. Guo, S. Mukherjee, S. Rangarajan, and S. Paul, A Fair
Message Exchange Framework for Distributed Multi-Player
Games, in Proc. of NetGames2003, May 2003.
[9] N. E. Baughman and B. N. Levine, Cheat-Proof Playout for
Centralized and Distributed Online Games, in Proc. of IEEE
INFOCOM"01, April 2001.
[10] M. Allman and V. Paxson, On Estimating End-to-End
Network Path Properties, in Proc. of ACM SIGCOMM"99,
Sept. 1999.
[11] BZFlag Forum, BZFlag Game, URL:
http://www.bzflag.org.
[12] National Institute of Standards and Technology, NIST Net,
URL: http://snad.ncsl.nist.gov/nistnet/.
| fairness;dead-reckoning vector;export error;network delay;budget based algorithm;clock synchronization;distribute multi-player game;bucket synchronization;mean error;distributed multi-player game;quantization;dead-reckon;scheduling algorithm;accuracy
train_C-53 | Globally Synchronized Dead-Reckoning with Local Lag for Continuous Distributed Multiplayer Games | Dead-Reckoning (DR) is an effective method to maintain consistency for Continuous Distributed Multiplayer Games (CDMG). Since DR can filter most unnecessary state updates and improve the scalability of a system, it is widely used in commercial CDMG. However, DR cannot maintain high consistency, and this constrains its application in highly interactive games. With the help of global synchronization, DR can achieve higher consistency, but it still cannot eliminate before inconsistency. In this paper, a method named Globally Synchronized DR with Local Lag (GS-DR-LL), which combines local lag and Globally Synchronized DR (GS-DR), is presented. Performance evaluation shows that GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag. | 1. INTRODUCTION
Nowadays, many distributed multiplayer games adopt replicated
architectures. In such games, the states of entities are changed not
only by the operations of players, but also by the passing of time
[1, 2]. These games are referred to as Continuous Distributed
Multiplayer Games (CDMG). Like other distributed applications,
CDMG also suffer from the consistency problem caused by
network transmission delay. Although new network techniques
(e.g. QoS) can reduce or at least bound the delay, they cannot completely eliminate it, as the speed of light imposes a physical limit; for instance, 100 ms is needed for light to
propagate from Europe to Australia [3]. There are many studies
about the effects of network transmission delay in different
applications [4, 5, 6, 7]. In replication based games, network
transmission delay makes the states of local and remote sites inconsistent, which can cause serious problems, such as reducing
the fairness of a game and leading to paradoxical situations etc. In
order to maintain consistency for distributed systems, many
different approaches have been proposed, among which local lag
and Dead-Reckoning (DR) are two representative approaches.
Mauve et al [1] proposed local lag to maintain high consistency
for replicated continuous applications. It synchronizes the
physical clocks of all sites in a system. After an operation is
issued at local site, it delays the execution of the operation for a
short time. During this short time period the operation is
transmitted to remote sites, and all sites try to execute the
operation at a same physical time. In order to tackle the
inconsistency caused by exceptional network transmission delay,
a time warp based mechanism is proposed to repair the state.
Local lag can achieve significant high consistency, but it is based
on operation transmission, which forwards every operation on a
shared entity to remote sites. Since operation transmission
mechanism requests that all operations should be transmitted in a
reliable way, message filtering is difficult to be deployed and the
scalability of a system is limited.
DR is based on state transmission mechanism. In addition to the
high fidelity model that maintains the accurate states of its own
entities, each site also has a DR model that estimates the states of
all entities (including its own entities). After each update of its
own entities, a site compares the accurate state with the estimated
one. If the difference exceeds a pre-defined threshold, a state
update would be transmitted to all sites and all DR models would
be corrected. Through state estimation, DR can not only maintain
consistency but also decrease the number of transmitted state
updates. Compared with aforementioned local lag, DR cannot
maintain high consistency. Due to network transmission delay,
when a remote site receives a state update of an entity the state of
the entity might have changed at the site sending the state update.
In order to make DR maintain high consistency, Aggarwal et al [8]
proposed Globally Synchronized DR (GS-DR), which
synchronizes the physical clocks of all sites in a system and adds
time stamps to transmitted state updates. Detailed description of
GS-DR can be found in Section 3.
When a state update is available, GS-DR immediately updates the
state of local site and then transmits the state update to remote
sites, which causes the states of local site and remote sites to be
inconsistent in the transmission procedure. Thus with the
synchronization of physical clocks, GS-DR can eliminate after
inconsistency, but it cannot tackle before inconsistency [8]. In this
paper, we propose a new method named globally synchronized
DR with Local Lag (GS-DR-LL), which combines local lag and
GS-DR. By delaying the update to local site, GS-DR-LL can
achieve higher consistency than GS-DR. The rest of this paper is
organized as follows: Section 2 gives the definition of consistency
and corresponding metrics; the cause of the inconsistency of DR
is analyzed in Section 3; Section 4 describes how GS-DR-LL
works; performance evaluation is presented in Section 5; Section
6 concludes the paper.
2. CONSISTENCY DEFINITIONS AND
METRICS
The consistency of replicated applications has already been well
defined in the discrete domain [9, 10, 11, 12], but little related work has been done in the continuous domain. Mauve et al [1] have given a
definition of consistency for replicated applications in continuous
domain, but the definition is based on operation transmission and
it is difficult for the definition to describe state transmission based
methods (e.g. DR). Here, we present an alternative definition of
consistency in continuous domain, which suits state transmission
based methods well.
Given two distinct sites i and j, which have replicated a shared
entity e, at a given time t, the states of e at sites i and j are Si(t)
and Sj(t).
DEFINITION 1: the states of e at sites i and j are consistent at
time t, iff:
De(i, j, t) = |Si(t) - Sj(t)| = 0 (1)
DEFINITION 2: the states of e at sites i and j are consistent
between time t1 and t2 (t1 < t2), iff:
De(i, j, t1, t2) = ∫_{t1}^{t2} |Si(t) - Sj(t)| dt = 0 (2)
In this paper, formulas (1) and (2) are used to determine whether
the states of shared entities are consistent between local and
remote sites. Due to network transmission delay, it is difficult to
maintain the states of shared entities absolutely consistent.
Corresponding metrics are needed to measure the consistency of
shared entities between local and remote sites.
De(i, j, t) can be used as a metric to measure the degree of
consistency at a certain time point. If De(i, j, t1) > De(i, j, t2), it
can be stated that between sites i and j, the consistency of the
states of entity e at time point t1 is lower than that at time point t2.
If De(i, j, t) > De(l, k, t), it can be stated that, at time point t, the
consistency of the states of entity e between sites i and j is lower
than that between sites l and k.
Similarly, De(i, j, t1, t2) can be used as a metric to measure the
degree of consistency in a certain time period. If De(i, j, t1, t2) >
De(i, j, t3, t4) and |t1 - t2| = |t3 - t4|, it can be stated that between
sites i and j, the consistency of the states of entity e between time
points t1 and t2 is lower than that between time points t3 and t4. If
De(i, j, t1, t2) > De(l, k, t1, t2), it can be stated that between time
points t1 and t2, the consistency of the states of entity e between
sites i and j is lower than that between sites l and k.
In DR, the states of entities are composed of the positions and
orientations of entities and some prediction related parameters
(e.g. the velocities of entities). Given two distinct sites i and j,
which have replicated a shared entity e, at a given time point t, the
positions of e at sites i and j are (xit, yit, zit) and (xjt, yjt, zjt), De(i, j, t) and De(i, j, t1, t2) can be calculated as:
De(i, j, t) = sqrt((xit - xjt)^2 + (yit - yjt)^2 + (zit - zjt)^2) (3)
De(i, j, t1, t2) = ∫_{t1}^{t2} sqrt((xit - xjt)^2 + (yit - yjt)^2 + (zit - zjt)^2) dt (4)
In this paper, formulas (3) and (4) are used as metrics to measure
the consistency of shared entities between local and remote sites.
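As an illustration, both metrics can be computed from sampled trajectories with the short Python sketch below; the fixed-step summation that approximates the integral in formula (4) is our assumption, since the metric itself is defined over continuous time:

import math

def point_inconsistency(pos_i, pos_j):
    # Formula (3): Euclidean distance between the replicas' (x, y, z) positions.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pos_i, pos_j)))

def interval_inconsistency(track_i, track_j, dt):
    # Approximation of formula (4): positional difference accumulated over
    # [t1, t2], using positions sampled every dt seconds at both sites.
    return dt * sum(point_inconsistency(p, q) for p, q in zip(track_i, track_j))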
3. INCONSISTENCY IN DR
The inconsistency in DR can be divided into two sections by the
time point when a remote site receives a state update. The
inconsistency before a remote site receives a state update is
referred to as before inconsistency, and the inconsistency after a
remote site receives a state update is referred to as after
inconsistency. Before inconsistency and after inconsistency are
similar with the terms before export error and after export error
[8].
After inconsistency is caused by the lack of synchronization
between the physical clocks of all sites in a system. By employing
physical clock synchronization, GS-DR can accurately calculate
the states of shared entities after receiving state updates, and it
can eliminate after inconsistency. Before inconsistency is caused
by two reasons. The first reason is the delay of sending state
updates, as local site does not send a state update unless the
difference between accurate state and the estimated one is larger
than a predefined threshold. The second reason is network
transmission delay, as a shared entity can be synchronized only
after remote sites receiving corresponding state update.
Figure 1. The paths of a shared entity by using GS-DR.
For example, it is assumed that the velocity of a shared entity is
the only parameter to predict the entity"s position, and current
position of the entity can be calculated by its last position and
current velocity. To simplify the description, it is also assumed
that there are only two sites i and j in a game session, site i acts as
local site and site j acts as remote site, and t1 is the time point the
local site updates the state of the shared entity. Figure 1 illustrates
the paths of the shared entity at local site and remote site in x axis
by using GS-DR. At the beginning, the positions of the shared
entity are the same at sites i and j and the velocity of the shared
entity is 0. Before time point t0, the paths of the shared entity at
sites i and j in x coordinate are exactly the same. At time point t0,
the player at site i issues an operation, which changes the velocity
in x axis to v0. Site i first periodically checks whether the
difference between the accurate position of the shared entity and
the estimated one, 0 in this case, is larger than a predefined
threshold. At time point t1, site i finds that the difference is larger
than the threshold and it sends a state update to site j. The state
update contains the position and velocity of the shared entity at
time point t1 and time point t1 is also attached as a timestamp. At
time point t2, the state update reaches site j, and the received state
and the time deviation between time points t1 and t2 are used to
calculate the current position of the shared entity. Then site j
updates its replicated entity"s position and velocity, and the paths
of the shared entity at sites i and j overlap again.
From Figure 1, it can be seen that the after inconsistency is 0, and
the before inconsistency is composed of two parts, D1 and D2. D1
is De(i, j, t0, t1) and it is caused by the state filtering mechanism
of DR. D2 is De(i, j, t1, t2) and it is caused by network
transmission delay.
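The way site j advances the received, timestamped state update to the current time in this example amounts to a simple extrapolation. The following Python sketch shows only the velocity-based prediction used here; an actual DR model may include further parameters such as acceleration:

def gs_dr_extrapolate(update_pos, update_vel, update_time, now):
    # Advance the reported position by the elapsed time since the timestamp,
    # which the synchronized physical clocks make directly comparable.
    dt = now - update_time
    return tuple(p + v * dt for p, v in zip(update_pos, update_vel))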
4. GLOBALLY SYNCHRONIZED DR
WITH LOCAL LAG
From the analysis in Section 3, it can be seen that GS-DR can
eliminate after inconsistency, but it cannot effectively tackle
before inconsistency. In order to decrease before inconsistency,
we propose GS-DR-LL, which combines GS-DR with local lag
and can effectively decrease before inconsistency.
In GS-DR-LL, the state of a shared entity at a certain time point t
is notated as S = (t, pos, par 1, par 2, ..., par n), in which pos
means the position of the entity and par 1 to par n means the
parameters to calculate the position of the entity. In order to
simplify the description of GS-DR-LL, it is assumed that there is only one shared entity and one remote site.
At the beginning of a game session, the states of the shared entity
are the same at local and remote sites, with the same position p0
and parameters pars0 (pars represents all the parameters). Local
site keeps three states: the real state of the entity Sreal, the
predicted state at remote site Sp-remote, and the latest state updated
to remote site Slate. Remote site keep only one state Sremote, which
is the real state of the entity at remote site. Therefore, at the
beginning of a game session Sreal = Sp-remote = Slate = Sremote = (t0,
p0, pars0). In GS-DR-LL, it is assumed that the physical clocks of
all sites are synchronized with a deviation of less than 50 ms
(using NTP or GPS clock). Furthermore, it is necessary to make
corrections to a physical clock in a way that does not result in
decreasing the value of the clock, for example by slowing down
or halting the clock for a period of time. Additionally it is
assumed that the game scene is updated at a fixed frequency and
T stands for the time interval between two consecutive updates,
for example, if the scene update frequency is 50 Hz, T would be
20 ms. n stands for the lag value used by local lag, and t stands for
current physical time.
After updating the scene, local site waits for a constant amount of
time T. During this time period, local site receives the operations
of the player and stores them in a list L. All operations in L are
sorted by their issue time. At the end of time period T, local site
executes all stored operations, whose issue time is between t - T
and t, on Slate to get the new Slate, and it also executes all stored
operations, whose issue time is between t - (n + T) and t - n, on
Sreal to get the new Sreal. Additionally, local site uses Sp-remote and
corresponding prediction methods to estimate the new Sp-remote.
After new Slate, Sreal, and Sp-remote are calculated, local site
checks whether the difference between the new Slate and Sp-remote exceeds the predefined threshold. If YES, local site sends
new Slate to remote site and Sp-remote is updated with new Slate. Note
that the timestamp of the sent state update is t. After that, local
site uses Sreal to update local scene and deletes the operations,
whose issue time is less than t - n, from L.
After updating the scene, remote site waits for a constant amount
of time T. During this time period, remote site stores received
state update(s) in a list R. All state updates in R are sorted by their
timestamps. At the end of time period T, remote site checks
whether R contains state updates whose timestamps are less than t
- n. Note that t is current physical time and it increases during the
transmission of state updates. If YES, it uses these state updates
and corresponding prediction methods to calculate the new Sremote,
else it uses Sremote and corresponding prediction methods to estimate the new Sremote. After that, remote site uses Sremote to update its local scene and deletes the state updates whose timestamps are less than t - n from R.
From the above description, it can be seen that the main
difference between GS-DR and GS-DR-LL is that GS-DR-LL
uses the operations, whose issue time is less than t - n, to
calculate Sreal. That means that the scene seen by local player is
the results of the operations issued a period of time (i.e. n) ago.
Meanwhile, if the results of issued operations make the difference
between Slate and Sp-remote exceed a predefined threshold,
corresponding state updates are sent to remote sites immediately.
The aforementioned is the basic mechanism of GS-DR-LL. In the
case with multiple shared entities and remote sites, local site
calculates Slate, Sreal, and Sp-remote for different shared entities
respectively; if multiple Slate need to be transmitted, local site packs them into one state update and then sends it to all remote sites.
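The per-update behaviour of the two kinds of sites described above can be condensed into the following sketch. This is a minimal Python illustration rather than the actual implementation: the operation and state-update objects and the helpers apply_ops, apply_updates, predict, difference, and send_update are placeholders for whatever representations a particular game uses:

def local_site_step(t, T, n, L, S_real, S_late, S_p_remote, threshold,
                    apply_ops, predict, difference, send_update):
    # Operations issued during the last interval are applied to Slate at once.
    S_late = apply_ops(S_late, [op for op in L if t - T < op.issue_time <= t])
    # The locally rendered state Sreal lags by n: only operations whose issue
    # time lies in (t - (n + T), t - n] are applied to it now.
    S_real = apply_ops(S_real, [op for op in L
                                if t - (n + T) < op.issue_time <= t - n])
    # Dead-reckon the state that remote sites are currently displaying.
    S_p_remote = predict(S_p_remote, T)
    # A timestamped state update is sent only when the DR threshold is exceeded.
    if difference(S_late, S_p_remote) > threshold:
        send_update(S_late, timestamp=t)
        S_p_remote = S_late
    # Sreal is used to draw the local scene; operations older than t - n are dropped.
    L[:] = [op for op in L if op.issue_time >= t - n]
    return S_real, S_late, S_p_remote

def remote_site_step(t, T, n, R, S_remote, apply_updates, predict):
    # Apply only the state updates that are at least n old; otherwise keep predicting.
    due = [u for u in R if u.timestamp < t - n]
    S_remote = apply_updates(S_remote, due) if due else predict(S_remote, T)
    R[:] = [u for u in R if u.timestamp >= t - n]
    return S_remote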
Figure 2 illustrates the paths of a shared entity at local site and
remote site while using GS-DR and GS-DR-LL. All conditions
are the same with the conditions used in the aforementioned
example describing GS-DR. Compared with t1, t2, and n, T (i.e.
the time interval between two consecutive updates) is quite small
and it is ignored in the following description.
At time point t0, the player at site i issues an operation, which
changes the velocity of the shared entity from 0 to v0. By using
GS-DR-LL, the results of the operation are updated to local scene
at time point t0 + n. However the operation is immediately used
to calculate Slate, thus in spite of GS-DR or GS-DR-LL, at time
point t1 site i finds that the difference between accurate position
and the estimated one is larger than the threshold and it sends a
state update to site j. At time point t2, the state update is received
by remote site j. Assuming that the timestamp of the state update
is less than t - n, site j uses it to update local scene immediately.
The 5th Workshop on Network & System Support for Games 2006 - NETGAMES 2006 3
With GS-DR, the time period of before inconsistency is (t2 - t1) +
(t1 - t0), whereas it decreases to (t2 - t1 - n) + (t1 - t0) with the
help of GS-DR-LL. Note that t2 - t1 is caused by network
transmission delay and t1 - t0 is caused by the state filtering
mechanism of DR. If n is larger than t2 - t1, GS-DR-LL can
eliminate the before inconsistency caused by network
transmission delay, but it cannot eliminate the before
inconsistency caused by the state filtering mechanism of DR
(unless the threshold is set to 0). In highly interactive games,
which require high consistency and in which GS-DR-LL might be employed, the results of operations are quite difficult to estimate and a small threshold must be used. Thus, in practice,
most before inconsistency is caused by network transmission
delay and GS-DR-LL has the capability to eliminate such before
inconsistency.
Figure 2. The paths of a shared entity by using GS-DR and
GS-DR-LL.
For GS-DR-LL, the selection of the lag value n is very important, and
both network transmission delay and the effects of local lag on
interaction should be considered. According to the results of HCI
related research, humans cannot perceive the delay imposed on
a system when it is smaller than a specific value, and the specific
value depends on both the system and the task. For example, in a
graphical user interface a delay of approximately 150 ms cannot
be noticed for keyboard interaction and the threshold increases to
195 ms for mouse interaction [13], and a delay of up to 50 ms is
uncritical for a car-racing game [5]. Thus if network transmission
delay is less than the specific value of a game system, n can be set
to the specific value. Else n can be set in terms of the effects of
local lag on the interaction of a system [14]. In the case that a
large n must be used, some HCI methods (e.g. echo [15]) can be
used to relieve the negative effects of the large lag. In the case
that n is larger than the network transmission delay, GS-DR-LL
can eliminate most before inconsistency. Traditional local lag
requests that the lag value must be larger than typical network
transmission delay, otherwise state repairs would flood the system.
However GS-DR-LL allows n to be smaller than typical network
transmission delay. In this case, the before inconsistency caused
by network transmission delay still exists, but it can be decreased.
5. PERFORMANCE EVALUATION
In order to evaluate GS-DR-LL and compare it with GS-DR in a
real application, we implemented both methods in a networked game named Spaceship [1]. Spaceship is a very simple
networked computer game, in which players can control their
spaceships to accelerate, decelerate, turn, and shoot spaceships
controlled by remote players with laser beams. If a spaceship is
hit by a laser beam, its life points decrease one. If the life points
of a spaceship decrease to 0, the spaceship is removed from the
game and the player controlling the spaceship loses the game.
In our practical implementation, GS-DR-LL and GS-DR
coexisted in the game system, and the test bed was composed of
two computers connected by 100 Mbps switched Ethernet, with one computer acting as the local site and the other acting as the remote site. In
order to simulate network transmission delay, a specific module
was developed to delay all packets transmitted between the two
computers in terms of a predefined delay value.
The main purpose of performance evaluation is to study the
effects of GS-DR-LL on decreasing before inconsistency in a
particular game system under different thresholds, lags, and
network transmission delays. Two different thresholds were used
in the evaluation, one is 10 pixels deviation in position or 15
degrees deviation in orientation, and the other is 4 pixels or 5
degrees. Six different combinations of lag and network
transmission delay were used in the evaluation and they could be
divided into two categories. In one category, the lag was fixed at
300 ms and three different network transmission delays (100 ms,
300 ms, and 500 ms) were used. In the other category, the
network transmission delay was fixed at 800 ms and three
different lags (100 ms, 300 ms, and 500 ms) were used. Therefore
the total number of settings used in the evaluation was 12 (2 × 6).
The procedure of performance evaluation was composed of three
steps. In the first step, two participants were employed to play the
game, and the operation sequences were recorded. Based on the
records, a sub operation sequence, which lasted about one minute
and included different operations (e.g. accelerate, decelerate, and
turn), was selected. In the second step, the physical clocks of the
two computers were synchronized first. Under different settings
and consistency maintenance approaches, the selected sub
operation sequence was played back on one computer, and it
drove the two spaceships, one local and the other remote,
to move. Meanwhile, the tracks of the spaceships on the two
computers were recorded separately and they were called as a
track couple. Since there are 12 settings and 2 consistency
maintenance approaches, the total number of recorded track
couples was 24. In the last step, to each track couple, the
inconsistency between them was calculated, and the unit of
inconsistency was pixel. Since the physical clocks of the two
computers were synchronized, the calculation of inconsistency
was quite simple. The inconsistency at a particular time point was
the distance between the positions of the two spaceships at that
time point (i.e. formula (3)).
In order to show the results of inconsistency in a clear way, only
parts of the results, which last about 7 seconds, are used in the
following figures, and the figures show almost the same parts of
the results. Figures 3, 4, and 5 show the results of inconsistency
when the lag is fixed at 300 ms and the network transmission
delays are 100, 300, and 500 ms. It can be seen that inconsistency does exist, but most of the time it is 0.
Additionally, inconsistency increases with the network
transmission delay, but decreases with the threshold. Compared
with GS-DR, GS-DR-LL can decrease more inconsistency, and it
eliminates most inconsistency when the network transmission
delay is 100 ms and the threshold is 4 pixels or 5 degrees.
According to the prediction and state filtering mechanisms of DR,
inconsistency cannot be completely eliminated if the threshold is
not 0. With the definitions of before inconsistency and after
inconsistency, it can be indicated that GS-DR and GS-DR-LL
both can eliminate after inconsistency, and GS-DR-LL can
effectively decrease before inconsistency. It can be foreseen that
with proper lag and threshold (e.g. the lag is larger than the
network transmission delay and the threshold is 0), GS-DR-LL
can even eliminate before inconsistency.
[Two panels plot inconsistency (pixels) against time (seconds) for GS-DR-LL and GS-DR; the left panel uses a threshold of 10 pixels or 15 degrees, the right panel a threshold of 4 pixels or 5 degrees.]
Figure 3. Inconsistency when the network transmission delay is 100 ms and the lag is 300 ms.
[Two panels plot inconsistency (pixels) against time (seconds) for GS-DR-LL and GS-DR; the left panel uses a threshold of 10 pixels or 15 degrees, the right panel a threshold of 4 pixels or 5 degrees.]
Figure 4. Inconsistency when the network transmission delay is 300 ms and the lag is 300 ms.
[Two panels plot inconsistency (pixels) against time (seconds) for GS-DR-LL and GS-DR; the left panel uses a threshold of 10 pixels or 15 degrees, the right panel a threshold of 4 pixels or 5 degrees.]
Figure 5. Inconsistency when the network transmission delay is 500 ms and the lag is 300 ms.
Figures 6, 7, and 8 show the results of inconsistency when the network transmission delay is fixed at 800 ms and the lags are 100, 300, and 500 ms. It can be seen that with GS-DR-LL before inconsistency decreases with the lag. In traditional local lag, the lag must be set to a value larger than the typical network transmission delay, otherwise the state repairs would flood the system. From the above results it can be seen that no such constraint exists on the selection of the lag: with GS-DR-LL a system works fine even if the lag is much smaller than the network transmission delay.
The 5th Workshop on Network & System Support for Games 2006 - NETGAMES 2006 5
All of the above results indicate that both GS-DR and GS-DR-LL can eliminate after inconsistency, that GS-DR-LL can effectively decrease before inconsistency, and that the effects increase with the lag.
[Two panels plot inconsistency (pixels) against time (seconds) for GS-DR-LL and GS-DR; the left panel uses a threshold of 10 pixels or 15 degrees, the right panel a threshold of 4 pixels or 5 degrees.]
Figure 6. Inconsistency when the network transmission delay is 800 ms and the lag is 100 ms.
[Two panels plot inconsistency (pixels) against time (seconds) for GS-DR-LL and GS-DR; the left panel uses a threshold of 10 pixels or 15 degrees, the right panel a threshold of 4 pixels or 5 degrees.]
Figure 7. Inconsistency when the network transmission delay is 800 ms and the lag is 300 ms.
[Two panels plot inconsistency (pixels) against time (seconds) for GS-DR-LL and GS-DR; the left panel uses a threshold of 10 pixels or 15 degrees, the right panel a threshold of 4 pixels or 5 degrees.]
Figure 8. Inconsistency when the network transmission delay is 800 ms and the lag is 500 ms.
6. CONCLUSIONS
Compared with traditional DR, GS-DR can eliminate after
inconsistency through the synchronization of physical clocks, but
it cannot tackle before inconsistency, which would significantly
influence the usability and fairness of a game. In this paper, we
proposed a method named GS-DR-LL, which combines local lag
and GS-DR, to decrease before inconsistency through delaying
updating the execution results of local operations to local scene.
Performance evaluation indicates that GS-DR-LL can effectively
decrease before inconsistency, and the effects increase with the
lag.
GS-DR-LL has significant implications to consistency
maintenance approaches. First, GS-DR-LL shows that improved
DR can not only eliminate after inconsistency but also decrease
before inconsistency; with a proper lag and threshold, it can even eliminate before inconsistency. As a result, the application of DR
can be greatly broadened and it could be used in the systems
which request high consistency (e.g. highly interactive games).
Second, GS-DR-LL shows that by combining local lag and
GS-DR, the constraint on selecting the lag value is removed and a lag that is smaller than the typical network transmission delay can
be used. As a result, the application of local lag can be greatly
broadened and it could be used in the systems which have large
typical network transmission delay (e.g. Internet based games).
7. REFERENCES
[1] Mauve, M., Vogel, J., Hilt, V., and Effelsberg, W. Local-Lag
and Timewarp: Providing Consistency for Replicated
Continuous Applications. IEEE Transactions on Multimedia,
Vol. 6, No.1, 2004, 47-57.
[2] Li, F.W., Li, L.W., and Lau, R.W. Supporting Continuous
Consistency in Multiplayer Online Games. In Proc. of ACM
Multimedia, 2004, 388-391.
[3] Pantel, L. and Wolf, L. On the Suitability of Dead
Reckoning Schemes for Games. In Proc. of NetGames, 2002,
79-84.
[4] Alhalabi, M.O., Horiguchi, S., and Kunifuji, S. An
Experimental Study on the Effects of Network Delay in
Cooperative Shared Haptic Virtual Environment. Computers
and Graphics, Vol. 27, No. 2, 2003, 205-213.
[5] Pantel, L. and Wolf, L.C. On the Impact of Delay on
Real-Time Multiplayer Games. In Proc. of NOSSDAV, 2002, 23-29.
[6] Meehan, M., Razzaque, S., Whitton, M.C., and Brooks, F.P.
Effect of Latency on Presence in Stressful Virtual
Environments. In Proc. of IEEE VR, 2003, 141-148.
[7] Bernier, Y.W. Latency Compensation Methods in
Client/Server In-Game Protocol Design and Optimization. In
Proc. of Game Developers Conference, 2001.
[8] Aggarwal, S., Banavar, H., and Khandelwal, A. Accuracy in
Dead-Reckoning based Distributed Multi-Player Games. In
Proc. of NetGames, 2004, 161-165.
[9] Raynal, M. and Schiper, A. From Causal Consistency to
Sequential Consistency in Shared Memory Systems. In Proc.
of Conference on Foundations of Software Technology and
Theoretical Computer Science, 1995, 180-194.
[10] Ahamad, M., Burns, J.E., Hutto, P.W., and Neiger, G. Causal
Memory. In Proc. of International Workshop on Distributed
Algorithms, 1991, 9-30.
[11] Herlihy, M. and Wing, J. Linearizability: a Correctness
Condition for Concurrent Objects. ACM Transactions on
Programming Languages and Systems, Vol. 12, No. 3, 1990,
463-492.
[12] Misra, J. Axioms for Memory Access in Asynchronous
Hardware Systems. ACM Transactions on Programming
Languages and Systems, Vol. 8, No. 1, 1986, 142-153.
[13] Dabrowski, J.R. and Munson, E.V. Is 100 Milliseconds too
Fast. In Proc. of SIGCHI Conference on Human Factors in
Computing Systems, 2001, 317-318.
[14] Chen, H., Chen, L., and Chen, G.C. Effects of Local-Lag
Mechanism on Cooperation Performance in a Desktop CVE
System. Journal of Computer Science and Technology, Vol.
20, No. 3, 2005, 396-401.
[15] Chen, L., Chen, H., and Chen, G.C. Echo: a Method to
Improve the Interaction Quality of CVEs. In Proc. of IEEE
VR, 2005, 269-270.
The 5th Workshop on Network & System Support for Games 2006 - NETGAMES 2006 7 | local lag;physical clock;time warp;usability and fairness;continuous replicate application;network transmission delay;distribute multi-player game;accurate state;gs-dr-ll;dead-reckon;multiplayer game;consistency;correction |
train_C-54 | Remote Access to Large Spatial Databases | Enterprises in the public and private sectors have been making their large spatial data archives available over the Internet. However, interactive work with such large volumes of online spatial data is a challenging task. We propose two efficient approaches to remote access to large spatial data. First, we introduce a client-server architecture where the work is distributed between the server and the individual clients for spatial query evaluation, data visualization, and data management. We enable the minimization of the requirements for system resources on the client side while maximizing system responsiveness as well as the number of connections one server can handle concurrently. Second, for prolonged periods of access to large online data, we introduce APPOINT (an Approach for Peer-to-Peer Offloading the INTernet). This is a centralized peer-to-peer approach that helps Internet users transfer large volumes of online data efficiently. In APPOINT, active clients of the clientserver architecture act on the server"s behalf and communicate with each other to decrease network latency, improve service bandwidth, and resolve server congestions. | 1. INTRODUCTION
In recent years, enterprises in the public and private
sectors have provided access to large volumes of spatial data
over the Internet. Interactive work with such large volumes
of online spatial data is a challenging task. We have been
developing an interactive browser for accessing spatial online
databases: the SAND (Spatial and Non-spatial Data)
Internet Browser. Users of this browser can interactively and
visually manipulate spatial data remotely. Unfortunately,
interactive remote access to spatial data slows to a crawl
without proper data access mechanisms. We developed two
separate methods for improving the system performance,
together, form a dynamic network infrastructure that is highly
scalable and provides a satisfactory user experience for
interactions with large volumes of online spatial data.
The core functionality responsible for the actual database
operations is performed by the server-based SAND system.
SAND is a spatial database system developed at the
University of Maryland [12]. The client-side SAND Internet
Browser provides a graphical user interface to the facilities
of SAND over the Internet. Users specify queries by
choosing the desired selection conditions from a variety of menus
and dialog boxes.
SAND Internet Browser is Java-based, which makes it
deployable across many platforms. In addition, since Java has
often been installed on target computers beforehand, our
clients can be deployed on these systems with little or no
need for any additional software installation or
customization. The system can start being utilized immediately
without any prior setup which can be extremely beneficial in
time-sensitive usage scenarios such as emergencies.
There are two ways to deploy SAND. First, any standard
Web browser can be used to retrieve and run the client piece
(SAND Internet Browser) as a Java application or an applet.
This way, users across various platforms can continuously
access large spatial data at a remote location with little or
no need for any preceding software installation. The second
option is to use a stand-alone SAND Internet Browser along
with a locally-installed Internet-enabled database
management system (server piece). In this case, the SAND Internet
Browser can still be utilized to view data from remote
locations. However, frequently accessed data can be downloaded
to the local database on demand, and subsequently accessed
locally. Power users can also upload large volumes of spatial
data back to the remote server using this enhanced client.
We focused our efforts in two directions. We first aimed at
developing a client-server architecture with efficient caching
methods to balance local resources on one side and the
significant latency of the network connection on the other. The
low bandwidth of this connection is the primary concern in
both cases. The outcome of this research primarily addresses
the issues of our first type of usage (i.e., as a remote browser
application or an applet) for our browser and other similar
applications. The second direction aims at helping users
that wish to manipulate large volumes of online data for
prolonged periods. We have developed a centralized
peer-to-peer approach to provide the users with the ability to
transfer large volumes of data (i.e., whole data sets to the
local database) more efficiently by better utilizing the
distributed network resources among active clients of a
client-server architecture. We call this architecture APPOINT (Approach for Peer-to-Peer Offloading the INTernet). The results of this research primarily address the issues of the
second type of usage for our SAND Internet Browser (i.e.,
as a stand-alone application).
The rest of this paper is organized as follows. Section 2
describes our client-server approach in more detail. Section 3
focuses on APPOINT, our peer-to-peer approach. Section 4
discusses our work in relation to existing work. Section 5
outlines a sample SAND Internet Browser scenario for both
of our remote access approaches. Section 6 contains
concluding remarks as well as future research directions.
2. THE CLIENT-SERVER APPROACH
Traditionally, Geographic Information Systems (GIS)
such as ArcInfo from ESRI [2] and many spatial databases
are designed to be stand-alone products. The spatial
database is kept on the same computer or local area network
from where it is visualized and queried. This architecture
allows for instantaneous transfer of large amounts of data
between the spatial database and the visualization module
so that it is perfectly reasonable to use large-bandwidth
protocols for communication between them. There are however
many applications where a more distributed approach is
desirable. In these cases, the database is maintained in one
location while users need to work with it from possibly distant
sites over the network (e.g., the Internet). These connections
can be far slower and less reliable than local area networks
and thus it is desirable to limit the data flow between the
database (server) and the visualization unit (client) in order
to get a timely response from the system.
Our client-server approach (Figure 1) allows the actual
database engine to be run in a central location maintained
by spatial database experts, while end users acquire a
Javabased client component that provides them with a gateway
into the SAND spatial database engine.
Our client is more than a simple image viewer. Instead, it
operates on vector data allowing the client to execute many
operations such as zooming or locational queries locally.
Figure 1: SAND Internet Browser - Client-Server architecture.
In essence, a simple spatial database engine is run on the client.
This database keeps a copy of a subset of the whole database
whose full version is maintained on the server. This is a
concept similar to ‘caching". In our case, the client acts as
a lightweight server in that given data, it evaluates queries
and provides the visualization module with objects to be
displayed. It initiates communication with the server only
in cases where it does not have enough data stored locally.
Since the locally run database is only updated when
additional or newer data is needed, our architecture allows the
system to minimize the network traffic between the client
and the server when executing the most common user-side
operations such as zooming and panning. In fact, as long
as the user explores one region at a time (i.e., he or she is
not panning all over the database), no additional data needs
to be retrieved after the initial population of the client-side
database. This makes the system much more responsive
than the Web mapping services. Due to the complexity of
evaluating arbitrary queries (i.e., more complex queries than
window queries that are needed for database visualization),
we do not perform user-specified queries on the client. All
user queries are still evaluated on the server side and the
results are downloaded onto the client for display. However,
assuming that the queries are selective enough (i.e., there are
far fewer elements returned from the query than the number
of elements in the database), the response delay is usually
within reasonable limits.
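The resulting division of labour can be pictured by how the client answers a window query for the current view. The sketch below is only an illustration in Python; the local_db object and the fetch_from_server callback are hypothetical stand-ins for the client-side database and the request sent to the SAND server:

def window_query(view_rect, local_db, fetch_from_server):
    # Zooming and panning inside an already-covered region never touch the network.
    if not local_db.covers(view_rect):
        new_objects = fetch_from_server(view_rect)   # fetch only the missing region
        local_db.insert(new_objects, view_rect)      # remember what is now covered
    return local_db.query(view_rect)                 # evaluated by the client-side engine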
2.1 Client-Server Communication
As mentioned above, the SAND Internet Browser is a
client piece of the remotely accessible spatial database server
built around the SAND kernel. In order to communicate
with the server, whose application programming interface
(API) is a Tcl-based scripting language, a servlet specifically
designed to interface the SAND Internet Browser with the
SAND kernel is required on the server side. This servlet
listens on a given port of the server for incoming requests from
the client. It translates these requests into the SAND-Tcl
language. Next, it transmits these SAND-Tcl commands or
scripts to the SAND kernel. After results are provided by
the kernel, the servlet fetches and processes them, and then
sends those results back to the originating client.
Once the Java servlet is launched, it waits for a client to
initiate a connection. It handles both requests for the actual
client Java code (needed when the client is run as an applet)
and the SAND traffic. When the client piece is launched,
it connects back to the SAND servlet. The communication is driven by the client piece; the server only responds to the client's queries. The client initiates a transaction by
sending a query. The Java servlet parses the query and
creates a corresponding SAND-Tcl expression or script in
the SAND kernel"s native format. It is then sent to the
kernel for evaluation or execution. The kernel"s response
naturally depends on the query and can be a boolean value,
a number or a string representing a value (e.g., a default
color) or, a whole tuple (e.g., in response to a nearest tuple
query). If a script was sent to the kernel (e.g., requesting
all the tuples matching some criteria), then an arbitrary
amount of data can be returned by the SAND server. In this
case, the data is first compressed before it is sent over the
network to the client. The data stream gets decompressed
at the client before the results are parsed.
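The round trip just described can be summarized by the following sketch. It is not the actual Java servlet; it is a simplified Python illustration in which the translate callback stands in for the SAND-Tcl generation, the single send/receive framing is a simplification, and the ports and buffer sizes are arbitrary:

import socket
import zlib

def relay_one_request(listen_port, kernel_addr, translate):
    with socket.socket() as srv:
        srv.bind(("", listen_port))
        srv.listen(1)
        client, _ = srv.accept()
        with client:
            query = client.recv(65536).decode()        # request from the client piece
            script = translate(query)                  # stand-in for building a SAND-Tcl script
            with socket.create_connection(kernel_addr) as kernel:
                kernel.sendall(script.encode())
                reply = kernel.recv(1 << 20)           # result produced by the kernel
            client.sendall(zlib.compress(reply))       # compressed before crossing the network

The client piece would decompress the stream with the matching routine before parsing the results.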
Notice that if another spatial database were to be used
instead of the SAND kernel, then only a simple
modification to the servlet would need to be made in order for the
SAND Internet Browser to function properly. In
particular, the queries sent by the client would need to be recoded
into another query language which is native to this different
spatial database. The format of the protocol used for
communication between the servlet and the client is unaffected.
3. THE PEER-TO-PEER APPROACH
Many users may want to work on a complete spatial data
set for a prolonged period of time. In this case, making an
initial investment of downloading the whole data set may be
needed to guarantee a satisfactory session. Unfortunately,
spatial data tends to be large. A few download requests
to a large data set from a set of idle clients waiting to be
served can slow the server to a crawl. This is due to the fact
that the common client-server approach to transferring data
between the two ends of a connection assumes a designated
role for each one of the ends (i.e, some clients and a server).
We built APPOINT as a centralized peer-to-peer system
to demonstrate our approach for improving the common
client-server systems. A server still exists. There is a
central source for the data and a decision mechanism for the
service. The environment still functions as a client-server
environment under many circumstances. Yet, unlike many
common client-server environments, APPOINT maintains
more information about the clients. This includes
inventories of what each client downloads, their availabilities, etc.
When the client-server service starts to perform poorly or
a request for a data item comes from a client with a poor
connection to the server, APPOINT can start appointing
appropriate active clients of the system to serve on behalf
of the server, i.e., clients who have already volunteered their
services and can take on the role of peers (hence, moving
from a client-server scheme to a peer-to-peer scheme). The
directory service for the active clients is still performed by
the server but the server no longer serves all of the requests.
In this scheme, clients are used mainly for the purpose of
sharing their networking resources rather than introducing
new content and hence they help offload the server and scale
up the service. The existence of a server is simpler in terms
of management of dynamic peers in comparison to pure
peer-to-peer approaches where a flood of messages to discover
who is still active in the system should be used by each peer
that needs to make a decision. The server is also the main
source of data and under regular circumstances it may not
forward the service.
Data is assumed to be formed of files. A single file forms
the atomic means of communication. APPOINT optimizes
requests with respect to these atomic requests. Frequently
accessed data sets are replicated as a byproduct of having
been requested by a large number of users. This opens up
the potential for bypassing the server in future downloads for
the data by other users as there are now many new points of
access to it. Bypassing the server is useful when the server"s
bandwidth is limited. Existence of a server assures that
unpopular data is also available at all times. The service
depends on the availability of the server. The server is now
more resilient to congestion as the service is more scalable.
Backups and other maintenance activities are already
being performed on the server and hence no extra
administrative effort is needed for the dynamic peers. If a peer goes
down, no extra precautions are taken. In fact, APPOINT
does not require any additional resources from an already
existing client-server environment but, instead, expands its
capability. The peers simply get on to or get off from a table
on the server.
Uploading data is achieved in a similar manner as
downloading data. For uploads, the active clients can again be
utilized. Users can upload their data to a set of peers other
than the server if the server is busy or resides in a distant
location. Eventually the data is propagated to the server.
All of the operations are performed in a transparent
fashion to the clients. Upon initial connection to the server,
they can be queried as to whether or not they want to share
their idle networking time and disk space. The rest of the
operations follow transparently after the initial contact.
APPOINT works on the application layer but not on lower
layers. This achieves platform independence and easy
deployment of the system. APPOINT is not a replacement but
an addition to the current client-server architectures. We
developed a library of function calls that when placed in a
client-server architecture starts the service. We are
developing advanced peer selection schemes that incorporate the
location of active clients, bandwidth among active clients,
data-size to be transferred, load on active clients, and
availability of active clients to form a complete means of selecting
the best clients that can become efficient alternatives to the
server.
With APPOINT we are defining a very simple API that
could be used within an existing client-server system easily.
Instead of denying service or providing a slow connection, this API can be utilized to forward the service appropriately. The
API for the server side is:
start(serverPortNo)
makeFileAvailable(file,location,boolean)
callback receivedFile(file,location)
callback errorReceivingFile(file,location,error)
stop()
Similarly the API for the client side is:
start(clientPortNo,serverPortNo,serverAddress)
makeFileAvailable(file,location,boolean)
receiveFile(file,location)
sendFile(file,location)
stop()
The server, after starting the APPOINT service, can make
all of the data files available to the clients by using the
makeFileAvailable method. This will enable APPOINT
to treat the server as one of the peers.
The two callback methods of the server are invoked when
a file is received from a client, or when an error is
encountered while receiving a file from a client.
Figure 2: The localization operation in APPOINT.
APPOINT guarantees that at least one of the callbacks will be called so
that the user (who may not be online anymore) can always
be notified (i.e., via email). Clients localizing large data
files can make these files available to the public by using the
makeFileAvailable method on the client side.
For example, in our SAND Internet Browser, we have the
localization of spatial data as a function that can be chosen
from our menus. This functionality enables users to
download data sets completely to their local disks before starting
their queries or analysis. In our implementation, we have
calls to the APPOINT service both on the client and the
server sides as mentioned above. Hence, when a localization
request comes to the SAND Internet Browser, the browser
leaves the decisions to optimally find and localize a data set
to the APPOINT service. Our server also makes its data
files available over APPOINT. The mechanism for the
localization operation is shown with more details from the
APPOINT protocols in Figure 2. The upload operation is
performed in a similar fashion.
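To make the calls above concrete, the following sketch shows how a server and a client might drive them. The Python binding (the appoint module and its AppointServer and AppointClient classes), the file names, and the addresses are all hypothetical; only the method names mirror the listing above:

from appoint import AppointServer, AppointClient   # hypothetical binding of the API above

# Server side: start the service and publish the server's data files so that
# APPOINT can treat the server as one of the peers.
server = AppointServer()
server.start(server_port_no=9000)
server.make_file_available("county_roads", "/srv/sand/county_roads", True)  # last argument as in the listing above
server.on_received_file = lambda f, loc: print("received", f, "at", loc)
server.on_error_receiving_file = lambda f, loc, err: print("failed", f, err)

# Client side: join the service, localize a data set, then share the local copy
# so that later downloads by other users may bypass a busy server.
client = AppointClient()
client.start(client_port_no=9001, server_port_no=9000,
             server_address="sand.example.org")
client.receive_file("county_roads", "/home/user/cache/county_roads")
client.make_file_available("county_roads", "/home/user/cache/county_roads", True)
client.stop()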
4. RELATED WORK
There has been a substantial amount of research on
remote access to spatial data. One specific approach has
been adopted by numerous Web-based mapping services
(MapQuest [5], MapsOnUs [6], etc.). The goal in this
approach is to enable remote users, typically only equipped
with standard Web browsers, to access the company"s
spatial database server and retrieve information in the form of
pictorial maps from them. The solution presented by most
of these vendors is based on performing all the calculations
on the server side and transferring only bitmaps that
represent results of user queries and commands. Although the
advantage of this solution is the minimization of both
hardware and software resources on the client side, the resulting
product has severe limitations in terms of available
functionality and response time (each user action results in a new
bitmap being transferred to the client).
Work described in [9] examines a client-server
architecture for viewing large images that operates over a
low-bandwidth network connection. It presents a technique
based on wavelet transformations that allows the
minimization of the amount of data needed to be transferred over
the network between the server and the client. In this case,
while the server holds the full representation of the large
image, only a limited amount of data needs to be transferred
to the client to enable it to display a currently requested
view into the image. On the client side, the image is
reconstructed into a pyramid representation to speed up zooming
and panning operations. Both the client and the server keep
a common mask that indicates what parts of the image are
available on the client and what needs to be requested. This
also allows dropping unnecessary parts of the image from the
main memory on the server.
Other related work has been reported in [16] where a
client-server architecture is described that is designed to
provide end users with access to a server. It is assumed that
this data server manages vast databases that are impractical
to store on individual clients. This work blends raster
data management (stored in pyramids [22]) with vector data
stored in quadtrees [19, 20].
For our peer-to-peer transfer approach (APPOINT),
Napster is the forefather where a directory service is centralized
on a server and users exchange music files that they have
stored on their local disks. Our application domain, where
the data is already freely available to the public, forms a
prime candidate for such a peer-to-peer approach. Gnutella
is a pure (decentralized) peer-to-peer file exchange system.
Unfortunately, it suffers from scalability issues, i.e., floods of
messages between peers are required in order to map connectivity
in the system. Other systems followed these popular
systems, each addressing a different flavor of sharing over
the Internet. Many peer-to-peer storage systems have also
emerged recently. PAST [18], Eternity Service [7], CFS [10],
and OceanStore [15] are some examples.
Some of these systems have focused on anonymity while
others have focused on persistence of storage. Also, other
approaches, like SETI@Home [21], made other resources, such
as idle CPUs, work together over the Internet to solve large
scale computational problems. Our goal is different from that of
these approaches. With APPOINT, we want to improve
existing client-server systems in terms of performance by using
idle networking resources among active clients. Hence, other
issues like anonymity, decentralization, and persistence of
storage were less important in our decisions. Confirming
the authenticity of the indirectly delivered data sets is not
yet addressed with APPOINT. We want to expand our
research, in the future, to address this issue.
From our perspective, although APPOINT employs some
of the techniques used in peer-to-peer systems, it is also
closely related to current Web caching architectures.
Squirrel [13] forms the middle ground. It creates a pure
peer-to-peer collaborative Web cache among the Web browser caches
of the machines in a local-area network. Except for this
recent peer-to-peer approach, Web caching is mostly a
well-studied topic in the realm of server/proxy level caching [8,
11, 14, 17]. Collaborative Web caching systems, the most
relevant of these for our research, focus on creating either
hierarchical, hash-based, central directory-based, or
multicast-based caching schemes. We do not compete with
these approaches. In fact, APPOINT can work in
tandem with collaborative Web caching if they are deployed
together. We try to address the situation where a request
arrives at a server, meaning all the caches report a miss.
Hence, the point where the server is reached can be used to
take a central decision but then the actual service request
can be forwarded to a set of active clients, i.e., the
download and upload operations. Cache misses are especially
common in the type of large data-based services on which
we are working. Most of the Web caching schemes that are
in use today employ a replacement policy that gives priority
to replacing the largest-sized items over smaller-sized
ones. Hence, these policies would lead to the immediate
replacement of our relatively large data files even though they
may be used frequently. In addition, in our case, the user
community that accesses a certain data file may also be very
dispersed from a network point of view and thus cannot take
advantage of any of the caching schemes. Finally, none of
the Web caching methods address the symmetric issue of
large data uploads.
5. A SAMPLE APPLICATION
FedStats [1] is an online source that enables ordinary
citizens to access official statistics of numerous federal agencies
without knowing in advance which agency produced them.
We are using a FedStats data set as a testbed for our work.
Our goal is to provide more power to the users of FedStats
by utilizing the SAND Internet Browser. As an example,
we looked at two data files corresponding to
Environmental Protection Agency (EPA)-regulated facilities that have
chlorine and arsenic, respectively. For each file, we had the
following information available: EPA-ID, name, street, city,
state, zip code, latitude, longitude, followed by flags to
indicate if that facility is in the following EPA programs:
Hazardous Waste, Wastewater Discharge, Air Emissions,
Abandoned Toxic Waste Dump, and Active Toxic Release.
We put this data into a SAND relation where the spatial
attribute 'location' corresponds to the latitude and
longitude. Some queries that can be handled with our system on
this data include (a sketch of the corresponding tuple layout is given after the list):
1. Find all EPA-regulated facilities that have arsenic and
participate in the Air Emissions program, and:
(a) Lie in Georgia to Illinois, alphabetically.
(b) Lie within Arkansas or 30 miles within its border.
(c) Lie within 30 miles of the border of Arkansas (i.e.,
both sides of the border).
2. For each EPA-regulated facility that has arsenic, find
all EPA-regulated facilities that have chlorine and:
(a) That are closer to it than to any other
EPA-regulated facility that has arsenic.
(b) That participate in the Air Emissions program
and are closer to it than to any other
EPA-regulated facility which has arsenic. In order to
avoid reporting a particular facility more than
once, we use our 'group by EPA-ID' mechanism.
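To make the shape of this relation explicit, the following minimal Java sketch shows one way a single tuple could be represented; the record and field names are our own and are not taken from the actual SAND schema.

// Hypothetical representation of one tuple of the EPA facility relation.
// The spatial attribute 'location' corresponds to latitude and longitude.
public record EpaFacility(
        String epaId,
        String name,
        String street,
        String city,
        String state,
        String zipCode,
        double latitude,
        double longitude,
        boolean hazardousWaste,
        boolean wastewaterDischarge,
        boolean airEmissions,
        boolean abandonedToxicWasteDump,
        boolean activeToxicRelease) {
}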
Figure 3 illustrates the output of an example query that
finds all arsenic sites within a given distance of the border of
Arkansas. The sites are obtained in an incremental manner
with respect to a given point. This ordering is shown by
using different color shades.
With this example data, it is possible to work with the
SAND Internet Browser online as an applet (connecting to
a remote server) or after localizing the data and then
opening it locally. In the first case, for each action taken, the
client-server architecture will decide what to ask for from
the server. In the latter case, the browser will use the
peer-to-peer APPOINT architecture for first localizing the data.
6. CONCLUDING REMARKS
An overview of our efforts in providing remote access to
large spatial data has been given. We have outlined our
approaches and introduced their individual elements. Our
client-server approach improves the system performance by
using efficient caching methods when a remote server is
accessed from thin-clients. APPOINT forms an alternative
approach that improves performance under an existing
client-server system by using idle client resources when individual
users want to work on a data set for longer periods of time
using their client computers.
For the future, we envision the development of new efficient
algorithms that will support large online data transfers within
our peer-to-peer approach using multiple peers
simultaneously. We assume that a peer (client) can become
unavailable at any time and hence provisions need to be in place
to handle such a situation. To address this, we will augment
our methods to include efficient dynamic updates. Upon
completion of this step of our work, we also plan to run
comprehensive performance studies on our methods.
Another issue is how to access data from different sources
in different formats. In order to access multiple data sources
in real time, it is desirable to look for a mechanism that
would support data exchange by design. The XML
protocol [3] has emerged to become virtually a standard for
describing and communicating arbitrary data. GML [4] is
an XML variant that is becoming increasingly popular for
exchange of geographical data. We are currently working
on making SAND XML-compatible so that the user can
instantly retrieve spatial data provided by various agencies in
the GML format via their Web services and then explore,
query, or process this data further within the SAND
framework. This will turn the SAND system into a universal tool
for accessing any spatial data set as it will be deployable on
most platforms, work efficiently given large amounts of data,
be able to tap any GML-enabled data source, and provide
an easy-to-use graphical user interface. This will also
convert the SAND system from a research-oriented prototype
into a product that could be used by end users for
accessing, viewing, and analyzing their data efficiently and with
minimum effort.
7. REFERENCES
[1] Fedstats: The gateway to statistics from over 100 U.S.
federal agencies. http://www.fedstats.gov/, 2001.
[2] Arcinfo: Scalable system of software for geographic
data creation, management, integration, analysis, and
dissemination. http://www.esri.com/software/
arcgis/arcinfo/index.html, 2002.
[3] Extensible markup language (xml).
http://www.w3.org/XML/, 2002.
[4] Geography markup language (gml) 2.0.
http://opengis.net/gml/01-029/GML2.html, 2002.
[5] Mapquest: Consumer-focused interactive mapping site
on the web. http://www.mapquest.com, 2002.
[6] Mapsonus: Suite of online geographic services.
http://www.mapsonus.com, 2002.
[7] R. Anderson. The Eternity Service. In Proceedings of
the PRAGOCRYPT'96, pages 242-252, Prague, Czech
Republic, September 1996.
Figure 3: Sample output from the SAND Internet Browser - Large dark dots indicate the result of a query
that looks for all arsenic sites within a given distance from Arkansas. Different color shades are used to
indicate ranking order by the distance from a given point.
[8] L. Breslau, P. Cao, L. Fan, G. Phillips, and
S. Shenker. Web caching and Zipf-like distributions:
Evidence and implications. In Proceedings of the IEEE
Infocom'99, pages 126-134, New York, NY, March
1999.
[9] E. Chang, C. Yap, and T. Yen. Realtime visualization
of large images over a thinwire. In R. Yagel and
H. Hagen, editors, Proceedings IEEE Visualization'97
(Late Breaking Hot Topics), pages 45-48, Phoenix,
AZ, October 1997.
[10] F. Dabek, M. F. Kaashoek, D. Karger, R. Morris, and
I. Stoica. Wide-area cooperative storage with CFS. In
Proceedings of the ACM SOSP'01, pages 202-215,
Banff, AL, October 2001.
[11] A. Dingle and T. Partl. Web cache coherence.
Computer Networks and ISDN Systems,
28(7-11):907-920, May 1996.
[12] C. Esperança and H. Samet. Experience with
SAND/Tcl: a scripting tool for spatial databases.
Journal of Visual Languages and Computing,
13(2):229-255, April 2002.
[13] S. Iyer, A. Rowstron, and P. Druschel. Squirrel: A
decentralized peer-to-peer Web cache. Rice
University/Microsoft Research, submitted for
publication, 2002.
[14] D. Karger, A. Sherman, A. Berkheimer, B. Bogstad,
R. Dhanidina, K. Iwamoto, B. Kim, L. Matkins, and
Y. Yerushalmi. Web caching with consistent hashing.
Computer Networks, 31(11-16):1203-1213, May 1999.
[15] J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski,
P. Eaton, D. Geels, R. Gummadi, S. Rhea,
H. Weatherspoon, W. Weimer, C. Wells, and B. Zhao.
OceanStore: An architecture for global-scale persistent
store. In Proceedings of the ACM ASPLOS'00, pages
190-201, Cambridge, MA, November 2000.
[16] M. Potmesil. Maps alive: viewing geospatial
information on the WWW. Computer Networks and
ISDN Systems, 29(8-13):1327-1342, September 1997.
Also Hyper Proceedings of the 6th International World
Wide Web Conference, Santa Clara, CA, April 1997.
[17] M. Rabinovich, J. Chase, and S. Gadde. Not all hits
are created equal: Cooperative proxy caching over a
wide-area network. Computer Networks and ISDN
Systems, 30(22-23):2253-2259, November 1998.
[18] A. Rowstron and P. Druschel. Storage management
and caching in PAST, a large-scale, persistent
peer-to-peer storage utility. In Proceedings of the ACM
SOSP'01, pages 160-173, Banff, AL, October 2001.
[19] H. Samet. Applications of Spatial Data Structures:
Computer Graphics, Image Processing, and GIS.
Addison-Wesley, Reading, MA, 1990.
[20] H. Samet. The Design and Analysis of Spatial Data
Structures. Addison-Wesley, Reading, MA, 1990.
[21] SETI@Home. http://setiathome.ssl.berkeley.edu/,
2001.
[22] L. J. Williams. Pyramidal parametrics. Computer
Graphics, 17(3):1-11, July 1983. Also Proceedings of
the SIGGRAPH"83 Conference, Detroit, July 1983.
Context Awareness for Group Interaction Support
ABSTRACT
In this paper, we present an implemented system for supporting group interaction in mobile distributed computing environments. First, an introduction to context computing and a motivation for using contextual information to facilitate group interaction is given. We then present the architecture of our system, which consists of two parts: a subsystem for location sensing that acquires information about the location of users as well as spatial proximities between them, and one for the actual context-aware application, which provides services for group interaction.
1. INTRODUCTION
Today's computing environments are characterized by an
increasing number of powerful, wirelessly connected mobile
devices. Users can move throughout an environment while
carrying their computers with them and having remote access to
information and services, anytime and anywhere. New situations
appear, where the user's context - for example his current
location or nearby people - is more dynamic; computation does
not occur at a single location and in a single context any longer,
but comprises a multitude of situations and locations. This
development leads to a new class of applications, which are
aware of the context in which they run in and thus bringing virtual
and real worlds together.
Motivated by this and by the fact that only a few studies have
been done on supporting group communication in such computing
environments [12], we have developed a system, which we refer
to as Group Interaction Support System (GISS). It supports group
interaction in mobile distributed computing environments in a
way that group members no longer need to be at the same place
in order to interact with each other or just to be aware of the
other members' situation.
In the following subchapters, we will give a short overview of
context-aware computing and motivate its benefits for supporting
group interaction. A software framework for developing
context-sensitive applications is presented, which serves as middleware
for GISS. Chapter 2 presents the architecture of GISS, and chapter
3 and 4 discuss the location sensing and group interaction
concepts of GISS in more detail. Chapter 5 gives a final summary
of our work.
1.1 What is Context Computing?
According to Merriam-Webster's Online Dictionary1, context is
defined as the interrelated conditions in which something exists
or occurs. Because this definition is very general, many
approaches have been made to define the notion of context with
respect to computing environments.
Most definitions of context are done by enumerating examples or
by choosing synonyms for context. The term context-aware was
first introduced in [10], where context is referred to as
location, identities of nearby people and objects, and changes to
those objects. In [2], context is also defined by an enumeration of
examples, namely location, identities of the people around the
user, the time of the day, season, temperature etc. [9] defines
context as the user's location, environment, identity and time.
Here we conform to a widely accepted and more formal
definition, which defines context as any information that can be
used to characterize the situation of an entity. An entity is a
person, place, or object that is considered relevant to the
interaction between a user and an application, including the user
and applications themselves. [4]
[4] identifies four primary types of context information
(sometimes referred to as context dimensions) that are - with
respect to characterizing the situation of an entity - more
important than others. These are location, identity, time and
activity, which can also be used to derive other sources of
contextual information (secondary context types). For example, if
we know a person's identity, we can easily derive related
information about this person from several data sources (e.g. date
of birth or e-mail address).
According to this definition, [4] defines a system to be
context-aware if it uses context to provide relevant information and/or
services to the user, where relevancy depends on the user's task.
[4] also gives a classification of features for context-aware
applications, which comprises presentation of information and
services to a user, automatic execution of a service and tagging of
context to information for later retrieval.
Figure 1. Layers of a context-aware system
Context computing is based on two major issues, namely
identifying relevant context (identity, location, time, activity) and
using obtained context (automatic execution, presentation,
tagging). In order to do this, there are a few layers in between (see
Figure 1). First, the obtained low-level context information has to
be transformed, aggregated and interpreted (context
transformation) and represented in an abstract context world
model (context representation), either centralized or
decentralized. Finally, the stored context information is used to
trigger certain context events (context triggering). [7]
1.2 Group Interaction in Context
After these abstract and formal definitions of what context and
context computing are, we will now focus on the main goal of this
work, namely how the interaction of mobile group members can
be supported by using context information.
In [6] we have identified organizational systems to be crucial for
supporting mobile groups (see Figure 2). First, there has to be an
Information and Knowledge Management System, which is
capable of supporting a team with its information-processing and
knowledge-gathering needs. The next part is the Awareness
System, which is dedicated to the perceptualisation of the effects
of team activity. It does this by communicating work context,
agenda and workspace information to the users. The Interaction
Systems provide support for the communication among team
members, either synchronous or asynchronous, and for the shared
access to artefacts, such as documents. Mobility Systems deploy
mechanisms to enable any-place access to team memory as well
as the capturing and delivery of awareness information from and
to any place. Last but not least, the organisational
innovation system integrates aspects of the team itself, like roles,
leadership and shared facilities.
With respect to these five aspects of team support, we focus on
interaction and partly cover mobility- and awareness-support.
Group interaction includes all means that enable group members
to communicate freely with all the other members. At this point,
the question arises how context information can be used to support
group interaction. We believe that information about
the current situation of a person adds value to
existing group interaction systems. Context information facilitates
group interaction by allowing each member to be aware of the
availability status or the current location of each other group
member, which again makes it possible to form groups
dynamically, to place virtual post-its in the real world or to
determine which people are around.
Figure 2. Support for Mobile Groups [6]
Most of today's context-aware applications use location and time
only, and location is referred to as a crucial type of context
information [3]. We also see the importance of location
information in mobile and ubiquitous environments, wherefore a
main focus of our work is on the utilization of location
information and information about users in spatial proximity.
Nevertheless, we believe that location, as the only used type of
context information, is not sufficient to support group interaction,
wherefore we also take advantage of the other three primary
types, namely identity, time and activity. This provides a
comprehensive description of a user's current situation and thus
enables numerous means for supporting group interaction, which
are described in detail in chapter 4.4.
When we look at the types of context information stated above,
we can see that all of them are single user-centred, taking into
account only the context of the user itself. We believe that, for the
support of group interaction, the status of the group itself also
has to be taken into account. Therefore, we have added a fifth
context dimension, group context, which comprises more than the sum of
the individual members' contexts. Group context includes any
information about the situation of a whole group, for example
how many members a group currently has or if a certain group
meets right now.
1.3 Context Middleware
The Group Interaction Support System (GISS) uses the
software framework introduced in [1], which serves as a middleware for
developing context-sensitive applications. This so-called Context
Framework is based on a distributed communication architecture
and it supports different kinds of transport protocols and message
coding mechanisms.
A main feature of the framework is the abstraction of context
information retrieval via various sensors and its delivery to a level
where no difference appears, for the application designer,
between these different kinds of context retrieval mechanisms; the
information retrieval is hidden from the application developer.
This is achieved by so-called entities, which describe
objects - e.g. a human user - that are important for a certain context
scenario.
Entities express their functionality by the use of so-called
attributes, which can be loaded into the entity. These attributes
are complex pieces of software, which are implemented as Java
classes. Typical attributes are encapsulations of sensors, but they
can also be used to implement context services, for example to
notify other entities about location changes of users.
Each entity can contain a collection of such attributes, where an
entity itself is an attribute. The initial set of attributes an entity
contains can change dynamically at runtime, if an entity loads or
unloads attributes from the local storage or over the network. In
order to load and deploy new attributes, an entity has to reference
a class loader and a transport and lookup layer, which manages
the lookup mechanism for discovering other entities and the
transport. XML configuration files specify which initial set of
entities should be loaded and which attributes these entities own.
The communication between entities and attributes is based on
context events. Each attribute is able to trigger events, which are
addressed to other attributes and entities respectively,
independently of which physical computer they are running on.
Among other things, an event contains the name of the event and
a list of parameters delivering information about the event itself.
Related to this event-based architecture is the use of ECA
(Event-Condition-Action) rules for defining the behaviour of the
context system. Therefore, every entity has a rule-interpreter,
which catches triggered events, checks conditions associated with
them and causes certain actions. These rules are referenced by the
entity's XML configuration. A rule itself is even able to trigger
the insertion of new rules or the unloading of existing rules at
runtime in order to change the behaviour of the context system
dynamically.
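To give an impression of what this event-based interaction might look like in code, the following hedged Java sketch models a context event and an attribute that fires one; the class and method names are purely illustrative and do not reproduce the actual API of the framework described in [1].

import java.util.Map;
import java.util.function.Consumer;

// Illustrative context event: a name plus a list of named parameters.
record ContextEvent(String name, Map<String, Object> parameters) {}

// Illustrative attribute that encapsulates a sensor and triggers events.
class LocationChangedAttribute {
    private final Consumer<ContextEvent> eventSink; // e.g. the entity's rule interpreter

    LocationChangedAttribute(Consumer<ContextEvent> eventSink) {
        this.eventSink = eventSink;
    }

    void onSensorReading(String userId, String symbolicLocation) {
        // Trigger an event addressed to other attributes/entities; whether it is
        // delivered locally or over the network is hidden by the framework.
        eventSink.accept(new ContextEvent("locationChanged",
                Map.of("user", userId, "location", symbolicLocation)));
    }
}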
To sum up, the context framework provides a flexible, distributed
architecture for hiding low-level sensor data from high-level
applications and it hides external communication details from the
application developer. Furthermore, it is able to adapt its
behaviour dynamically by loading attributes, entities or
ECA-rules at runtime.
2. ARCHITECTURE OVERVIEW
As GISS uses the Context Framework described in chapter 1.3 as
middleware, every user is represented by an entity, as well as the
central server, which is responsible for context transformation,
context representation and context triggering (cf. Figure 1).
A main part of our work is about the automated acquisition of
position information and its sensor-independent provision at
application level. We do not only sense the current location of
users, but also determine spatial proximities between them.
Developing the architecture, we focused on keeping the client as
simple as possible and reducing the communication between
client and server to a minimum.
Each client may have various location and/or proximity sensors
attached, which are encapsulated by respective Context
Framework-attributes (Sensor Encapsulation). These attributes
are responsible for integrating native sensor-implementations into
the Context Framework and sending sensor-dependent position
information to the server. We consider it very important to
support different types of sensors even at the same time, in order
to improve location accuracy on the one hand, while providing a
pervasive location-sensing environment with seamless transition
between different location sensing techniques on the other hand.
All location- and proximity-sensors supported are represented by
server-side context-attributes, which correspond to the client-side
sensor encapsulation-attributes and abstract the sensor-dependent
position information received from all users via the wireless
network (sensor abstraction). This requires a context repository,
where the mapping of diverse physical positions to standardized
locations is stored.
The standardized location- and proximity-information of each
user is then passed to the so-called Sensor Fusion-attributes,
one for symbolic locations and a second one for spatial
proximities. Their job is to merge location- and
proximity-information of clients, respectively, which is described in detail in
Chapter 3.3. Every time the symbolic location of a user or the
spatial proximity between two users changes, the Sensor
Fusion-attributes notify the GISS Core-attribute, which
controls the application.
Because of the abstraction of sensor-dependent position
information, the system can easily be extended by additional
sensors, just by implementing the (typically two) attributes for
encapsulating sensors (some sensors may not need a client-side
part), abstracting physical positions and observing the interface to
GISS Core.
Figure 3. Architecture of the Group Interaction Support
System (GISS)
The GISS Core-attribute is the central coordinator of the
application as it presents itself to the user. It not only serves as an
interface to the location-sensing subsystem, but also collects
further context information in other dimensions (time, identity or
activity).
Every time a change in the context of one or more users is
detected, GISS Core evaluates the effect of these changes on
the user, on the groups he belongs to and on the other members of
these groups. Whenever necessary, events are thrown to the
affected clients to trigger context-aware activities, like changing
the presentation of awareness information or the execution of
services.
The client-side part of the application is kept as simple as
possible. Furthermore, modular design was not only an issue on
the sensor side but also when designing the user interface
architecture. Thus, the complete user interface can be easily
exchanged, if all of the defined events are taken into account and
understood by the new interface-attribute.
The currently implemented user interface is split up in two parts,
which are also represented by two attributes. The central attribute
on client-side is the so-called Instant Messenger Encapsulation,
which on the one hand interacts with the server through events
and on the other hand serves as a proxy for the external
application the user interface is built on.
As external application, we use an existing open source instant
messenger - the ICQ2-compliant Simple Instant Messenger
(SIM)3. We have chosen an instant messenger as front-end
because it provides a well-known interface for most users and
facilitates a seamless integration of group interaction support, thus
increasing acceptance and ease of use. As the basic functionality
of the instant messenger - to serve as a client in an instant
messenger network - remains fully functional, our application is
able to use the features already provided by the messenger. For
example, the contexts activity and identity are derived from the
messenger network as it is described later.
The Instant Messenger Encapsulation is also responsible for
supporting group communication. Through the interface of the
messenger, it provides means of synchronous and asynchronous
communication as well as a context-aware reminder system and
tools for managing groups and one's own availability status.
The second part of the user interface is a visualisation of the
user's locations, which is implemented in the attribute Viewer.
The current implementation provides a two-dimensional map of
the campus, but it can easily be replaced by other visualisations, a
three-dimensional VRML-model for example. Furthermore, this
visualisation is used to show the artefacts for asynchronous
communication. Based on a floor plan-view of the geographical
area the user currently resides in, it gives a quick overview of
which people are nearby and their state, and provides means to
interact with them.
In the following chapters 3 and 4, we describe the location
sensing-backend and the application front-end for supporting
group interaction in more detail.
3. LOCATION SENSING
In the following chapter, we will introduce a location model,
which is used for representing locations; afterwards, we will
describe the integration of location- and proximity-sensors in
more detail. Finally, we will have a closer look at the fusion of
location- and proximity-information acquired by various sensors.
2 http://www.icq.com/
3 http://sim-icq.sourceforge.net
3.1 Location Model
A location model (i.e. a context representation for the
context information location) is needed to represent the locations of users,
in order to be able to facilitate location-related queries like given
a location, return a list of all the objects there or given an
object, return its current location. In general, there are two
approaches [3,5]: symbolic models, which represent location as
abstract symbols, and geometric models, which represent
location as coordinates.
We have chosen a symbolic location model, which refers to
locations as abstract symbols like Room P111 or Physics
Building, because we do not require geometric location data.
Instead, abstract symbols are more convenient for human
interaction at application level. Furthermore, we use a symbolic
location containment hierarchy similar to the one introduced in
[11], which consists of top-level regions, which contain buildings,
which contain floors, and the floors again contain rooms. We also
distinguish four types, namely region (e.g. a whole campus),
section (e.g. a building or an outdoor section), level (e.g. a certain
floor in a building) and area (e.g. a certain room). We introduce a
fifth type of location, which we refer to as semantic. These
so-called semantic locations can appear at any level in the hierarchy
and they can be nested, but they do not necessarily have a
geographic representation. Examples for such semantic locations
are tagged objects within a room (e.g. a desk and a printer on this
desk) or the name of a department, which contains certain rooms.
Figure 4. Symbolic Location Containment Hierarchy
The hierarchy of symbolic locations as well as the type of each
position is stored in the context repository.
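A minimal Java sketch of such a containment hierarchy is given below; the class is our own illustration, mirroring the five location types and the parent/child structure described above, together with the computation of the most accurate common location that is used later for fusion and display.

import java.util.ArrayList;
import java.util.List;

// Illustrative node of the symbolic location containment hierarchy.
class SymbolicLocation {
    enum Type { REGION, SECTION, LEVEL, AREA, SEMANTIC }

    final String name;
    final Type type;
    final SymbolicLocation parent;
    final List<SymbolicLocation> children = new ArrayList<>();

    SymbolicLocation(String name, Type type, SymbolicLocation parent) {
        this.name = name;
        this.type = type;
        this.parent = parent;
        if (parent != null) parent.children.add(this);
    }

    // Most accurate common location of two locations (their least upper bound).
    static SymbolicLocation commonLocation(SymbolicLocation a, SymbolicLocation b) {
        List<SymbolicLocation> pathA = pathToRoot(a);
        for (SymbolicLocation ancestor = b; ancestor != null; ancestor = ancestor.parent) {
            if (pathA.contains(ancestor)) return ancestor;
        }
        return null; // no common ancestor, e.g. different regions
    }

    private static List<SymbolicLocation> pathToRoot(SymbolicLocation loc) {
        List<SymbolicLocation> path = new ArrayList<>();
        for (SymbolicLocation l = loc; l != null; l = l.parent) path.add(l);
        return path;
    }
}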
3.2 Sensors
Our architecture supports two different kinds of sensors: location
sensors, which acquire location information, and proximity
sensors, which detect spatial proximities between users.
As described above, each sensor has a server-side and in most cases a
corresponding client-side implementation, too. While the
client-side attributes (Sensor Encapsulation) are responsible for acquiring
low-level sensor data and transmitting it to the server, the
corresponding Sensor Abstraction-attributes transform them
into a uniform and sensor-independent format, namely symbolic
locations and IDs of users in spatial proximity, respectively.
Afterwards, the respective attribute Sensor Fusion is
triggered with this sensor-independent information of a certain
user, detected by a particular sensor. Such notifications are
performed every time the sensor acquires new information.
Accordingly, Sensor Abstraction-attributes are responsible for
detecting when a certain sensor is no longer available on the client
side (e.g. if it has been unplugged by the user) or when position
or proximity can no longer be determined (e.g. the
RFID reader cannot detect tags), and for notifying the corresponding
sensor fusion about this.
3.2.1 Location Sensors
In order to sense physical positions, the Sensor
Encapsulation-attributes asynchronously transmit sensor-dependent position
information to the server. The corresponding location Sensor
Abstraction-attributes collect these physical positions delivered
by the sensors of all users, and perform a repository-lookup in
order to get the associated symbolic location. This requires certain
tables for each sensor, which map physical positions to symbolic
locations. One physical position may have multiple symbolic
locations at different accuracy-levels in the location hierarchy
assigned to, for example if a sensor covers several rooms. If such
a mapping could be found, an event is thrown in order to notify
the attribute Location Sensor Fusion about the symbolic
locations a certain sensor of a particular user determined.
We have prototypically implemented three kinds of location
sensors, which are based on WLAN (IEEE 802.11), Bluetooth and
RFID (Radio Frequency Identification). We have chosen these
three completely different sensors because of their differences
concerning accuracy, coverage and administrative effort, in order
to evaluate the flexibility of our system (see Table 1).
The most accurate one is an RFID sensor, which is based on an
active RFID-reader. As soon as the reader is plugged into the
client, it scans for active RFID tags in range and transmits their
serial numbers to the server, where they are mapped to symbolic
locations. We also take into account RSSI (Received Signal Strength
Indication), which provides a position accuracy of a few
centimetres and thus enables us to determine which RFID-tag is
nearest. Due to this high accuracy, RFID is used for locating users
within rooms. The administration is quite simple; once a new
RFID tag is placed, its serial number simply has to be assigned to
a single symbolic location. A drawback is the poor availability,
which can be traced back to the fact that RFID readers are still
very expensive.
The second one is an 802.11 WLAN sensor. Therefore, we
integrated a purely software-based, commercial WLAN
positioning system for tracking clients on the university
campus-wide WLAN infrastructure. The achieved position accuracy is in
the range of a few meters and thus is suitable for location sensing at
the granularity of rooms. A big disadvantage is that a map of the
whole area has to be calibrated with measuring points at a
distance of 5 meters each. Because most mobile computers are
equipped with WLAN technology and the positioning-system is a
software-only solution, nearly everyone is able to use this kind of
sensor.
Finally, we have implemented a Bluetooth sensor, which detects
Bluetooth tags (i.e. Bluetooth-modules with known position) in
range and transmits them to the server that maps to symbolic
locations. Because we do not use signal strength information
in the current implementation, the accuracy is above
10 meters and therefore a single Bluetooth MAC address is
associated with several symbolic locations, according to the
physical locations such a Bluetooth module covers. This leads to
the disadvantage that the range of each Bluetooth-tag has to be
determined and mapped to symbolic locations within this range.
Table 1. Comparison of implemented sensors
Sensor      Accuracy   Coverage     Administration
RFID        < 10 cm    poor         easy
WLAN        1-4 m      very good    very time-consuming
Bluetooth   ~ 10 m     good         time-consuming
3.2.2 Proximity Sensors
Any sensor that is able to detect whether two users are in spatial
proximity is referred to as proximity sensor. Similar to the
location sensors, the Proximity Sensor Abstraction-attributes
collect physical proximity information of all users and transform
them to mappings of user-IDs.
We have implemented two types of proximity-sensors, which are
based on Bluetooth on the one hand and on fused symbolic
locations (see chapter 3.3.1) on the other hand.
The Bluetooth-implementation goes along with the
implementation of the Bluetooth-based location sensor. The
already determined Bluetooth MAC addresses in range of a
certain client are compared with those of all other clients,
and each time the attribute Bluetooth Sensor Abstraction
detects a match, it notifies the proximity sensor fusion about
this.
The second sensor is based on symbolic locations processed by
Location Sensor Fusion, wherefore it does not need a client-side
implementation. Each time the fused symbolic location of a
certain user changes, it checks whether he is at the same symbolic
location as another user and again notifies the proximity sensor
fusion about the proximity between these two users. The range
can be restricted to any level of the location containment
hierarchy, for example to room granularity.
A currently unresolved issue is the incomparable granularity of
different proximity sensors. For example, the symbolic locations
at same level in the location hierarchy mostly do not cover the
same geographic area.
3.3 Sensor Fusion
Core of the location sensing subsystem is the sensor fusion. It
merges data of various sensors, while coping with differences
concerning accuracy, coverage and sample-rate. According to the
two kinds of sensors described in chapter 3.2, we distinguish
between fusion of location sensors on the one hand, and fusion of
proximity sensors on the other hand.
The fusion of symbolic locations as well as the fusion of spatial
proximities operates on standardized information (cf. Figure 3).
This has the advantage that additional position- and
proximity-sensors can be added easily or the fusion algorithms can be
replaced by ones that are more sophisticated.
Fusion is performed for each user separately and takes into
account the measurements at a single point in time only (i.e. no
history information is used for determining the current location of
a certain user). The algorithm collects all events thrown by the
Sensor Abstraction-attributes, performs fusion and triggers the
GISS Core-attribute if the symbolic location of a certain user or
the spatial proximity between users changed.
An important feature is the persistent storage of location- and
proximity-history in a database in order to allow future retrieval.
This enables applications to visualize the movement of users for
example.
3.3.1 Location Sensor Fusion
Goal of the fusion of location information is to improve precision
and accuracy by merging the set of symbolic locations supplied
by various location sensors, in order to reduce the number of
these locations to a minimum, ideally to a single symbolic
location per user. This is quite difficult, because different sensors
may differ in accuracy and sample rate as well.
The Location Sensor Fusion-attribute is triggered by events,
which are thrown by the Location Sensor
Abstraction-attributes. These events contain information about the identity of
the user concerned, his current location and the sensor by which
the location has been determined.
If the attribute Location Sensor Fusion receives such an event,
it checks if the set of symbolic locations of the user
concerned has changed (compared with the last event). If this is
the case, it notifies the GISS Core-attribute about all symbolic
locations this user is currently associated with.
However, this information is not very useful on its own if a
certain user is associated with several locations. As described in
chapter 3.2.1, a single location sensor may deliver multiple
symbolic locations. Moreover, a certain user may have several
location sensors, which supply symbolic locations differing in
accuracy (i.e. different levels in the location containment
hierarchy). To cope with this challenge, we implemented a fusion
algorithm in order to reduce the number of symbolic locations to a
minimum (ideally to a single location).
In a first step, each symbolic location is associated with its
number of occurrences. A symbolic location may occur several
times if it is referred to by more than one sensor or if a single
sensor detects multiple tags, which again refer to several
locations. Furthermore, this number is added to the previously
calculated number of occurrences of each symbolic location,
which is a child-location of the considered one in the location
containment hierarchy. For example, if - in Figure 4 - room2
occurs two times and desk occurs a single time, the value 2 of
room2 is added to the value 1 of desk, whereby desk finally
gets the value 3. In a final step, only those symbolic locations are
left which are assigned with the highest number of occurrences.
A further reduction can be achieved by assigning priorities to
sensors (based on accuracy and confidence) and cumulating these
priorities for each symbolic location instead of just counting the
number of occurrences.
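The following Java sketch summarizes this counting step as we understand it; it represents each symbolic location by its fully qualified path (the same '@'-separated form used in chapter 4.4.1), so that ancestors are simply path suffixes, and it is meant as an illustration of the idea rather than as the actual GISS implementation. For the example above, reporting room2 twice and desk once leaves only desk, with a score of 3.

import java.util.*;

// Illustration of the counting step: each reported location is given as its full
// path (e.g. "desk@room2@1stfloor@building1@campus"), so every suffix is an ancestor.
class LocationFusionSketch {
    static List<String> fuse(List<String> reported) {
        if (reported.isEmpty()) return List.of();
        // 1. Count how often each symbolic location was reported.
        Map<String, Integer> occurrences = new HashMap<>();
        for (String loc : reported) occurrences.merge(loc, 1, Integer::sum);
        // 2. Add the count of every ancestor to the score of each reported location.
        Map<String, Integer> score = new HashMap<>();
        for (Map.Entry<String, Integer> e : occurrences.entrySet()) {
            int s = e.getValue();
            for (String ancestor : ancestorsOf(e.getKey())) {
                s += occurrences.getOrDefault(ancestor, 0);
            }
            score.put(e.getKey(), s);
        }
        // 3. Keep only the locations with the highest score.
        int max = Collections.max(score.values());
        List<String> fused = new ArrayList<>();
        for (Map.Entry<String, Integer> e : score.entrySet()) {
            if (e.getValue() == max) fused.add(e.getKey());
        }
        return fused;
    }

    private static List<String> ancestorsOf(String path) {
        List<String> ancestors = new ArrayList<>();
        for (int i = path.indexOf('@'); i >= 0; i = path.indexOf('@', i + 1)) {
            ancestors.add(path.substring(i + 1));
        }
        return ancestors;
    }
}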
If the remaining fused locations have changed (i.e. if they differ
from the fused locations the considered user is currently
associated with), they are provided with the current timestamp,
written to the database and the GISS-attribute is notified about
where the user is probably located.
Finally, the most accurate, common location in the location
hierarchy is calculated (i.e. the least upper bound of these
symbolic locations) in order to get a single symbolic location. If it
changes, the GISS Core-attribute is triggered again.
3.3.2 Proximity Sensor Fusion
Proximity sensor fusion is much simpler than the fusion of
symbolic locations. The corresponding proximity sensor
fusion-attribute is triggered by events, which are thrown by the
Proximity Sensor Abstraction-attributes. These special events
contain information about the identity of the two users concerned,
if they are currently in spatial proximity or if proximity no longer
persists, and by which proximity-sensor this has been detected.
If the sensor fusion-attribute is notified by a certain Proximity
Sensor Abstraction-attribute about an existing spatial proximity,
it first checks if these two users are already known to be in
proximity (detected either by another user or by another
proximity-sensor of the user, which caused the event). If not, this
change in proximity is written to the context repository with
current timestamp. Similarly, if the attribute Proximity Fusion
is notified about an ended proximity, it checks if the users are still
known to be in proximity, and writes this change to the repository
if not.
Finally, if spatial proximity between the two users actually
changed, an event is thrown to notify the GISS Core-attribute
about this.
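A compact Java sketch of this bookkeeping is given below; it only illustrates the idea of keeping one unordered user pair per proximity relation and reporting a change exactly when the stored state differs, and it is not the GISS code itself.

import java.util.HashSet;
import java.util.Set;

// Illustrative proximity bookkeeping: one unordered user pair per relation.
class ProximityFusionSketch {
    private final Set<String> pairsInProximity = new HashSet<>();

    // Returns true if the stored state changed and GISS Core has to be notified.
    boolean update(String userA, String userB, boolean inProximity) {
        String key = userA.compareTo(userB) < 0 ? userA + "|" + userB
                                                : userB + "|" + userA;
        boolean changed = inProximity ? pairsInProximity.add(key)
                                      : pairsInProximity.remove(key);
        // A real implementation would also write the change, with a timestamp,
        // to the context repository before notifying GISS Core.
        return changed;
    }
}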
4. CONTEXT-SENSITIVE INTERACTION
4.1 Overview
In most of today's systems supporting interaction in groups, the
provided means lack any awareness of the user's current context,
thus being unable to adapt to his needs.
In our approach, we use context information to enhance
interaction and provide further services, which offer new
possibilities to the user. Furthermore, we believe that interaction
in groups also has to take into account the current context of the
group itself and not only the context of individual group
members. For this reason, we also retrieve information about the
group's current context, derived from the contexts of the group
members together with some sort of meta-information (see
chapter 4.3).
The sources of context used for our application correspond with
the four primary context types given in chapter 1.1 - identity (I),
location (L), time (T) and activity (A). As stated before, we also
take into account the context of the group the user is interacting
with, so that we could add a fifth type of context
information - group awareness (G) - to the classification. Using this context
information, we can trigger context-aware activities in all of the
three categories described in chapter 1.1 - presentation of
information (P), automatic execution of services (A) and tagging
of context to information for later retrieval (T).
Table 2 gives an overview of activities we have already
implemented; they are described comprehensively in chapter 4.4.
The table also shows which types of context information are used
for each activity and the category the activity could be classified
in.
Table 2. Classification of implemented context-aware
activities
Service L T I A G P A T
Location Visualisation X X X
Group Building Support X X X X
Support for Synchronous
Communication
X X X X
Support for Asynchronous
Communication
X X X X X X X
Availability Management X X X
Task Management Support X X X X
Meeting Support X X X X X X
The reason for implementing these particular features is to take
advantage of all four types of context information in order to
support group interaction by utilizing comprehensive knowledge
about the situation a single user or a whole group is in.
A critical issue for the user acceptance of such a system is the
usability of its interface. We have evaluated several ways of
presenting context-aware means of interaction to the user, until
we came to the solution we use right now. Although we think that
the user interface that has been implemented now offers the best
trade-off between seamless integration of features and ease of use,
it would be no problem to extend the architecture with other user
interfaces, even on different platforms.
The chosen solution is based on an existing instant messenger,
which offers several possibilities to integrate our system (see
chapter 4.2). The biggest advantage of this approach is that the
user is confronted with a graphical user interface he is already
used to in most cases. Furthermore, our system uses an instant
messenger account as an identifier, so that the user does not have
to register a further account anywhere else (for example, the user
can use his already existing ICQ2-account).
4.2 Instant Messenger Integration
Our system is based upon an existing instant messenger, the
so-called Simple Instant Messenger (SIM)3. The implementation of
this messenger is carried out as a project at Sourceforge4.
SIM supports multiple messenger protocols such as AIM5, ICQ2
and MSN6. It also supports connections to multiple accounts at
the same time. Furthermore, full support for SMS-notification
(where provided by the used protocol) is given.
SIM is based on a plug-in concept. All protocols as well as parts
of the user-interface are implemented as plug-ins. Its architecture
is also used to extend the application's abilities to communicate
with external applications. For this purpose, a remote control
plug-in is provided, by which SIM can be controlled from
external applications via socket connection. This remote control
interface is extensively used by GISS for retrieving the contact
list, setting the user's availability-state or sending messages. The
functionality of the plug-in was extended in several ways, for
example to accept messages for an account (as if they had
been sent via the messenger network).
4 http://sourceforge.net/
5 http://www.aim.com/
6 http://messenger.msn.com/
The messenger, more exactly the contact list (i.e. a list of profiles
of all people registered with the instant messenger, which is
visualized by listing their names as it can be seen in Figure 5), is
also used to display locations of other members of the groups a
user belongs to. This provides location awareness without taking
too much space or requesting the user's full attention. A more
comprehensive description of these features is given in chapter
4.4.
4.3 Sources of Context Information
While the location-context of a user is obtained from our location
sensing subsystem described in chapter 3, we consider further
types of context than location relevant for the support of group
interaction, too.
Local time as a very important context dimension can be easily
retrieved from the real-time clock of the user's system. Besides
location and time, we also use context information about the user's
activity and identity, where we exploit the functionality provided
by the underlying instant messenger system. Identity (or more
exactly, the mapping of IDs to names as well as additional
information from the user's profile) can be distilled out of the
contents of the user's contact list.
Information about the activity of a certain user is only available in
a very restricted area, namely the activity at the computer itself.
Other activities, like making a phone call or something similar,
cannot be recognized with the current implementation of the
activity sensor. The only context information used is the instant
messenger's availability state, thus only providing a very coarse
classification of the user's activity (online, offline, away, busy
etc.). Although this may not seem to be very much information, it
is surely relevant and can be used to improve or even enable
several services.
Having collected the context information from all available users,
it is now possible to distil some information about the context of a
certain group. Information about the context of a group includes
how many members the group currently has, if the group meets
right now, which members are participating at a meeting, how
many members have read which of the available posts from other
team members and so on.
Therefore, some additional information like a list of members for
each group is needed. These lists can be assembled manually (by
users joining and leaving groups) or retrieved automatically. The
context of a group is secondary context and is aggregated from
the available contexts of the group members. Every time the
context of a single group member changes, the context of the
whole group is changing and has to be recalculated.
With knowledge about a user's context and the context of the
groups he belongs to, we can provide several context-aware
services to the user, which enhance his interaction abilities. A
brief description of these services is given in chapter 4.4.
4.4 Group Interaction Support
4.4.1 Visualisation of Location Information
An important feature is the visualisation of location information,
thus allowing users to be aware of the location of other users and
of members of the groups they have joined, respectively.
As already described in chapter 2, we use two different forms of
visualisation. Perhaps the more important one is to display
location information in the contact list of the instant messenger,
right beside the name, thus being always visible while not
drawing the user's attention to it (compared with a
two-dimensional view, for example, which requires a window of its own for
displaying a map of the environment).
Due to the restricted space in the contact list, it has been
necessary to implement some sort of level-of-detail concept. As
we use a hierarchical location model, we are able to determine the
most accurate common location of two users. In the contact list,
the current symbolic location one level below the previously
calculated common location is then displayed. If, for example,
user A currently resides in room P121 on the first floor of a
building and user B, who has to be displayed in the contact list
of user A, is in room P304 on the third floor, the most accurate
common location of these two users is the building they are in.
For that reason, the floor (i.e. one level below the common
location, namely the building) of user B is displayed in the
contact list of user A. If both people reside on the same floor or
even in the same room, the room would be taken.
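Assuming the '@'-separated fully qualified position format shown further below in this section (segments ordered from the most accurate to the most general location), this level-of-detail rule could be sketched in Java as follows; the method is illustrative only.

// Illustrative level-of-detail rule: given the fully qualified positions of the
// viewing user and of a contact (e.g. "desk@room2@department1@1stfloor@building1@campus"),
// return the contact's location one level below the most accurate common location.
class LocationLabelSketch {
    static String displayedLocation(String viewer, String contact) {
        String[] v = viewer.split("@");
        String[] c = contact.split("@");
        int common = 0;                       // matching segments, counted from the coarse end
        while (common < v.length && common < c.length
                && v[v.length - 1 - common].equals(c[c.length - 1 - common])) {
            common++;
        }
        if (common == c.length) return c[0];  // same most accurate location: take the finest one
        return c[c.length - 1 - common];      // one level below the common location
    }
}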
Figure 5 shows a screenshot of the Simple Instant Messenger3,
where the current location of those people, whose location is
known by GISS, is displayed in brackets right beside their name.
On top of the image, the heightened, integrated GISS-toolbar is
shown, which currently contains the following implemented
functionality (from left to right): asynchronous communication
for groups (see chapter 4.4.4), context-aware reminders (see
chapter 4.4.6), two-dimensional visualisation of
location information, forming and managing groups (see chapter 4.4.2),
context-aware availability-management (see chapter 4.4.5) and
finally a button for terminating GISS.
Figure 5. GISS integration in Simple Instant Messenger3
As displaying just this short form of location may not be enough
for the user, because he may want to see the most accurate
position available, a fully qualified position is shown if a name
in the contact-list is clicked (e.g. in the form of
desk@room2@department1@1stfloor@building1@campus).
The second possible form of visualisation is a graphical one. We
have evaluated a three-dimensional view, which was based on a
VRML model of the respective area (cf. Figure 6). Due to
navigational and usability shortcomings, we decided to use a
two-dimensional view of the floor (it is referred to as level in the
location hierarchy, cf. Figure 4). Other levels of granularity like
section (e.g. building) and region (e.g. campus) are also provided.
In this floor-plan-based view, the current locations are shown in
the manner of ICQ2 contacts, which are placed at the currently
sensed location of the respective person. The availability-status of
a user, for example away if he is not at the computer right now,
or busy if he does not want to be disturbed, is visualized by
colour-coding the ICQ2-flower left beside the name. Furthermore,
the floor-plan-view shows the so-called virtual post-its, which are
virtual counterparts of real-life post-its and serve as our means of
asynchronous communication (more about virtual post-its can be
found in chapter 4.4.4).
Figure 6. 3D-view of the floor (VRML)
Figure 7 shows the two-dimensional map of a certain floor, where
several users are currently located (visualized by their name and
the flower left beside). The location of the client, on which the
map is displayed, is visualized by a green circle. At the bottom
right, two virtual post-its can be seen.
Figure 7. 2D view of the floor
Another feature of the 2D-view is the visualisation of the
location history of users. As we store the complete history of a user's
locations together with timestamps, we are able to provide
information about the locations he has visited in the past. When
the mouse is moved over the name of a certain user in the
2D-view, footprints of that user are placed at the locations he has visited
and are faded out more strongly the older the location information is.
4.4.2 Forming and Managing Groups
To support interaction in groups, it is first necessary to form
groups. As groups can have different purposes, we distinguish two
types of groups.
So-called static groups are groups that are built up manually
by people joining and leaving them. Static groups can be further
divided into two subtypes. In open static groups, everybody can
join and leave anytime, useful for example to form a group of
lecture attendees or some sort of interest group. Closed static
groups have an owner, who decides which persons are allowed to
join, although everybody can leave again at any time. Closed
groups enable users for example to create a group of their friends,
thus being able to communicate with them easily.
In contrast to that, we also support the creation of dynamic
groups. They are formed among persons, who are at the same
location at the same time. The creation of dynamic groups is only
performed at locations, where it makes sense to form groups, for
example in lecture halls or meeting rooms, but not in corridors or
outdoors. It would also not be very meaningful to form a group
only of the people residing in the left front sector of a hall;
instead, the complete hall should be considered. For these
reasons, all the defined locations in the hierarchy are tagged as to
whether they allow the formation of groups or not. Dynamic
groups are not only formed at the granularity of rooms, but also on
higher levels in the hierarchy, for example with the people
currently residing in the area of a department.
As the members of dynamic groups constantly change, it is
possible to create an open static group out of them.
4.4.3 Synchronous Communication for Groups
The most important form of synchronous communication on
computers today is instant messaging; some people even see
instant messaging to be the real killer application on the Internet.
This has also motivated the decision to build GISS upon an
instant messaging system.
In today"s messenger systems, peer-to-peer-communication is
extensively supported. However, when it comes to
communication in groups, the support is rather poor most of the
time. Often, only sending a message to multiple recipients is
supported, lacking means to take into account the current state of
the recipients. Furthermore, groups can only be formed of
members in one's contact list, so it is not possible to send
messages to a group where not all of its members are known
(which may be the case in settings where the participants of a
lecture form a group).
Our approach does not have the mentioned restrictions. We
introduce group entries in the user's contact list, enabling him or
her to send messages to this group easily, without knowing who
exactly is currently a member of this group. Furthermore, group
messages are only delivered to persons, who are currently not
busy, thus preventing a disturbance by a message, which is
possibly unimportant for the user.
These features cannot be carried out in the messenger network
itself, so whenever a message to a group account is sent, we
intercept it and route it through our system to all the recipients,
which are available at a certain time. Communication via a group
account is also stored centrally, enabling people to query missed
messages or simply viewing the message history.
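The routing behaviour just described could be sketched as follows;
the types, status values and the central archive are illustrative
assumptions of ours, not the actual GISS implementation.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a message sent to a group account is intercepted, stored
// centrally, and delivered only to members who are currently not busy.
public class GroupMessageRouter {

    enum Status { AVAILABLE, AWAY, BUSY }

    static class Member {
        final String name;
        Status status;
        Member(String name, Status status) { this.name = name; this.status = status; }
    }

    final List<String> archive = new ArrayList<>();   // central history, so missed messages can be queried

    void routeToGroup(List<Member> groupMembers, String sender, String text) {
        archive.add(sender + ": " + text);            // always stored
        for (Member m : groupMembers) {
            if (m.status != Status.BUSY) {            // do not disturb busy members
                deliver(m, sender, text);
            }
        }
    }

    void deliver(Member m, String sender, String text) {
        System.out.println("to " + m.name + " <- " + sender + ": " + text);
    }
}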
4.4.4 Asynchronous Communication for Groups
Asynchronous communication in groups is not a new idea. The
goal of this approach is not to reinvent the wheel, as email is
maybe the most widely used form of asynchronous
communication on computers and is broadly accepted and
standardized. In our work, we aim at the combination of
asynchronous communication with location awareness.
For this reason, we introduce the concept of so-called virtual
post-its (cp. [13]), which are messages that are bound to physical
locations. These virtual post-its could be either visible for all
users that are passing by or they can be restricted to be visible for
certain groups of people only. Moreover, a virtual post-it can also
have an expiry date after which it is dropped and not displayed
anymore. Virtual post-its can also be commented on by others, thus
providing some form of forum-like interaction, where each post-it
forms a thread.
Virtual post-its are displayed automatically whenever an available
user passes by for the first time. Afterwards, post-its can be
accessed via the 2D-viewer, where all visible post-its are shown.
All readers of a post-it are logged and displayed when viewing it,
providing some sort of awareness about the group members'
activities in the past.
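A minimal sketch of a virtual post-it along the lines described in
this chapter is given below; the field names and the visibility
logic are our assumptions for illustration only.

import java.util.ArrayList;
import java.util.Date;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of a virtual post-it: bound to a location, optionally restricted
// to a group, with an expiry date, a comment thread and a reader log.
public class VirtualPostIt {
    final String location;            // symbolic location the note is bound to
    final String text;
    final Set<String> visibleTo;      // empty set means visible to everybody
    final Date expires;               // may be null (never expires)
    final List<String> comments = new ArrayList<>();   // forum-like thread of comments
    final Set<String> readers = new HashSet<>();       // who has already read the note

    VirtualPostIt(String location, String text, Set<String> visibleTo, Date expires) {
        this.location = location;
        this.text = text;
        this.visibleTo = visibleTo;
        this.expires = expires;
    }

    // A note is shown to a passing user if it has not expired and the user may see it.
    boolean isVisible(String user, Date now) {
        if (expires != null && now.after(expires)) return false;
        return visibleTo.isEmpty() || visibleTo.contains(user);
    }

    void show(String user, Date now) {
        if (isVisible(user, now) && readers.add(user)) {   // display only on the first pass
            System.out.println(user + " reads post-it at " + location + ": " + text);
        }
    }
}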
4.4.5 Context-aware Availability Management
Instant messengers in general provide some kind of availability
information about a user. Although this information can be only
defined in a very coarse granularity, we have decided to use these
means of gathering activity context, because the introduction of
an additional one would strongly decrease the usability of the
system.
To support the user managing his availability, we provide an
interface that lets the user define rules to adapt his availability to
the current context. These rules follow the form on event (E) if
condition (C) then action (A), which is directly supported by the
ECA-rules of the Context Framework described in chapter 1.3.
The testing of conditions is periodically triggered by throwing
events (whenever the context of a user changes). The condition
itself is defined by the user, who can demand the change of his
availability status as the action in the rule. As a condition, the user
can define his location, a certain time (also triggering daily, every
week or every month) or any logical combination of these criteria.
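A rule of the form described above might be represented as in the
following sketch; the Context class, the condition predicate and
the status values are illustrative assumptions and do not reflect
the actual Context Framework API.

import java.time.LocalTime;
import java.util.function.Predicate;

// Hypothetical sketch of an availability rule in the "on event (E) if condition (C)
// then action (A)" form: the event is a context change, the condition is a logical
// combination of location and time criteria, the action sets the availability status.
public class AvailabilityRule {

    static class Context {
        String location;
        LocalTime time;
        Context(String location, LocalTime time) { this.location = location; this.time = time; }
    }

    final Predicate<Context> condition;   // user-defined combination of criteria
    final String targetStatus;            // availability status to set as the action

    AvailabilityRule(Predicate<Context> condition, String targetStatus) {
        this.condition = condition;
        this.targetStatus = targetStatus;
    }

    // Triggered whenever the context of the user changes (the event).
    String onContextChanged(Context ctx, String currentStatus) {
        return condition.test(ctx) ? targetStatus : currentStatus;
    }

    public static void main(String[] args) {
        // "If I am in the meeting room between 9:00 and 11:00, set my status to busy."
        AvailabilityRule rule = new AvailabilityRule(
                c -> c.location.equals("meetingroom1")
                        && !c.time.isBefore(LocalTime.of(9, 0))
                        && c.time.isBefore(LocalTime.of(11, 0)),
                "busy");
        System.out.println(rule.onContextChanged(
                new Context("meetingroom1", LocalTime.of(9, 30)), "available"));   // -> busy
    }
}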
4.4.6 Context-Aware Reminders
Reminders [14] are used to give the user the opportunity of
defining tasks and being reminded of those, when certain criteria
are fulfilled. Thus, a reminder can be seen as a post-it to oneself,
which is only visible in certain cases. Reminders can be bound to
a certain place or time, but also to spatial proximity of users or
groups. These criteria can be combined with Boolean operators,
thus providing a powerful means to remind the user of tasks that
he wants to carry out when a certain context occurs.
A reminder will only pop up the first time the actual context
meets the defined criterion. On showing up the reminder, the user
has the chance to resubmit it to be reminded again, for example
five minutes later or the next time a certain user is in spatial
proximity.
4.4.7 Context-Aware Recognition and Notification of
Group Meetings
With the available context information, we try to recognize
meetings of a group. The determination of the criteria, when the
system recognizes a group having a meeting, is part of the
ongoing work. In a first approach, we use the location- and
activity-context of the group members to determine a meeting.
Whenever more than 50 % of the members of a group are
available at a location, where a meeting is considered to make
sense (e.g. not on a corridor), a meeting minutes post-it is created
at this location and all absent group members are notified of the
meeting and the location where it takes place.
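The first-approach criterion described above (more than 50 % of the
group members at a location where meetings make sense) can be
sketched as follows; all names are hypothetical.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the meeting-recognition rule: a meeting is assumed when more
// than half of a group's members are at the same location and that location is tagged
// as allowing group formation (e.g. a lecture hall, but not a corridor).
public class MeetingDetector {

    // Returns the meeting location, or null if no meeting is recognized.
    static String detectMeeting(List<String> groupMembers,
                                Map<String, String> currentLocationOf,
                                Set<String> meetingCapableLocations) {
        Map<String, Integer> count = new HashMap<>();
        for (String member : groupMembers) {
            String loc = currentLocationOf.get(member);
            if (loc != null && meetingCapableLocations.contains(loc)) {
                count.merge(loc, 1, Integer::sum);
            }
        }
        for (Map.Entry<String, Integer> e : count.entrySet()) {
            if (e.getValue() * 2 > groupMembers.size()) {   // strictly more than 50 %
                return e.getKey();                          // here the meeting-minutes post-it would be created
            }
        }
        return null;
    }
}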
During the meeting, the comment-feature of virtual post-its
provides a means to take notes for all of the participants. When
members are joining or leaving the meeting, this is automatically
added as a note to the list of comments.
Like the recognition of the beginning of a meeting, the
recognition of its end is still part of ongoing work. If the end of
the meeting is recognized, all group members get the complete list
of comments as a meeting protocol at the end of the meeting.
5. CONCLUSIONS
This paper discussed the potentials of support for group
interaction by using context information. First, we introduced the
notions of context and context computing and motivated their
value for supporting group interaction.
An architecture is presented to support context-aware group
interaction in mobile, distributed environments. It is built upon a
flexible and extensible framework, thus enabling an easy adoption
to available context sources (e.g. by adding additional sensors) as
well as the required form of representation.
We have prototypically developed a set of services, which
enhance group interaction by taking into account the current
context of the users as well as the context of the groups themselves.
Important features are the dynamic formation of groups, the
visualization of location on a two-dimensional map as well as its
unobtrusive integration into an instant messenger, asynchronous
communication by virtual post-its, which are bound to certain
locations, and a context-aware availability management, which
adapts the availability status of a user to his current situation.
To provide location information, we have implemented a
subsystem for automated acquisition of location and proximity
information provided by various sensors, which provides a
technology-independent presentation of locations and spatial
proximities between users and merges this information using
sensor-independent fusion algorithms. A history of locations as
well as of spatial proximities is stored in a database, thus enabling
context history-based services.
6. REFERENCES
[1] Beer, W., Christian, V., Ferscha, A., Mehrmann, L.
Modeling Context-aware Behavior by Interpreted ECA
Rules. In Proceedings of the International Conference on
Parallel and Distributed Computing (EUROPAR"03).
(Klagenfurt, Austria, August 26-29, 2003). Springer Verlag,
LNCS 2790, 1064-1073.
[2] Brown, P.J., Bovey, J.D., Chen X. Context-Aware
Applications: From the Laboratory to the Marketplace.
IEEE Personal Communications, 4(5) (1997), 58-64.
[3] Chen, H., Kotz, D. A Survey of Context-Aware Mobile
Computing Research. Technical Report TR2000-381,
Computer Science Department, Dartmouth College,
Hanover, New Hampshire, November 2000.
[4] Dey, A. Providing Architectural Support for Building
Context-Aware Applications. Ph.D. Thesis, Department of
Computer Science, Georgia Institute of Technology,
Atlanta, November 2000.
[5] Svetlana Domnitcheva. Location Modeling: State of the Art
and Challenges. In Proceedings of the Workshop on
Location Modeling for Ubiquitous Computing. (Atlanta,
Georgia, United States, September 30, 2001). 13-19.
[6] Ferscha, A. Workspace Awareness in Mobile Virtual Teams.
In Proceedings of the IEEE 9th
International Workshop on
Enabling Technologies: Infrastructure for Collaborative
Enterprises (WETICE"00). (Gaithersburg, Maryland, March
14-16, 2000). IEEE Computer Society Press, 272-277.
[7] Ferscha, A. Coordination in Pervasive Computing
Environments. In Proceedings of the Twelfth International
IEEE Workshop on Enabling Technologies: Infrastructure
for Collaborative Enterprises (WETICE-2003). (June 9-11,
2003). IEEE Computer Society Press, 3-9.
[8] Leonhard, U. Supporting Location Awareness in Open
Distributed Systems. Ph.D. Thesis, Department of
Computing, Imperial College, London, May 1998.
[9] Ryan, N., Pascoe, J., Morse, D. Enhanced Reality
Fieldwork: the Context-Aware Archaeological Assistant.
Gaffney, V., Van Leusen, M., Exxon, S. (eds.) Computer
Applications in Archaeology (1997)
[10] Schilit, B.N., Theimer, M. Disseminating Active Map
Information to Mobile Hosts. IEEE Network, 8(5) (1994),
22-32.
[11] Schilit, B.N. A System Architecture for Context-Aware
Mobile Computing. Ph.D. Thesis, Columbia University,
Department of Computer Science, May 1995.
[12] Wang, B., Bodily, J., Gupta, S.K.S. Supporting Persistent
Social Groups in Ubiquitous Computing Environments
Using Context-Aware Ephemeral Group Service. In
Proceedings of the Second IEEE International Conference
on Pervasive Computing and Communications
(PerCom"04). (March 14-17, 2004). IEEE Computer Society
Press, 287-296.
[13] Pascoe, J. The Stick-e Note Architecture: Extending the
Interface Beyond the User. Proceedings of the 2nd
International Conference of Intelligent User Interfaces
(IUI"97). (Orlando, USA, 1997), 261-264.
[14] Dey, A., Abowd, G. CybreMinder: A Context-Aware System
for Supporting Reminders. Proceedings of the 2nd
International Symposium on Handheld and Ubiquitous
Computing (HUC"00). (Bristol, UK, 2000), 172-186.
| software framework;xml configuration file;context awareness;location sense;event-condition-action;fifth contextdimension group-context;contextaware;group interaction;sensor fusion;mobility system |
train_C-56 | A Hierarchical Process Execution Support for Grid Computing | Grid is an emerging infrastructure used to share resources among virtual organizations in a seamless manner and to provide breakthrough computing power at low cost. Nowadays there are dozens of academic and commercial products that allow execution of isolated tasks on grids, but few products support the enactment of long-running processes in a distributed fashion. In order to address such subject, this paper presents a programming model and an infrastructure that hierarchically schedules process activities using available nodes in a wide grid environment. Their advantages are automatic and structured distribution of activities and easy process monitoring and steering. | 1. INTRODUCTION
Grid computing is a model for wide-area distributed and
parallel computing across heterogeneous networks in
multiple administrative domains. This research field aims to
promote sharing of resources and provides breakthrough
computing power over this wide network of virtual
organizations in a seamless manner [8]. Traditionally, as in Globus
[6], Condor-G [9] and Legion [10], there is a minimal
infrastructure that provides data resource sharing, computational
resource utilization management, and distributed execution.
Specifically, considering distributed execution, most of the
existing grid infrastructures supports execution of isolated
tasks, but they do not consider their task interdependencies
as in processes (workflows) [12]. This deficiency restricts
better scheduling algorithms, distributed execution
coordination and automatic execution recovery.
There are few proposed middleware infrastructures that
support process execution over the grid. In general, they
model processes by interconnecting their activities through
control and data dependencies. Among them, WebFlow
[1] emphasizes an architecture to construct distributed
processes; Opera-G [3] provides execution recovery and
steering; GridFlow [5] focuses on improved scheduling algorithms
that take advantage of activity dependencies, and SwinDew
[13] supports totally distributed execution on peer-to-peer
networks. However, such infrastructures contain
scheduling algorithms that are centralized by process [1, 3, 5], or
completely distributed, but difficult to monitor and control
[13].
In order to address such constraints, this paper proposes a
structured programming model for process description and a
hierarchical process execution infrastructure. The
programming model employs structured control flow to promote
controlled and contextualized activity execution.
Complementary, the support infrastructure, which executes a process
specification, takes advantage of the hierarchical structure
of a specified process in order to distribute and schedule
strong dependent activities as a unit, allowing a better
execution performance and fault-tolerance and providing
localized communication.
The programming model and the support infrastructure,
named Xavantes, are under implementation in order to show
the feasibility of the proposed model and to demonstrate its
two major advantages: to promote widely distributed
process execution and scheduling, but in a controlled,
structured and localized way.
Next Section describes the programming model, and
Section 3, the support infrastructure for the proposed grid
computing model. Section 4 demonstrates how the support
infrastructure executes processes and distributes activities.
Related works are presented and compared to the proposed
model in Section 5. The last Section concludes this paper
encompassing the advantages of the proposed hierarchical
process execution support for the grid computing area and
lists some future works.
Figure 1: High-level framework of the programming model
2. PROGRAMMING MODEL
The programming model designed for the grid computing
architecture is very similar to the one specified by the Business
Process Execution Language (BPEL) [2]. Both describe
processes in XML [4] documents, but the former specifies
strictly synchronous and structured processes, and has more
constructs for structured parallel control. The rationale
behind its design is the possibility of hierarchically distributing
the process control and coordination based on structured
constructs, differently from BPEL, which does not allow
hierarchical composition of processes.
In the proposed programming model, a process is a set of
interdependent activities arranged to solve a certain
problem. In detail, a process is composed of activities,
subprocesses, and controllers (see Figure 1). Activities represent
simple tasks that are executed on behalf of a process;
subprocesses are processes executed in the context of a
parent process; and controllers are control elements used to
specify the execution order of these activities and
subprocesses. Like structured languages, controllers can be nested
and then determine the execution order of other controllers.
Data are exchanged among process elements through
parameters. They are passed by value, in case of simple
objects, or by reference, if they are remote objects shared
among elements of the same controller or process. External
data can be accessed through data sources, such as relational
databases or distributed objects.
2.1 Controllers
Controllers are structured control constructs used to
define the control flow of processes. There are sequential and
parallel controllers.
The sequential controller types are: block, switch, for
and while. The block controller is a simple sequential
construct, and the others mimic equivalent structured
programming language constructs. Similarly, the parallel types are:
par, parswitch, parfor and parwhile. They extend the
respective sequential counterparts to allow parallel execution
of process elements.
All parallel controller types fork the execution of one or
more process elements, and then, wait for each execution to
finish. Indeed, they contain a fork and a join of execution.
Aiming to implement a conditional join, all parallel
controller types contain an exit condition, evaluated every time
an element execution finishes, in order to determine
when the controller must end.
The parfor and parwhile are the iterative versions of
the parallel controller types. Both fork executions while
the iteration condition is true. This provides flexibility to
determine, at run-time, the number of process elements to
execute simultaneously.
When compared to workflow languages, the parallel
controller types represent structured versions of the workflow
control constructors, because they can nest other controllers
and also can express fixed and conditional forks and joins,
present in such languages.
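The fork/join behaviour shared by the parallel controller types can
be pictured with the following minimal sketch. Plain Java threads
stand in here for the dispatching of process elements; in the real
infrastructure the elements are sent to process coordinators and
activity managers, as described in Section 3, so this is an
illustration only, not the actual Xavantes controller code.

import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;

// Minimal sketch of the fork/join behaviour of the parallel controllers.
public class ParallelControllerSketch {

    static void runPar(List<Runnable> elements, BooleanSupplier exitCondition)
            throws InterruptedException {
        List<Thread> started = new ArrayList<>();
        for (Runnable element : elements) {       // fork: one execution per process element
            Thread t = new Thread(element);
            started.add(t);
            t.start();
        }
        for (Thread t : started) {                // join: wait for the forked executions
            t.join();
            if (exitCondition.getAsBoolean()) {   // conditional join: the exit condition is
                break;                            // checked each time a joined execution finishes
            }
        }
    }
}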
2.2 Process Example
This section presents an example of a prime number search
application that receives a certain range of integers and
returns a set of primes contained in this range. The whole
computation is made by a process, which uses a parallel
controller to start and dispatch several concurrent activities
of the same type, in order to find prime numbers. The
portion of the XML document that describes the process and
activity types is shown below.
<PROCESS_TYPE NAME="FindPrimes">
<IN_PARAMETER TYPE="int" NAME="min"/>
<IN_PARAMETER TYPE="int" NAME="max"/>
<IN_PARAMETER TYPE="int" NAME="numPrimes"/>
<IN_PARAMETER TYPE="int" NAME="numActs"/>
<BODY>
<PRE_CODE>
setPrimes(new RemoteHashSet());
parfor.setMin(getMin());
parfor.setMax(getMax());
parfor.setNumPrimes(getNumPrimes());
parfor.setNumActs(getNumActs());
parfor.setPrimes(getPrimes());
parfor.setCounterBegin(0);
parfor.setCounterEnd(getNumActs()-1);
</PRE_CODE>
<PARFOR NAME="parfor">
<IN_PARAMETER TYPE="int" NAME="min"/>
<IN_PARAMETER TYPE="int" NAME="max"/>
<IN_PARAMETER TYPE="int" NAME="numPrimes"/>
<IN_PARAMETER TYPE="int" NAME="numActs"/>
<IN_PARAMETER
TYPE="RemoteCollection" NAME="primes"/>
<ITERATE>
<PRE_CODE>
int range=
(getMax()-getMin()+1)/getNumActs();
int minNum = range*getCounter()+getMin();
int maxNum = minNum+range-1;
if (getCounter() == getNumActs()-1)
maxNum = getMax();
findPrimes.setMin(minNum);
findPrimes.setMax(maxNum);
findPrimes.setNumPrimes(getNumPrimes());
findPrimes.setPrimes(getPrimes());
</PRE_CODE>
<ACTIVITY
TYPE="FindPrimes" NAME="findPrimes"/>
</ITERATE>
</PARFOR>
</BODY>
<OUT_PARAMETER
TYPE="RemoteCollection" NAME="primes"/>
</PROCESS_TYPE>
<ACTIVITY_TYPE NAME="FindPrimes">
<IN_PARAMETER TYPE="int" NAME="min"/>
<IN_PARAMETER TYPE="int" NAME="max"/>
<IN_PARAMETER TYPE="int" NAME="numPrimes"/>
<IN_PARAMETER
TYPE="RemoteCollection" NAME="primes"/>
<CODE>
for (int num=getMin(); num<=getMax(); num++) {
// stop, required number of primes was found
if (primes.size() >= getNumPrimes())
break;
boolean prime = true;
for (int i=2; i<num; i++) {
if (num % i == 0) {
prime = false;
break;
}
}
if (prime) {
primes.add(new Integer(num));
}
}
</CODE>
</ACTIVITY_TYPE>
Firstly, a process type that finds prime numbers, named
FindPrimes, is defined. It receives, through its input
parameters, a range of integers in which prime numbers have
to be found, the number of primes to be returned, and the
number of activities to be executed in order to perform this
work. At the end, the found prime numbers are returned as
a collection through its output parameter.
This process contains a PARFOR controller aiming to
execute a determined number of parallel activities. It iterates
from 0 to getNumActs() - 1, which determines the number
of activities, starting a parallel activity in each iteration. In
such case, the controller divides the whole range of numbers
in subranges of the same size, and, in each iteration, starts a
parallel activity that finds prime numbers in a specific
subrange. These activities receive a shared object by reference
in order to store the prime numbers just found and control
if the required number of primes has been reached.
Finally, it is defined the activity type, FindPrimes, used
to find prime numbers in each subrange. It receives, through
its input parameters, the range of numbers in which it has
to find prime numbers, the total number of prime numbers
to be found by the whole process, and, passed by reference,
a collection object to store the found prime numbers.
Between its CODE markers, there is a simple code to find prime
numbers, which iterates over the specified range and
verifies if the current integer is a prime. Additionally, in each
iteration, the code verifies if the required number of primes,
inserted in the primes collection by all concurrent activities,
has been reached, and exits if true.
The advantage of using controllers is that the support
infrastructure can determine the point of execution the
process is in, allowing automatic recovery and monitoring,
and also the capability of instantiating and dispatching
process elements only when there are enough computing
resources available, reducing unnecessary overhead. Besides,
due to its structured nature, they can be easily composed
and the support infrastructure can take advantage of this
in order to distribute hierarchically the nested controllers to
different machines over the grid, allowing enhanced
scalability and fault-tolerance.
Figure 2: Infrastructure architecture
3. SUPPORT INFRASTRUCTURE
The support infrastructure comprises tools for
specification, and services for execution and monitoring of
structured processes in highly distributed, heterogeneous and
autonomous grid environments. It has services to monitor
availability of resources in the grid, to interpret processes
and schedule activities and controllers, and to execute
activities.
3.1 Infrastructure Architecture
The support infrastructure architecture is composed of
groups of machines and data repositories, which preserves
its administrative autonomy. Generally, localized machines
and repositories, such as in local networks or clusters, form
a group. Each machine in a group must have a Java Virtual
Machine (JVM) [11], and a Java Runtime Library, besides
a combination of the following grid support services: group
manager (GM), process coordinator (PC) and activity
manager (AM). This combination determines what kind of group
node it represents: a group server, a process server, or
simply a worker (see Figure 2).
In a group there are one or more group managers, but
only one acts as primary and the others as replicas. They
are responsible for maintaining availability information about group
machines. Moreover, group managers maintain references to
data resources of the group. They use group repositories to
persist and recover the location of nodes and their
availability.
To control process execution, there are one or more
process coordinators per group. They are responsible for
instantiating and executing processes and controllers, selecting
resources, and scheduling and dispatching activities to workers. In order
to persist and recover process execution and data, and also
load process specification, they use group repositories.
Finally, in several group nodes there is an activity
manager. It is responsible for executing activities on the hosting
machine on behalf of the group process coordinators, and for
informing the group managers of the current availability of the
associated machine. Activity managers also have pending activity
queues, containing activities to be executed.
3.2 Inter-group Relationships
In order to model real grid architecture, the infrastructure
must comprise several, potentially all, local networks, like
Internet does. Aiming to satisfy this intent, local groups are
connected to others, directly or indirectly, through their group
managers (see Figure 3).
Figure 3: Inter-group relationships
Each group manager deals with requests of its group
(represented by dashed ellipses), in order to register local
machines and maintain correspondent availability.
Additionally, group managers communicate to group managers of
other groups. Each group manager exports coarse
availability information to group managers of adjacent groups and
also receives requests from other external services to
furnish detailed availability information. In this way, if there
are resources available in external groups, it is possible to
send processes, controllers and activities to these groups in
order to execute them in external process coordinators and
activity managers, respectively.
4. PROCESS EXECUTION
In the proposed grid architecture, a process is specified
in XML, using controllers to determine control flow;
referencing other processes and activities; and passing objects to
their parameters in order to define data flow. After specified,
the process is compiled in a set of classes, which represent
specific process, activity and controller types. At this time,
it can be instantiated and executed by a process coordinator.
4.1 Dynamic Model
To execute a specified process, it must be instantiated by
referencing its type on a process coordinator service of a
specific group. Also, the initial parameters must be passed
to it, and then it can be started.
The process coordinator carries out the process by
executing the process elements included in its body sequentially.
If the element is a process or a controller, the process
coordinator can choose to execute it in the same machine or to
pass it to another process coordinator in a remote machine,
if available. Else, if the element is an activity, it passes to
an activity manager of an available machine.
Process coordinators request the local group manager to
find available machines that contain the required service,
process coordinator or activity manager, in order to
execute a process element. Then, it can return a local
machine, a machine in another group or none, depending on
the availability of such resource in the grid. It returns an
external worker (activity manager machine) if there are no
available workers in the local group; and, it returns an
external process server (process coordinator machine), if there
are no available process servers or workers in the local group.
Obeying this rule, group managers try to find process servers
in the same group of the available workers.
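The selection rule above can be summarised by the following sketch.
All types and method names are hypothetical; the sketch only
reflects the preference order described in the text.

import java.util.List;
import java.util.Optional;

// Hypothetical sketch of the resource-selection rule: activities go to an external worker
// only when no local worker is available, and process elements go to an external process
// server only when the local group has neither available process servers nor workers.
public class GroupManagerSelection {

    static Optional<String> selectActivityManager(List<String> localWorkers,
                                                  List<String> externalWorkers) {
        if (!localWorkers.isEmpty()) return Optional.of(localWorkers.get(0));
        return externalWorkers.stream().findFirst();       // may be empty: the controller waits
    }

    static Optional<String> selectProcessCoordinator(List<String> localProcessServers,
                                                     List<String> localWorkers,
                                                     List<String> externalProcessServers) {
        if (!localProcessServers.isEmpty()) return Optional.of(localProcessServers.get(0));
        if (!localWorkers.isEmpty()) return Optional.empty();   // keep coordination in the local
                                                                // group of the available workers
        return externalProcessServers.stream().findFirst();
    }
}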
Figure 4: FindPrimes process execution
Such a procedure is followed recursively by all process
coordinators that execute subprocesses or controllers of a
process. Therefore, because processes are structured by
nesting process elements, the process execution is automatically
distributed hierarchically through one or more grid groups
according to the availability and locality of computing
resources.
The advantage of this distribution model is wide area
execution, which takes advantage of potentially all grid
resources; and localized communication of process elements,
because strong dependent elements, which are under the
same controller, are placed in the same or near groups.
Besides, it supports easy monitoring and steering, due to its
structured controllers, which maintain state and control over
its inner elements.
4.2 Process Execution Example
Revisiting the example shown in Section 2.2, a process
type is specified to find prime numbers in a certain range of
numbers. In order to solve this problem, it creates a number
of activities using the parfor controller. Each activity, then,
finds primes in a determined part of the range of numbers.
Figure 4 shows an instance of this process type executing
over the proposed infrastructure. A FindPrimes process
instance is created in an available process coordinator (PC),
which begins executing the parfor controller. In each
iteration of this controller, the process coordinator requests
to the group manager (GM) an available activity manager
(AM) in order to execute a new instance of the FindPrimes
activity. If there is any AM available in this group or in an
external one, the process coordinator sends the activity class
and initial parameters to this activity manager and requests
its execution. Else, if no activity manager is available, then
the controller enters in a wait state until an activity manager
is made available, or is created.
In parallel, whenever an activity finishes, its result is sent
back to the process coordinator, which records it in the
parfor controller. Then, the controller waits until all
activities that have been started are finished, and it ends. At
this point, the process coordinator verifies that there is no
other process element to execute and finishes the process.
5. RELATED WORK
There are several academic and commercial products that
promise to support grid computing, aiming to provide
interfaces, protocols and services to leverage the use of widely
distributed resources in heterogeneous and autonomous
networks. Among them, Globus [6], Condor-G [9] and Legion
[10] are widely known. Aiming to standardize interfaces
and services to grid, the Open Grid Services Architecture
(OGSA) [7] has been defined.
The grid architectures generally have services that
manage computing resources and distribute the execution of
independent tasks on available ones. However, emerging
architectures maintain task dependencies and automatically
execute tasks in a correct order. They take advantage of
these dependencies to provide automatic recovery, and
better distribution and scheduling algorithms.
Following such model, WebFlow [1] is a process
specification tool and execution environment constructed over
CORBA that allows graphical composition of activities and
their distributed execution in a grid environment. Opera-G
[3], like WebFlow, uses a process specification language
similar to the data flow diagram and workflow languages, but
furnishes automatic execution recovery and limited steering
of process execution.
The previously referred architectures and others that
enact processes over the grid have a centralized coordination.
In order to surpass this limitation, systems like SwinDew [13]
proposed a widely distributed process execution, in which
each node knows where to execute the next activity or join
activities in a peer-to-peer environment.
In the specific area of activity distribution and scheduling,
emphasized in this work, GridFlow [5] is remarkable. It uses
a two-level scheduling: global and local. In the local level,
it has services that predict computing resource utilization
and activity duration. Based on this information, GridFlow
employs a PERT-like technique that tries to forecast the
activity execution start time and duration in order to better
schedule them to the available resources.
The architecture proposed in this paper, which
encompasses a programming model and an execution support
infrastructure, is widely decentralized, differently from WebFlow
and Opera-G, being more scalable and fault-tolerant. But,
like the latter, it is designed to support execution recovery.
Comparing to SwinDew, the proposed architecture
contains widely distributed process coordinators, which
coordinate processes or parts of them, differently from SwinDew
where each node has a limited view of the process: only the
activity that starts next. This makes it easier to monitor and
control processes.
Finally, the support infrastructure breaks the process and
its subprocesses for grid execution, allowing a group to
request another group for the coordination and execution of
process elements on behalf of the first one. This is
different from GridFlow, which can execute a process in at most
two levels, having the global level as the only responsible to
schedule subprocesses in other groups. This can limit the
overall performance of processes, and make the system less
scalable.
6. CONCLUSION AND FUTURE WORK
Grid computing is an emerging research field that intends
to promote distributed and parallel computing over the wide
area network of heterogeneous and autonomous
administrative domains in a seamless way, similar to what Internet
does to the data sharing. There are several products that
support execution of independent tasks over grid, but only a
few supports the execution of processes with interdependent
tasks.
In order to address such subject, this paper proposes a
programming model and a support infrastructure that
allow the execution of structured processes in a widely
distributed and hierarchical manner. This support
infrastructure provides automatic, structured and recursive
distribution of process elements over groups of available machines;
better resource use, due to its on demand creation of
process elements; easy process monitoring and steering, due to
its structured nature; and localized communication among
strong dependent process elements, which are placed under
the same controller. These features contribute to better
scalability, fault-tolerance and control for processes execution
over the grid. Moreover, it opens doors for better scheduling
algorithms, recovery mechanisms, and also, dynamic
modification schemes.
The next work will be the implementation of a recovery
mechanism that uses the execution and data state of
processes and controllers to recover process execution. After
that, it is desirable to advance the scheduling algorithm to
forecast machine use in the same or other groups and to
foresee start time of process elements, in order to use this
information to pre-allocate resources and, then, obtain a
better process execution performance. Finally, it is
interesting to investigate schemes of dynamic modification of
processes over the grid, in order to evolve and adapt long-term
processes to the continuously changing grid environment.
7. ACKNOWLEDGMENTS
We would like to thank Paulo C. Oliveira, from the State
Treasury Department of Sao Paulo, for his thorough revision
and insightful comments.
8. REFERENCES
[1] E. Akarsu, G. C. Fox, W. Furmanski, and T. Haupt.
WebFlow: High-Level Programming Environment and
Visual Authoring Toolkit for High Performance
Distributed Computing. In Proceedings of
Supercomputing (SC98), 1998.
[2] T. Andrews and F. Curbera. Specification: Business
Process Execution Language for Web Services Version
1.1. IBM DeveloperWorks, 2003. Available at
http://www-106.ibm.com/developerworks/library/wsbpel.
[3] W. Bausch. OPERA-G: A Microkernel for
Computational Grids. PhD thesis, Swiss Federal
Institute of Technology, Zurich, 2004.
[4] T. Bray and J. Paoli. Extensible Markup Language
(XML) 1.0. XML Core WG, W3C, 2004. Available at
http://www.w3.org/TR/2004/REC-xml-20040204.
[5] J. Cao, S. A. Jarvis, S. Saini, and G. R. Nudd.
GridFlow: Workflow Management for Grid
Computing. In Proceedings of the International
Symposium on Cluster Computing and the Grid
(CCGrid 2003), 2003.
[6] I. Foster and C. Kesselman. Globus: A
Metacomputing Infrastructure Toolkit. Intl. J.
Supercomputer Applications, 11(2):115-128, 1997.
[7] I. Foster, C. Kesselman, J. M. Nick, and S. Tuecke.
The Physiology of the Grid: An Open Grid Services
Architecture for Distributed Systems Integration.
Open Grid Service Infrastructure WG, Global Grid
Forum, 2002.
[8] I. Foster, C. Kesselman, and S. Tuecke. The Anatomy
of the Grid: Enabling Scalable Virtual Organization.
The Intl. Journal of High Performance Computing
Applications, 15(3):200-222, 2001.
[9] J. Frey, T. Tannenbaum, M. Livny, I. Foster, and
S. Tuecke. Condor-G: A Computational Management
Agent for Multi-institutional Grids. In Proceedings of
the Tenth Intl. Symposium on High Performance
Distributed Computing (HPDC-10). IEEE, 2001.
[10] A. S. Grimshaw and W. A. Wulf. Legion - A View
from 50,000 Feet. In Proceedings of the Fifth Intl.
Symposium on High Performance Distributed
Computing. IEEE, 1996.
[11] T. Lindholm and F. Yellin. The Java Virtual Machine
Specification. Sun Microsystems, second edition, 1999.
[12] B. R. Schulze and E. R. M. Madeira. Grid Computing
with Active Services. Concurrency and Computation:
Practice and Experience Journal, 5(16):535-542, 2004.
[13] J. Yan, Y. Yang, and G. K. Raikundalia. Enacting
Business Processes in a Decentralised Environment
with P2P-Based Workflow Support. In Proceedings of
the Fourth Intl. Conference on Web-Age Information
Management (WAIM 2003), 2003.
| distributed computing;parallel computing;distribute middleware;parallel execution;grid computing;process support;hierarchical process execution;grid architecture;process execution;distributed system;process description;distributed scheduling;scheduling algorithm;distributed application;distributed process;distributed execution
train_C-57 | Congestion Games with Load-Dependent Failures: Identical Resources | We define a new class of games, congestion games with loaddependent failures (CGLFs), which generalizes the well-known class of congestion games, by incorporating the issue of resource failures into congestion games. In a CGLF, agents share a common set of resources, where each resource has a cost and a probability of failure. Each agent chooses a subset of the resources for the execution of his task, in order to maximize his own utility. The utility of an agent is the difference between his benefit from successful task completion and the sum of the costs over the resources he uses. CGLFs possess two novel features. It is the first model to incorporate failures into congestion settings, which results in a strict generalization of congestion games. In addition, it is the first model to consider load-dependent failures in such framework, where the failure probability of each resource depends on the number of agents selecting this resource. Although, as we show, CGLFs do not admit a potential function, and in general do not have a pure strategy Nash equilibrium, our main theorem proves the existence of a pure strategy Nash equilibrium in every CGLF with identical resources and nondecreasing cost functions. | 1. INTRODUCTION
We study the effects of resource failures in congestion
settings. This study is motivated by a variety of situations
in multi-agent systems with unreliable components, such as
machines, computers etc. We define a model for congestion
games with load-dependent failures (CGLFs) which provides a
simple and natural description of such situations. In this
model, we are given a finite set of identical resources (service
providers) where each element possesses a failure
probability describing the probability of unsuccessful completion of
its assigned tasks as a (nondecreasing) function of its
congestion. There is a fixed number of agents, each having
a task which can be carried out by any of the resources.
For reliability reasons, each agent may decide to assign his
task, simultaneously, to a number of resources. Thus, the
congestion on the resources is not known in advance, but
is strategy-dependent. Each resource is associated with a
cost, which is a (nonnegative) function of the congestion
experienced by this resource. The objective of each agent is to
maximize his own utility, which is the difference between his
benefit from successful task completion and the sum of the
costs over the set of resources he uses. The benefits of the
agents from successful completion of their tasks are allowed
to vary across the agents.
The resource cost function describes the cost suffered by
an agent for selecting that resource, as a function of the
number of agents who have selected it. Thus, it is natural
to assume that these functions are nonnegative. In addition,
in many real-life applications of our model the resource cost
functions have a special structure. In particular, they can
monotonically increase or decrease with the number of the
users, depending on the context. The former case is
motivated by situations where high congestion on a resource
causes longer delay in its assigned tasks execution and as
a result, the cost of utilizing this resource might be higher.
A typical example of such situation is as follows. Assume
we need to deliver an important package. Since there is no
guarantee that a courier will reach the destination in time,
we might send several couriers to deliver the same package.
The time required by each courier to deliver the package
increases with the congestion on his way. In addition, the
payment to a courier is proportional to the time he spends
in delivering the package. Thus, the payment to the courier
increases when the congestion increases. The latter case
(decreasing cost functions) describes situations where a group
of agents using a particular resource have an opportunity to
share its cost among the group"s members, or, the cost of
using a resource decreases with the number of users,
according to some marketing policy.
Our results
We show that CGLFs and, in particular, CGLFs with
nondecreasing cost functions, do not admit a
potential function. Therefore, the CGLF model can not be
reduced to congestion games. Nevertheless, if the
failure probabilities are constant (do not depend on the
congestion) then a potential function is guaranteed to
exist.
We show that CGLFs and, in particular, CGLFs with
decreasing cost functions, do not possess pure
strategy Nash equilibria. However, as we show in our main
result, there exists a pure strategy Nash
equilibrium in any CGLF with nondecreasing cost
functions.
Related work
Our model extends the well-known class of congestion games
[11]. In a congestion game, every agent has to choose from a
finite set of resources, where the utility (or cost) of an agent
from using a particular resource depends on the number of
agents using it, and his total utility (cost) is the sum of
the utilities (costs) obtained from the resources he uses. An
important property of these games is the existence of pure
strategy Nash equilibria. Monderer and Shapley [9]
introduced the notions of potential function and potential game
and proved that the existence of a potential function implies
the existence of a pure strategy Nash equilibrium. They
observed that Rosenthal [11] proved his theorem on
congestion games by constructing a potential function (hence,
every congestion game is a potential game). Moreover, they
showed that every finite potential game is isomorphic to a
congestion game; hence, the classes of finite potential games
and congestion games coincide.
Congestion games have been extensively studied and
generalized. In particular, Leyton-Brown and Tennenholtz [5]
extended the class of congestion games to the class of
localeffect games. In a local-effect game, each agent"s payoff is
effected not only by the number of agents who have chosen
the same resources as he has chosen, but also by the number
of agents who have chosen neighboring resources (in a given
graph structure). Monderer [8] dealt with another type of
generalization of congestion games, in which the resource
cost functions are player-specific (PS-congestion games). He
defined PS-congestion games of type q (q-congestion games),
where q is a positive number, and showed that every game
in strategic form is a q-congestion game for some q.
Player-specific resource cost functions were discussed for the first
time by Milchtaich [6]. He showed that simple and
strategy-symmetric PS-congestion games are not potential games,
but always possess a pure strategy Nash equilibrium.
PS-congestion games were generalized to weighted congestion
games [6] (or, ID-congestion games [7]), in which the
resource cost functions are not only player-specific, but also
depend on the identity of the users of the resource.
Ackermann et al. [1] showed that weighted congestion games
admit pure strategy Nash equilibria if the strategy space of
each player consists of the bases of a matroid on the set of
resources.
Much of the work on congestion games has been inspired
by the fact that every such game has a pure strategy Nash
equilibrium. In particular, Fabrikant et al. [3] studied
the computational complexity of finding pure strategy Nash
equilibria in congestion games. Intensive study has also
been devoted to quantify the inefficiency of equilibria in
congestion games. Koutsoupias and Papadimitriou [4]
proposed the worst-case ratio of the social welfare achieved
by a Nash equilibrium and by a socially optimal strategy
profile (dubbed the price of anarchy) as a measure of the
performance degradation caused by lack of coordination.
Christodoulou and Koutsoupias [2] considered the price of
anarchy of pure equilibria in congestion games with linear
cost functions. Roughgarden and Tardos [12] used this
approach to study the cost of selfish routing in networks with
a continuum of users.
However, the above settings do not take into
consideration the possibility that resources may fail to execute their
assigned tasks. In the computer science context of
congestion games, where the alternatives of concern are machines,
computers, communication lines etc., which are obviously
prone to failures, this issue should not be ignored.
Penn, Polukarov and Tennenholtz were the first to
incorporate the issue of failures into congestion settings [10].
They introduced a class of congestion games with failures
(CGFs) and proved that these games, while not being
isomorphic to congestion games, always possess Nash equilibria
in pure strategies. The CGF-model significantly differs from
ours. In a CGF, the authors considered the delay associated
with successful task completion, where the delay for an agent
is the minimum of the delays of his successful attempts and
the aim of each agent is to minimize his expected delay. In
contrast with the CGF-model, in our model we consider the
total cost of the utilized resources, where each agent wishes
to maximize the difference between his benefit from a
successful task completion and the sum of his costs over the
resources he uses.
The above differences imply that CGFs and CGLFs
possess different properties. In particular, if in our model the
resource failure probabilities were constant and known in
advance, then a potential function would exist. This, however,
does not hold for CGFs; in CGFs, the failure probabilities
are constant but there is no potential function.
Furthermore, the procedures proposed by the authors in [10] for
the construction of a pure strategy Nash equilibrium are
not valid in our model, even in the simple, agent-symmetric
case, where all agents have the same benefit from successful
completion of their tasks.
Our work provides the first model of congestion settings
with resource failures, which considers the sum of
congestion-dependent costs over utilized resources, and therefore, does
not extend the CGF-model, but rather generalizes the classic
model of congestion games. Moreover, it is the first model
to consider load-dependent failures in the above context.
Organization
The rest of the paper is organized as follows. In Section 2
we define our model. In Section 3 we present our results.
In 3.1 we show that CGLFs, in general, do not have pure
strategy Nash equilibria. In 3.2 we focus on CGLFs with
nondecreasing cost functions (nondecreasing CGLFs). We
show that these games do not admit a potential function.
However, in our main result we show the existence of pure
strategy Nash equilibria in nondecreasing CGLFs. Section
4 is devoted to a short discussion. Many of the proofs are
omitted from this conference version of the paper, and will
appear in the full version.
2. THE MODEL
The scenarios considered in this work consist of a finite set
of agents where each agent has a task that can be carried
out by any element of a set of identical resources (service
providers). The agents simultaneously choose a subset of
the resources in order to perform their tasks, and their aim
is to maximize their own expected payoff, as described in
the sequel.
Let N be a set of n agents (n ∈ N), and let M be a set
of m resources (m ∈ N). Agent i ∈ N chooses a
strategy σi ∈ Σi which is a (potentially empty) subset of the
resources. That is, Σi is the power set of the set of
resources: Σi = P(M). Given a subset S ⊆ N of the agents,
the set of strategy combinations of the members of S is
denoted by ΣS = ×i∈SΣi, and the set of strategy
combinations of the complement subset of agents is denoted by
Σ−S (Σ−S = ΣN\S = ×i∈N\S Σi). The set of pure strategy
profiles of all the agents is denoted by Σ (Σ = ΣN ).
Each resource is associated with a cost, c(·), and a
failure probability, f(·), each of which depends on the
number of agents who use this resource. We assume that the
failure probabilities of the resources are independent. Let
σ = (σ1, . . . , σn) ∈ Σ be a pure strategy profile. The
(m-dimensional) congestion vector that corresponds to σ is
hσ
hσ = (hσ_e)e∈M, where hσ_e = |{i ∈ N : e ∈ σi}|. The
function f : {1, . . . , n} → [0, 1) of the congestion
experienced by e. The cost of utilizing resource e is a function
c : {1, . . . , n} → R+ of the congestion experienced by e.
The outcome for agent i ∈ N is denoted by xi ∈ {S, F},
where S and F, respectively, indicate whether the task
execution succeeded or failed. We say that the execution of
agent"s i task succeeds if the task of agent i is successfully
completed by at least one of the resources chosen by him.
The benefit of agent i from his outcome xi is denoted by
Vi(xi), where Vi(S) = vi, a given (nonnegative) value, and
Vi(F) = 0.
The utility of agent i from strategy profile σ and his
outcome xi, ui(σ, xi), is the difference between his benefit from
the outcome (Vi(xi)) and the sum of the costs of the
resources he has used:
ui(σ, xi) = Vi(xi) − Σ_{e∈σi} c(hσ_e).
The expected utility of agent i from strategy profile σ, Ui(σ),
is, therefore:
Ui(σ) = (1 − Π_{e∈σi} f(hσ_e)) vi − Σ_{e∈σi} c(hσ_e),
where 1 − Π_{e∈σi} f(hσ_e) denotes the probability of successful
completion of agent i's task. We use the convention that
Π_{e∈∅} f(hσ_e) = 1. Hence, if agent i chooses an empty set
σi = ∅ (does not assign his task to any resource), then his
expected utility, Ui(∅, σ−i), equals zero.
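For concreteness, the expected utility defined above can be computed
as in the following sketch; the representation of strategies as sets
of resource indices is our own choice and not part of the model.

import java.util.List;
import java.util.Set;
import java.util.function.IntToDoubleFunction;

// Sketch of the expected-utility computation for identical resources: f and c map a
// congestion level to the failure probability and the cost, respectively.
public class CglfUtility {

    // Congestion of resource e under profile sigma: number of agents whose strategy contains e.
    static int congestion(List<Set<Integer>> sigma, int e) {
        int h = 0;
        for (Set<Integer> s : sigma) if (s.contains(e)) h++;
        return h;
    }

    // U_i(sigma) = (1 - prod_{e in sigma_i} f(h_e)) * v_i - sum_{e in sigma_i} c(h_e).
    static double expectedUtility(List<Set<Integer>> sigma, int i, double v,
                                  IntToDoubleFunction f, IntToDoubleFunction c) {
        double failAll = 1.0;   // empty-product convention: stays 1 for an empty strategy
        double cost = 0.0;
        for (int e : sigma.get(i)) {
            int h = congestion(sigma, e);
            failAll *= f.applyAsDouble(h);
            cost += c.applyAsDouble(h);
        }
        return (1.0 - failAll) * v - cost;
    }
}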
3. PURE STRATEGY NASH EQUILIBRIA
IN CGLFS
In this section we present our results on CGLFs. We
investigate the property of the (non-)existence of pure strategy
Nash equilibria in these games. We show that this class of
games does not, in general, possess pure strategy equilibria.
Nevertheless, if the resource cost functions are
nondecreasing then such equilibria are guaranteed to exist, despite the
non-existence of a potential function.
3.1 Decreasing Cost Functions
We start by showing that the class of CGLFs and, in
particular, the subclass of CGLFs with decreasing cost
functions, does not, in general, possess Nash equilibria in pure
strategies.
Consider a CGLF with two agents (N = {1, 2}) and two
resources (M = {e1, e2}). The cost function of each resource
is given by c(x) = 1/x^x, where x ∈ {1, 2}, and the failure
probabilities are f(1) = 0.01 and f(2) = 0.26. The benefits
of the agents from successful task completion are v1 = 1.1
and v2 = 4. Below we present the payoff matrix of the game
(rows correspond to the strategy of agent 1, columns to the
strategy of agent 2).
∅ {e1} {e2} {e1, e2}
∅ U1 = 0 U1 = 0 U1 = 0 U1 = 0
U2 = 0 U2 = 2.96 U2 = 2.96 U2 = 1.9996
{e1} U1 = 0.089 U1 = 0.564 U1 = 0.089 U1 = 0.564
U2 = 0 U2 = 2.71 U2 = 2.96 U2 = 2.7396
{e2} U1 = 0.089 U1 = 0.089 U1 = 0.564 U1 = 0.564
U2 = 0 U2 = 2.96 U2 = 2.71 U2 = 2.7396
{e1, e2} U1 = −0.90011 U1 = −0.15286 U1 = −0.15286 U1 = 0.52564
U2 = 0 U2 = 2.71 U2 = 2.71 U2 = 3.2296
Table 1: Example for non-existence of pure strategy Nash
equilibria in CGLFs.
It can be easily seen that for every pure strategy profile σ
in this game there exist an agent i and a strategy σi ∈ Σi
such that Ui(σ−i, σi) > Ui(σ). That is, every pure strategy
profile in this game is not in equilibrium.
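To make this claim concrete, the following self-contained sketch
enumerates all sixteen pure strategy profiles of the example and
confirms that each of them admits a profitable unilateral deviation.
The encoding of strategies as bitmasks is ours; the numerical data
are taken from the example (c(1) = 1 and c(2) = 0.25 follow from
c(x) = 1/x^x).

// Checks that the two-agent, two-resource example above has no pure strategy Nash
// equilibrium. Strategies are encoded as bitmasks over {e1, e2}.
public class NoPureEquilibriumCheck {

    static final double[] F = {0.0, 0.01, 0.26};   // failure probability by congestion level
    static final double[] C = {0.0, 1.0, 0.25};    // cost by congestion level: c(x) = 1/x^x
    static final double[] V = {1.1, 4.0};          // benefits of agents 1 and 2

    static double utility(int agent, int own, int other) {
        double fail = 1.0, cost = 0.0;
        for (int e = 0; e < 2; e++) {
            if ((own & (1 << e)) != 0) {
                int h = 1 + (((other & (1 << e)) != 0) ? 1 : 0);   // congestion on resource e
                fail *= F[h];
                cost += C[h];
            }
        }
        return (1.0 - fail) * V[agent] - cost;
    }

    public static void main(String[] args) {
        boolean equilibriumFound = false;
        for (int s1 = 0; s1 < 4; s1++) {
            for (int s2 = 0; s2 < 4; s2++) {
                boolean stable = true;
                for (int d = 0; d < 4; d++) {
                    if (utility(0, d, s2) > utility(0, s1, s2)) stable = false;   // agent 1 deviates
                    if (utility(1, d, s1) > utility(1, s2, s1)) stable = false;   // agent 2 deviates
                }
                if (stable) equilibriumFound = true;
            }
        }
        System.out.println("pure equilibrium exists: " + equilibriumFound);   // prints false
    }
}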
However, if the cost functions in a given CGLF do not
decrease in the number of users, then, as we show in the
main result of this paper, a pure strategy Nash equilibrium
is guaranteed to exist.
3.2 Nondecreasing Cost Functions
This section focuses on the subclass of CGLFs with
nondecreasing cost functions (henceforth, nondecreasing CGLFs).
We show that nondecreasing CGLFs do not, in general,
admit a potential function. Therefore, these games are not
congestion games. Nevertheless, we prove that all such games
possess pure strategy Nash equilibria.
3.2.1 The (Non-)Existence of a Potential Function
Recall that Monderer and Shapley [9] introduced the
notions of potential function and potential game, where
potential game is defined to be a game that possesses a potential
function. A potential function is a real-valued function over
the set of pure strategy profiles, with the property that the
gain (or loss) of an agent shifting to another strategy while
the other agents" strategies are kept unchanged, equals to
the corresponding increment of the potential function. The
authors [9] showed that the classes of finite potential games
and congestion games coincide.
Here we show that the class of CGLFs and, in particular,
the subclass of nondecreasing CGLFs, does not admit a
potential function, and therefore is not included in the class of
congestion games. However, for the special case of constant
failure probabilities, a potential function is guaranteed to
exist. To prove these statements we use the following
characterization of potential games [9].
A path in Σ is a sequence τ = (σ^0 → σ^1 → · · ·) such
that for every k ≥ 1 there exists a unique agent, say agent
i, such that σ^k = (σ^{k−1}_{−i}, σi) for some σi ≠ σ^{k−1}_i in Σi. A
finite path τ = (σ^0 → σ^1 → · · · → σ^K) is closed if σ^0 = σ^K.
It is a simple closed path if in addition σ^l ≠ σ^k for every
0 ≤ l ≠ k ≤ K − 1. The length of a simple closed path is
defined to be the number of distinct points in it; that is, the
length of τ = (σ^0 → σ^1 → · · · → σ^K) is K.
Theorem 1. [9] Let G be a game in strategic form with
a vector U = (U1, . . . , Un) of utility functions. For a finite
path τ = (σ^0 → σ^1 → · · · → σ^K), let
U(τ) = Σ_{k=1}^{K} [U_{i_k}(σ^k) − U_{i_k}(σ^{k−1})],
where i_k is the unique deviator at step k. Then,
G is a potential game if and only if U(τ) = 0 for every
simple closed path τ of length 4.
Load-Dependent Failures
Based on Theorem 1, we present the following
counterexample that demonstrates the non-existence of a potential
function in CGLFs.
We consider the following agent-symmetric game G in
which two agents (N = {1, 2}) wish to assign a task to two
resources (M = {e1, e2}). The benefit from a successful task
completion of each agent equals v, and the failure
probability function strictly increases with the congestion. Consider
the simple closed path of length 4 which is formed by
α = (∅, {e2}) , β = ({e1}, {e2}) ,
γ = ({e1}, {e1, e2}) , δ = (∅, {e1, e2}) :
                {e2}                              {e1, e2}
∅               U1 = 0                            U1 = 0
                U2 = (1 − f(1)) v − c(1)          U2 = (1 − f(1)^2) v − 2c(1)
{e1}            U1 = (1 − f(1)) v − c(1)          U1 = (1 − f(2)) v − c(2)
                U2 = (1 − f(1)) v − c(1)          U2 = (1 − f(1)f(2)) v − c(1) − c(2)
Table 2: Example for non-existence of potentials in CGLFs.
Therefore,
U1(α) − U1(β) + U2(β) − U2(γ) + U1(γ) − U1(δ)
+ U2(δ) − U2(α) = v (1 − f(1)) (f(1) − f(2)) ≠ 0,
since f strictly increases with the congestion and hence f(1) ≠ f(2).
Thus, by Theorem 1, nondecreasing CGLFs do not
admit potentials. As a result, they are not congestion games.
However, as presented in the next section, the special case
in which the failure probabilities are constant, always
possesses a potential function.
Constant Failure Probabilities
We show below that CGLFs with constant failure
probabilities always possess a potential function. This follows from
the fact that the expected benefit (revenue) of each agent in
this case does not depend on the choices of the other agents.
In addition, for each agent, the sum of the costs over his
chosen subset of resources, equals the payoff of an agent
choosing the same strategy in the corresponding congestion game.
Assume we are given a game G with constant failure
probabilities. Let τ = (α → β → γ → δ → α) be an arbitrary
simple closed path of length 4. Let i and j denote the active
agents (deviators) in τ and z ∈ Σ−{i,j} be a fixed
strategy profile of the other agents. Let α = (xi, xj, z), β =
(yi, xj, z), γ = (yi, yj, z), δ = (xi, yj, z), where xi, yi ∈ Σi
and xj, yj ∈ Σj. Then,
U(τ) = Ui(xi, xj, z) − Ui(yi, xj, z)
+Uj(yi, xj, z) − Uj(yi, yj, z)
+Ui(yi, yj, z) − Ui(xi, yj, z)
+Uj(xi, yj, z) − Uj(xi, xj, z)
= (1 − f^{|xi|}) vi − ∑_{e∈xi} c(h_e^{(xi,xj,z)}) − · · ·
− (1 − f^{|xj|}) vj + ∑_{e∈xj} c(h_e^{(xi,xj,z)})
= [(1 − f^{|xi|}) vi − · · · − (1 − f^{|xj|}) vj]
− [∑_{e∈xi} c(h_e^{(xi,xj,z)}) − · · · − ∑_{e∈xj} c(h_e^{(xi,xj,z)})].
Notice that (1 − f^{|xi|}) vi − · · · − (1 − f^{|xj|}) vj = 0, as
a sum of a telescope series. The remaining sum equals 0, by
applying Theorem 1 to congestion games, which are known
to possess a potential function. Thus, by Theorem 1, G is a
potential game.
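For concreteness, one explicit potential that is consistent with this argument (it is not written out in this form in the text) combines the separable expected benefits with Rosenthal's congestion-game potential for the costs:

```latex
% One explicit potential for a CGLF with a constant failure probability f:
% the benefit part is separable across agents, and the cost part is the
% standard Rosenthal potential, so a unilateral deviation changes P by
% exactly the deviator's change in utility.
P(\sigma) \;=\; \sum_{i \in N} \bigl(1 - f^{|\sigma_i|}\bigr)\, v_i
\;-\; \sum_{e \in M} \sum_{k=1}^{h^{\sigma}_{e}} c(k).
```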
We note that the above result holds also for the more
general settings with non-identical resources (having
different failure probabilities and cost functions) and general cost
functions (not necessarily monotone and/or nonnegative).
3.2.2 The Existence of a Pure Strategy Nash
Equilibrium
In the previous section, we have shown that CGLFs and,
in particular, nondecreasing CGLFs, do not admit a
potential function, but this fact, in general, does not contradict
the existence of an equilibrium in pure strategies. In this
section, we present and prove the main result of this
paper (Theorem 2) which shows the existence of pure strategy
Nash equilibria in nondecreasing CGLFs.
Theorem 2. Every nondecreasing CGLF possesses a Nash
equilibrium in pure strategies.
The proof of Theorem 2 is based on Lemmas 4, 7 and
8, which are presented in the sequel. We start with some
definitions and observations that are needed for their proofs.
In particular, we present the notions of A-, D- and S-stability
and show that a strategy profile is in equilibrium if and only
if it is A-, D- and S- stable. Furthermore, we prove the
existence of such a profile in any given nondecreasing CGLF.
Definition 3. For any strategy profile σ ∈ Σ and for any
agent i ∈ N, the operation of adding precisely one resource
to his strategy, σi, is called an A-move of i from σ.
Similarly, the operation of dropping a single resource is called a
D-move, and the operation of switching one resource with
another is called an S-move.
Clearly, if agent i deviates from strategy σi to strategy σ′i
by applying a single A-, D- or S-move, then max {|σ′i \ σi|,
|σi \ σ′i|} = 1, and vice versa, if max {|σ′i \ σi|, |σi \ σ′i|} =
1 then σ′i is obtained from σi by applying exactly one such
move. For simplicity of exposition, for any pair of sets A
and B, let µ(A, B) = max {|A \ B|, |B \ A|}.
The following lemma implies that any strategy profile, in
which no agent wishes unilaterally to apply a single A-,
D- or S-move, is a Nash equilibrium. More precisely, we show
that if there exists an agent who benefits from a unilateral
deviation from a given strategy profile, then there exists a
single A-, D- or S-move which is profitable for him as well.
Lemma 4. Given a nondecreasing CGLF, let σ ∈ Σ be a
strategy profile which is not in equilibrium, and let i ∈ N
such that ∃xi ∈ Σi for which Ui(σ−i, xi) > Ui(σ). Then,
there exists yi ∈ Σi such that Ui(σ−i, yi) > Ui(σ) and µ(yi, σi)
= 1.
Therefore, to prove the existence of a pure strategy Nash
equilibrium, it suffices to look for a strategy profile for which
no agent wishes to unilaterally apply an A-, D- or S-move.
Based on the above observation, we define A-, D- and
S-stability as follows.
Definition 5. A strategy profile σ is said to be A-stable
(resp., D-stable, S-stable) if there are no agents with a
profitable A- (resp., D-, S-) move from σ. Similarly, we
define a strategy profile σ to be DS-stable if there are no
agents with a profitable D- or S-move from σ.
The set of all DS-stable strategy profiles is denoted by
Σ0
. Obviously, the profile (∅, . . . , ∅) is DS-stable, so Σ0
is not empty. Our goal is to find a DS-stable profile for
which no profitable A-move exists, implying this profile is
in equilibrium. To describe how we achieve this, we define
the notions of light (heavy) resources and (nearly-) even
strategy profiles, which play a central role in the proof of
our main result.
Definition 6. Given a strategy profile σ, resource e is
called σ-light if h^σ_e ∈ arg min_{e′∈M} h^σ_{e′} and σ-heavy otherwise.
A strategy profile σ with no heavy resources will be termed
even. A strategy profile σ satisfying |h^σ_e − h^σ_{e′}| ≤ 1 for all
e, e′ ∈ M will be termed nearly-even.
Obviously, every even strategy profile is nearly-even. In
addition, in a nearly-even strategy profile, all heavy resources
(if any exist) have the same congestion. We also observe that the
profile (∅, . . . , ∅) is even (and DS-stable), so the subset of
even, DS-stable strategy profiles is not empty.
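The stability notions above are easy to make operational. The sketch below (illustrative only; names and conventions are ours, not the paper's) computes the congestion vector of a profile, classifies resources as light or heavy, and enumerates the single A-, D- and S-moves of one agent — by Lemma 4, checking these moves is enough to certify a pure Nash equilibrium in a nondecreasing CGLF. A full CGLF instance would supply the `utility` callback as in the earlier snippet.

```python
def congestion(profile, resources):
    # h_e = number of agents whose strategy contains resource e.
    return {e: sum(1 for s in profile if e in s) for e in resources}

def light_and_heavy(profile, resources):
    h = congestion(profile, resources)
    h_min = min(h.values())
    light = {e for e in resources if h[e] == h_min}
    return light, set(resources) - light

def single_moves(strategy, resources):
    # All strategies reachable by one A-move (add one resource), one D-move
    # (drop one resource) or one S-move (swap one chosen for one unchosen).
    strategy = frozenset(strategy)
    unchosen = [e for e in resources if e not in strategy]
    moves = [strategy | {e} for e in unchosen]                # A-moves
    moves += [strategy - {e} for e in strategy]               # D-moves
    moves += [(strategy - {d}) | {a}                          # S-moves
              for d in strategy for a in unchosen]
    return moves

def is_pure_nash(profile, resources, utility):
    # By Lemma 4, in a nondecreasing CGLF it suffices to rule out profitable
    # single A-, D- and S-moves for every agent.
    for i, s in enumerate(profile):
        base = utility(i, profile)
        for dev in single_moves(s, resources):
            deviated = tuple(dev if j == i else profile[j]
                             for j in range(len(profile)))
            if utility(i, deviated) > base + 1e-12:
                return False
    return True
```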
Based on the above observations, we define two types of
an A-move that are used in the sequel. Suppose σ ∈ Σ0
is a nearly-even DS-stable strategy profile. For each agent
i ∈ N, let ei ∈ arg min_{e∈M\σi} h^σ_e. That is, ei is a
lightest resource not chosen previously by i. Then, if there
exists any profitable A-move for agent i, then the A-move
with ei is profitable for i as well. This is since if agent i
wishes to unilaterally add a resource, say a ∈ M \ σi, then
Ui(σ−i, σi ∪ {a}) > Ui(σ). Hence,
(1 − ∏_{e∈σi} f(h^σ_e) · f(h^σ_a + 1)) vi − ∑_{e∈σi} c(h^σ_e) − c(h^σ_a + 1)
> (1 − ∏_{e∈σi} f(h^σ_e)) vi − ∑_{e∈σi} c(h^σ_e)
⇒ vi ∏_{e∈σi} f(h^σ_e) > c(h^σ_a + 1) / (1 − f(h^σ_a + 1))
≥ c(h^σ_{ei} + 1) / (1 − f(h^σ_{ei} + 1))
⇒ Ui(σ−i, σi ∪ {ei}) > Ui(σ).
If no agent wishes to change his strategy in this
manner, i.e. Ui(σ) ≥ Ui(σ−i, σi ∪{ei}) for all i ∈ N, then by the
above Ui(σ) ≥ Ui(σ−i, σi ∪ {a}) for all i ∈ N and a ∈ M \ σi.
Hence, σ is A-stable and by Lemma 4, σ is a Nash
equilibrium strategy profile. Otherwise, let N(σ) denote the subset
of all agents for which there exists ei such that a unilateral
addition of ei is profitable. Let a ∈ arg min_{ei : i∈N(σ)} h^σ_{ei}. Let
also i ∈ N(σ) be the agent for which ei = a. If a is σ-light,
then let σ′ = (σ−i, σi ∪ {a}). In this case we say that σ′ is
obtained from σ by a one-step addition of resource a, and a
is called an added resource. If a is σ-heavy then there exists
a σ-light resource b and an agent j such that a ∈ σj and
b ∉ σj. Then let σ′ = (σ−{i,j}, σi ∪ {a}, (σj \ {a}) ∪ {b}).
In this case we say that σ′ is obtained from σ by a two-step
addition of resource b, and b is called an added resource.
We notice that, in both cases, the congestion of each
resource in σ′ is the same as in σ, except for the added
resource, whose congestion in σ′ increased by 1. Thus,
since the added resource is σ-light and σ is nearly-even, σ′
is nearly-even. Then, the following lemma implies the
S-stability of σ′.
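A minimal sketch of the two addition operations just defined, under our own naming (the paper specifies them only in prose). Profiles are tuples of frozensets indexed by agent; the light/heavy classification is computed as in the earlier sketch.

```python
def one_step_addition(profile, i, a):
    # Agent i simply adds the σ-light resource a to its strategy.
    new = list(profile)
    new[i] = frozenset(new[i]) | {a}
    return tuple(new)

def two_step_addition(profile, i, a, j, b):
    # a is σ-heavy and wanted by i; agent j, who uses a but not the σ-light
    # resource b, releases a and takes b instead.  The congestion of every
    # resource except the added resource b stays unchanged.
    assert a in profile[j] and b not in profile[j]
    new = list(profile)
    new[i] = frozenset(new[i]) | {a}
    new[j] = (frozenset(new[j]) - {a}) | {b}
    return tuple(new)
```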
Lemma 7. In a nondecreasing CGLF, every nearly-even
strategy profile is S-stable.
Coupled with Lemma 7, the following lemma shows that
if σ is a nearly-even and DS-stable strategy profile, and σ′ is
obtained from σ by a one- or two-step addition of resource
a, then the only potential cause for a non-DS-stability of σ′
is the existence of an agent k ∈ N with σ′k ≠ σk who wishes
to drop the added resource a.
Lemma 8. Let σ be a nearly-even DS-stable strategy
profile of a given nondecreasing CGLF, and let σ′ be obtained
from σ by a one- or two-step addition of resource a. Then,
there are no profitable D-moves for any agent i ∈ N with
σ′i = σi. For an agent i ∈ N with σ′i ≠ σi, the only possible
profitable D-move (if one exists) is to drop the added resource a.
We are now ready to prove our main result - Theorem
2. Let us briefly describe the idea behind the proof. By
Lemma 4, it suffices to prove the existence of a strategy
profile which is A-, D- and S-stable. We start with the set
of even and DS-stable strategy profiles which is obviously
not empty. In this set, we consider the subset of strategy
profiles with maximum congestion and maximum sum of the
agents" utilities. Assuming on the contrary that every
DSstable profile admits a profitable A-move, we show the
existence of a strategy profile x in the above subset, such that a
(one-step) addition of some resource a to x results in a
DSstable strategy. Then by a finite series of one- or two-step
addition operations we obtain an even, DS-stable strategy
profile with strictly higher congestion on the resources,
contradicting the choice of x. The full proof is presented below.
Proof of Theorem 2: Let Σ^1 ⊆ Σ^0 be the subset of
all even, DS-stable strategy profiles. Observe that since
(∅, . . . , ∅) is an even, DS-stable strategy profile, Σ^1
is not empty, and min_{σ∈Σ^0} |{e ∈ M : e is σ-heavy}| = 0.
Then, Σ^1 could also be defined as
Σ^1 = arg min_{σ∈Σ^0} |{e ∈ M : e is σ-heavy}|,
with h^σ being the common congestion.
Now, let Σ^2 ⊆ Σ^1 be the subset of Σ^1 consisting of all
those profiles with maximum congestion on the resources.
That is,
Σ^2 = arg max_{σ∈Σ^1} h^σ.
Let UN(σ) = ∑_{i∈N} Ui(σ) denote the group utility of the
agents, and let Σ^3 ⊆ Σ^2 be the subset of all profiles in Σ^2
with maximum group utility. That is,
Σ^3 = arg max_{σ∈Σ^2} ∑_{i∈N} Ui(σ) = arg max_{σ∈Σ^2} UN(σ).
Consider first the simple case in which max_{σ∈Σ^1} h^σ = 0.
Obviously, in this case, Σ^1 = Σ^2 = Σ^3 = {x = (∅, . . . , ∅)}.
We show below that by performing a finite series of (one-step)
addition operations on x, we obtain an even, DS-stable
strategy profile y with higher congestion, that is with
h^y > h^x = 0, in contradiction to x ∈ Σ^2. Let z ∈ Σ^0 be
a nearly-even (not necessarily even) DS-stable profile such
that min_{e∈M} h^z_e = 0, and note that the profile x satisfies
the above conditions. Let N(z) be the subset of agents for
which a profitable A-move exists, and let i ∈ N(z).
Obviously, there exists a z-light resource a such that
Ui(z−i, zi ∪ {a}) > Ui(z) (otherwise, arg min_{e∈M} h^z_e ⊆ zi, in
contradiction to min_{e∈M} h^z_e = 0). Consider the strategy profile
z′ = (z−i, zi ∪ {a}) which is obtained from z by a (one-step)
addition of resource a by agent i. Since z is nearly-even and
a is z-light, we can easily see that z′ is nearly-even. Then,
Lemma 7 implies that z′ is S-stable. Since i is the only agent
using resource a in z′, by Lemma 8, no profitable D-moves
are available. Thus, z′ is a DS-stable strategy profile.
Therefore, since the number of resources is finite, there is a finite
series of one-step addition operations on x = (∅, . . . , ∅) that
leads to a strategy profile y ∈ Σ^1 with h^y = 1 > 0 = h^x, in
contradiction to x ∈ Σ^2.
We turn now to consider the other case where max_{σ∈Σ^1} h^σ ≥ 1.
In this case we select from Σ^3 a strategy profile x,
as described below, and use it to contradict our contrary
assumption. Specifically, we show that there exists x ∈ Σ^3
such that for all j ∈ N,
vj f(h^x)^{|xj|−1} ≥ c(h^x + 1) / (1 − f(h^x + 1)).        (1)
Let x′ be a strategy profile which is obtained from x by
a (one-step) addition of some resource a ∈ M by some
agent i ∈ N(x) (note that x′ is nearly-even). Then, (1)
is derived from and essentially equivalent to the inequality
Uj(x′) ≥ Uj(x′−j, xj \ {a}), for all a ∈ xj. That is, after
performing an A-move with a by i, there is no profitable
D-move with a. Then, by Lemmas 7 and 8, x′ is DS-stable.
Following the same lines as above, we construct a procedure
that initializes at x′ and achieves a strategy profile y ∈ Σ^1
with h^y > h^x, in contradiction to x ∈ Σ^2.
Now, let us confirm the existence of x ∈ Σ^3 that
satisfies (1). Let x ∈ Σ^3 and let M(x) be the subset of all
resources for which there exists a profitable (one-step)
addition. First, we show that (1) holds for all j ∈ N such that
xj ∩ M(x) ≠ ∅, that is, for all those agents with one of their
resources being desired by another agent.
Let a ∈ M(x), and let x′ be the strategy profile that is
obtained from x by the (one-step) addition of a by agent i.
Assume on the contrary that there is an agent j with a ∈ xj
such that
vj f(h^x)^{|xj|−1} < c(h^x + 1) / (1 − f(h^x + 1)).
Let x′′ = (x′−j, x′j \ {a}). Below we demonstrate that x′′
is a DS-stable strategy profile and, since x′′ and x
correspond to the same congestion vector, we conclude that x′′
lies in Σ^2. In addition, we show that UN(x′′) > UN(x),
contradicting the fact that x ∈ Σ^3.
To show that x′′ ∈ Σ^0 we note that x′′ is an even strategy
profile, and thus no S-moves may be performed for x′′. In
addition, since h^{x′′} = h^x and x ∈ Σ^0, there are no profitable
D-moves for any agent k ≠ i, j. It remains to show that
there are no profitable D-moves for agents i and j as well.
Since Ui(x′) > Ui(x), we get
vi f(h^x)^{|xi|} > c(h^x + 1) / (1 − f(h^x + 1))
⇒ vi f(h^{x′′})^{|x′′i|−1} = vi f(h^x)^{|xi|} > c(h^x + 1) / (1 − f(h^x + 1))
> c(h^x) / (1 − f(h^x)) = c(h^{x′′}) / (1 − f(h^{x′′})),
which implies Ui(x′′) > Ui(x′′−i, x′′i \ {b}), for all b ∈ x′′i.
Thus, there are no profitable D-moves for agent i. By the
DS-stability of x, for agent j and for all b ∈ xj, we have
Uj(x) ≥ Uj(x−j, xj \ {b}) ⇒ vj f(h^x)^{|xj|−1} ≥ c(h^x) / (1 − f(h^x)).
Then,
vj f(h^{x′′})^{|x′′j|−1} > vj f(h^{x′′})^{|x′′j|} = vj f(h^x)^{|xj|−1}
≥ c(h^x) / (1 − f(h^x)) = c(h^{x′′}) / (1 − f(h^{x′′}))
⇒ Uj(x′′) > Uj(x′′−j, x′′j \ {b}), for all b ∈ x′′j. Therefore, x′′
is DS-stable and lies in Σ^2.
To show that UN(x′′), the group utility of x′′, satisfies
UN(x′′) > UN(x), we note that h^{x′′} = h^x, and thus Uk(x′′) =
Uk(x) for all k ∈ N \ {i, j}. Therefore, we have to show
that Ui(x′′) + Uj(x′′) > Ui(x) + Uj(x), or Ui(x′′) − Ui(x) >
Uj(x) − Uj(x′′). Observe that
Ui(x′) > Ui(x) ⇒ vi f(h^x)^{|xi|} > c(h^x + 1) / (1 − f(h^x + 1))
and
Uj(x′) < Uj(x′′) ⇒ vj f(h^x)^{|xj|−1} < c(h^x + 1) / (1 − f(h^x + 1)),
which yields
vi f(h^x)^{|xi|} > vj f(h^x)^{|xj|−1}.
Thus, Ui(x′′) − Ui(x)
= [(1 − f(h^x)^{|xi|+1}) vi − (|xi| + 1) c(h^x)] − [(1 − f(h^x)^{|xi|}) vi − |xi| c(h^x)]
= vi f(h^x)^{|xi|} (1 − f(h^x)) − c(h^x)
> vj f(h^x)^{|xj|−1} (1 − f(h^x)) − c(h^x)
= [(1 − f(h^x)^{|xj|}) vj − |xj| c(h^x)] − [(1 − f(h^x)^{|xj|−1}) vj − (|xj| − 1) c(h^x)]
= Uj(x) − Uj(x′′).
Therefore, x′′ lies in Σ^2 and satisfies UN(x′′) > UN(x), in
contradiction to x ∈ Σ^3.
Hence, if x ∈ Σ^3 then (1) holds for all j ∈ N such that
xj ∩ M(x) ≠ ∅. Now let us see that there exists x ∈ Σ^3 such
that (1) holds for all the agents. For that, choose an agent
i ∈ arg min_{k∈N} vk f(h^x)^{|xk|}. If there exists a ∈ xi ∩ M(x)
then i satisfies (1), and by the choice of agent i this
obviously yields the correctness of (1) for any agent k ∈ N.
Otherwise, if no resource in xi lies in M(x), then let a ∈ xi
and a′ ∈ M(x). Since a ∈ xi, a′ ∉ xi, and h^x_a = h^x_{a′},
there exists an agent j such that a′ ∈ xj and a ∉ xj. One
can easily check that the strategy profile
x′ = (x−{i,j}, (xi \ {a}) ∪ {a′}, (xj \ {a′}) ∪ {a}) lies
in Σ^3. Thus, x′ satisfies (1) for agent i, and therefore, for
any agent k ∈ N.
Now, let x ∈ Σ^3 satisfy (1). We show below that by
performing a finite series of one- and two-step addition
operations on x, we can achieve a strategy profile y that lies
in Σ^1, such that h^y > h^x, in contradiction to x ∈ Σ^2. Let
z ∈ Σ^0 be a nearly-even (not necessarily even), DS-stable
strategy profile, such that
vi ∏_{e∈zi\{b}} f(h^z_e) ≥ c(h^z_b + 1) / (1 − f(h^z_b + 1)),        (2)
for all i ∈ N and for all z-light resources b ∈ zi. We note that
for the profile x ∈ Σ^3 ⊆ Σ^1, with all resources being x-light,
conditions (2) and (1) are equivalent. Let z′ be obtained
from z by a one- or two-step addition of a z-light resource
a. Obviously, z′ is nearly-even. In addition, h^{z′}_e ≥ h^z_e for
all e ∈ M, and min_{e∈M} h^{z′}_e ≥ min_{e∈M} h^z_e. To complete the
proof we need to show that z′ is DS-stable, and, in addition,
that if min_{e∈M} h^{z′}_e = min_{e∈M} h^z_e then z′ has property (2).
The DS-stability of z′ follows directly from Lemmas 7 and 8,
and from (2) with respect to z. It remains to prove property
(2) for z′ with min_{e∈M} h^{z′}_e = min_{e∈M} h^z_e. Using (2) with
respect to z, for any agent k with z′k = zk and for any
z′-light resource b ∈ z′k, we get
vk ∏_{e∈z′k\{b}} f(h^{z′}_e) ≥ vk ∏_{e∈zk\{b}} f(h^z_e)
≥ c(h^z_b + 1) / (1 − f(h^z_b + 1)) = c(h^{z′}_b + 1) / (1 − f(h^{z′}_b + 1)),
as required. Now let us consider the rest of the agents.
Assume z′ is obtained by the one-step addition of a by agent
i. In this case, i is the only agent with z′i ≠ zi. The required
property for agent i follows directly from Ui(z′) > Ui(z). In
the case of a two-step addition, let z′ = (z−{i,j}, zi ∪ {b},
(zj \ {b}) ∪ {a}), where b is a z-heavy resource. For agent
i, from Ui(z−i, zi ∪ {b}) > Ui(z) we get
(1 − ∏_{e∈zi} f(h^z_e) · f(h^z_b + 1)) vi − ∑_{e∈zi} c(h^z_e) − c(h^z_b + 1)
> (1 − ∏_{e∈zi} f(h^z_e)) vi − ∑_{e∈zi} c(h^z_e)
⇒ vi ∏_{e∈zi} f(h^z_e) > c(h^z_b + 1) / (1 − f(h^z_b + 1)),        (3)
and note that since h^z_b ≥ h^z_e for all e ∈ M and, in
particular, for all z′-light resources, then
c(h^z_b + 1) / (1 − f(h^z_b + 1)) ≥ c(h^{z′}_{e′} + 1) / (1 − f(h^{z′}_{e′} + 1)),        (4)
for any z′-light resource e′.
Now, since h^{z′}_e ≥ h^z_e for all e ∈ M and b is z-heavy, then
vi ∏_{e∈z′i\{e′}} f(h^{z′}_e) ≥ vi ∏_{e∈z′i\{e′}} f(h^z_e)
= vi ∏_{e∈(zi∪{b})\{e′}} f(h^z_e) ≥ vi ∏_{e∈zi} f(h^z_e),
for any z′-light resource e′. The above, coupled with (3)
and (4), yields the required. For agent j we just use (2)
with respect to z and the equality h^z_b = h^{z′}_a. For any z′-light
resource e′,
vj ∏_{e∈z′j\{e′}} f(h^{z′}_e) ≥ vj ∏_{e∈zj\{e′}} f(h^z_e)
≥ c(h^z_{e′} + 1) / (1 − f(h^z_{e′} + 1)) = c(h^{z′}_{e′} + 1) / (1 − f(h^{z′}_{e′} + 1)).
Thus, since the number of resources is finite, there is a finite
series of one- and two-step addition operations on x that
leads to a strategy profile y ∈ Σ^1 with h^y > h^x, in
contradiction to x ∈ Σ^2. This completes the proof.
4. DISCUSSION
In this paper, we introduce and investigate congestion
settings with unreliable resources, in which the probability of a
resource"s failure depends on the congestion experienced by
this resource. We defined a class of congestion games with
load-dependent failures (CGLFs), which generalizes the
wellknown class of congestion games. We study the existence of
pure strategy Nash equilibria and potential functions in the
presented class of games. We show that these games do not,
in general, possess pure strategy equilibria. Nevertheless,
if the resource cost functions are nondecreasing then such
equilibria are guaranteed to exist, despite the non-existence
of a potential function.
The CGLF-model can be modified to the case where the
agents pay only for non-faulty resources they selected. Both
the model discussed in this paper and the modified one are
reasonable. In the full version we will show that the
modified model leads to similar results. In particular, we can
show the existence of a pure strategy equilibrium for
nondecreasing CGLFs also in the modified model.
In future research we plan to consider various extensions
of CGLFs. In particular, we plan to consider CGLFs where
the resources may have different costs and failure
probabilities, as well as CGLFs in which the resource failure
probabilities are mutually dependent. In addition, it is of
interest to develop an efficient algorithm for the computation of
pure strategy Nash equilibrium, as well as discuss the social
(in)efficiency of the equilibria.
5. REFERENCES
[1] H. Ackermann, H. Röglin, and B. Vöcking. Pure Nash
equilibria in player-specific and weighted congestion
games. In WINE-06, 2006.
[2] G. Christodoulou and E. Koutsoupias. The price of
anarchy of finite congestion games. In Proceedings of
the 37th Annual ACM Symposium on Theory of
Computing (STOC-05), 2005.
[3] A. Fabrikant, C. Papadimitriou, and K. Talwar. The
complexity of pure nash equilibria. In STOC-04, pages
604-612, 2004.
[4] E. Koutsoupias and C. Papadimitriou. Worst-case
equilibria. In Proceedings of the 16th Annual
Symposium on Theoretical Aspects of Computer
Science, pages 404-413, 1999.
[5] K. Leyton-Brown and M. Tennenholtz. Local-effect
games. In IJCAI-03, 2003.
[6] I. Milchtaich. Congestion games with player-specific
payoff functions. Games and Economic Behavior,
13:111-124, 1996.
[7] D. Monderer. Solution-based congestion games.
Advances in Mathematical Economics, 8:397-407,
2006.
[8] D. Monderer. Multipotential games. In IJCAI-07,
2007.
[9] D. Monderer and L. Shapley. Potential games. Games
and Economic Behavior, 14:124-143, 1996.
[10] M. Penn, M. Polukarov, and M. Tennenholtz.
Congestion games with failures. In Proceedings of the
6th ACM Conference on Electronic Commerce
(EC-05), pages 259-268, 2005.
[11] R. Rosenthal. A class of games possessing
pure-strategy nash equilibria. International Journal of
Game Theory, 2:65-67, 1973.
[12] T. Roughgarden and E. Tardos. How bad is selfish
routing. Journal of the ACM, 49(2):236-259, 2002.
 | potential function;nash equilibrium;nondecreasing cost function;resource cost function;pure strategy nash equilibrium;load-dependent failure;load-dependent resource failure;identical resource;real-valued function;failure probability;localeffect game;congestion game |
train_C-58 | A Scalable Distributed Information Management System∗ | We present a Scalable Distributed Information Management System (SDIMS) that aggregates information about large-scale networked systems and that can serve as a basic building block for a broad range of large-scale distributed applications by providing detailed views of nearby information and summary views of global information. To serve as a basic building block, a SDIMS should have four properties: scalability to many nodes and attributes, flexibility to accommodate a broad range of applications, administrative isolation for security and availability, and robustness to node and network failures. We design, implement and evaluate a SDIMS that (1) leverages Distributed Hash Tables (DHT) to create scalable aggregation trees, (2) provides flexibility through a simple API that lets applications control propagation of reads and writes, (3) provides administrative isolation through simple extensions to current DHT algorithms, and (4) achieves robustness to node and network reconfigurations through lazy reaggregation, on-demand reaggregation, and tunable spatial replication. Through extensive simulations and micro-benchmark experiments, we observe that our system is an order of magnitude more scalable than existing approaches, achieves isolation properties at the cost of modestly increased read latency in comparison to flat DHTs, and gracefully handles failures. | 1. INTRODUCTION
The goal of this research is to design and build a Scalable
Distributed Information Management System (SDIMS) that aggregates
information about large-scale networked systems and that can serve
as a basic building block for a broad range of large-scale distributed
applications. Monitoring, querying, and reacting to changes in
the state of a distributed system are core components of
applications such as system management [15, 31, 37, 42], service
placement [14, 43], data sharing and caching [18, 29, 32, 35, 46], sensor
monitoring and control [20, 21], multicast tree formation [8, 9, 33,
36, 38], and naming and request routing [10, 11]. We therefore
speculate that a SDIMS in a networked system would provide a
distributed operating systems backbone and facilitate the
development and deployment of new distributed services.
For a large scale information system, hierarchical aggregation
is a fundamental abstraction for scalability. Rather than expose all
information to all nodes, hierarchical aggregation allows a node to
access detailed views of nearby information and summary views of
global information. In a SDIMS based on hierarchical aggregation,
different nodes can therefore receive different answers to the query
find a [nearby] node with at least 1 GB of free memory or find
a [nearby] copy of file foo. A hierarchical system that aggregates
information through reduction trees [21, 38] allows nodes to access
information they care about while maintaining system scalability.
To be used as a basic building block, a SDIMS should have
four properties. First, the system should be scalable: it should
accommodate large numbers of participating nodes, and it should
allow applications to install and monitor large numbers of data
attributes. Enterprise and global scale systems today might have tens
of thousands to millions of nodes and these numbers will increase
over time. Similarly, we hope to support many applications, and
each application may track several attributes (e.g., the load and
free memory of a system"s machines) or millions of attributes (e.g.,
which files are stored on which machines).
Second, the system should have flexibility to accommodate a
broad range of applications and attributes. For example,
readdominated attributes like numCPUs rarely change in value, while
write-dominated attributes like numProcesses change quite often.
An approach tuned for read-dominated attributes will consume high
bandwidth when applied to write-dominated attributes. Conversely,
an approach tuned for write-dominated attributes will suffer from
unnecessary query latency or imprecision for read-dominated
attributes. Therefore, a SDIMS should provide mechanisms to handle
different types of attributes and leave the policy decision of tuning
replication to the applications.
Third, a SDIMS should provide administrative isolation. In a
large system, it is natural to arrange nodes in an organizational or
an administrative hierarchy. A SDIMS should support
administrative isolation in which queries about an administrative domain's
information can be satisfied within the domain so that the system can
operate during disconnections from other domains, so that an
external observer cannot monitor or affect intra-domain queries, and
to support domain-scoped queries efficiently.
Fourth, the system must be robust to node failures and
disconnections. A SDIMS should adapt to reconfigurations in a timely
fashion and should also provide mechanisms so that applications
can tradeoff the cost of adaptation with the consistency level in the
aggregated results when reconfigurations occur.
We draw inspiration from two previous works: Astrolabe [38]
and Distributed Hash Tables (DHTs).
Astrolabe [38] is a robust information management system.
Astrolabe provides the abstraction of a single logical aggregation tree
that mirrors a system"s administrative hierarchy. It provides a
general interface for installing new aggregation functions and provides
eventual consistency on its data. Astrolabe is robust due to its use
of an unstructured gossip protocol for disseminating information
and its strategy of replicating all aggregated attribute values for a
subtree to all nodes in the subtree. This combination allows any
communication pattern to yield eventual consistency and allows
any node to answer any query using local information. This high
degree of replication, however, may limit the system"s ability to
accommodate large numbers of attributes. Also, although the
approach works well for read-dominated attributes, an update at one
node can eventually affect the state at all nodes, which may limit
the system"s flexibility to support write-dominated attributes.
Recent research in peer-to-peer structured networks resulted in
Distributed Hash Tables (DHTs) [18, 28, 29, 32, 35, 46]-a data
structure that scales with the number of nodes and that distributes
the read-write load for different queries among the participating
nodes. It is interesting to note that although these systems export
a global hash table abstraction, many of them internally make use
of what can be viewed as a scalable system of aggregation trees
to, for example, route a request for a given key to the right DHT
node. Indeed, rather than export a general DHT interface, Plaxton
et al."s [28] original application makes use of hierarchical
aggregation to allow nodes to locate nearby copies of objects. It seems
appealing to develop a SDIMS abstraction that exposes this internal
functionality in a general way so that scalable trees for aggregation
can be a basic system building block alongside the DHTs.
At a first glance, it might appear to be obvious that simply
fusing DHTs with Astrolabe"s aggregation abstraction will result in a
SDIMS. However, meeting the SDIMS requirements forces a
design to address four questions: (1) How to scalably map different
attributes to different aggregation trees in a DHT mesh? (2) How to
provide flexibility in the aggregation to accommodate different
application requirements? (3) How to adapt a global, flat DHT mesh
to attain administrative isolation property? and (4) How to provide
robustness without unstructured gossip and total replication?
The key contributions of this paper that form the foundation of
our SDIMS design are as follows.
1. We define a new aggregation abstraction that specifies both
attribute type and attribute name and that associates an
aggregation function with a particular attribute type. This
abstraction paves the way for utilizing the DHT system"s internal
trees for aggregation and for achieving scalability with both
nodes and attributes.
2. We provide a flexible API that lets applications control the
propagation of reads and writes and thus trade off update
cost, read latency, replication, and staleness.
3. We augment an existing DHT algorithm to ensure path
convergence and path locality properties in order to achieve
administrative isolation.
4. We provide robustness to node and network reconfigurations
by (a) providing temporal replication through lazy
reaggregation that guarantees eventual consistency and (b)
ensuring that our flexible API allows demanding applications gain
additional robustness by using tunable spatial replication of
data aggregates or by performing fast on-demand
reaggregation to augment the underlying lazy reaggregation or by
doing both.
We have built a prototype of SDIMS. Through simulations and
micro-benchmark experiments on a number of department machines
and PlanetLab [27] nodes, we observe that the prototype achieves
scalability with respect to both nodes and attributes through use
of its flexible API, inflicts an order of magnitude lower maximum
node stress than unstructured gossiping schemes, achieves isolation
properties at a cost of modestly increased read latency compared to
flat DHTs, and gracefully handles node failures.
This initial study discusses key aspects of an ongoing system
building effort, but it does not address all issues in building a SDIMS.
For example, we believe that our strategies for providing robustness
will mesh well with techniques such as supernodes [22] and other
ongoing efforts to improve DHTs [30] for further improving
robustness. Also, although splitting aggregation among many trees
improves scalability for simple queries, this approach may make
complex and multi-attribute queries more expensive compared to
a single tree. Additional work is needed to understand the
significance of this limitation for real workloads and, if necessary, to
adapt query planning techniques from DHT abstractions [16, 19]
to scalable aggregation tree abstractions.
In Section 2, we explain the hierarchical aggregation
abstraction that SDIMS provides to applications. In Sections 3 and 4, we
describe the design of our system for achieving the flexibility,
scalability, and administrative isolation requirements of a SDIMS. In
Section 5, we detail the implementation of our prototype system.
Section 6 addresses the issue of adaptation to the topological
reconfigurations. In Section 7, we present the evaluation of our
system through large-scale simulations and microbenchmarks on real
networks. Section 8 details the related work, and Section 9
summarizes our contribution.
2. AGGREGATION ABSTRACTION
Aggregation is a natural abstraction for a large-scale distributed
information system because aggregation provides scalability by
allowing a node to view detailed information about the state near it
and progressively coarser-grained summaries about progressively
larger subsets of a system"s data [38].
Our aggregation abstraction is defined across a tree spanning all
nodes in the system. Each physical node in the system is a leaf and
each subtree represents a logical group of nodes. Note that logical
groups can correspond to administrative domains (e.g., department
or university) or groups of nodes within a domain (e.g., 10
workstations on a LAN in CS department). An internal non-leaf node,
which we call virtual node, is simulated by one or more physical
nodes at the leaves of the subtree for which the virtual node is the
root. We describe how to form such trees in a later section.
Each physical node has local data stored as a set of (attributeType,
attributeName, value) tuples such as (configuration, numCPUs,
16), (mcast membership, session foo, yes), or (file stored, foo,
myIPaddress). The system associates an aggregation function ftype
with each attribute type, and for each level-i subtree Ti in the
system, the system defines an aggregate value Vi,type,name for each
(attributeType, attributeName) pair as follows. For a (physical) leaf
node T0 at level 0, V0,type,name is the locally stored value for the
attribute type and name or NULL if no matching tuple exists. Then
the aggregate value for a level-i subtree Ti is the aggregation
function for the type, ftype computed across the aggregate values of
each of Ti"s k children:
Vi,type,name = ftype(V0
i−1,type,name,V1
i−1,type,name,...,Vk−1
i−1,type,name).
Although SDIMS allows arbitrary aggregation functions, it is
often desirable that these functions satisfy the hierarchical
computation property [21]: f(v1, . . . , vn) = f(f(v1, . . . , v_{s1}), f(v_{s1+1}, . . . , v_{s2}),
. . . , f(v_{sk+1}, . . . , vn)), where vi is the value of an attribute at node
i. For example, the average operation, defined as avg(v1, . . . , vn) =
(1/n) · ∑_{i=1}^{n} vi, does not satisfy the property. Instead, if an attribute
stores values as tuples (sum,count), the attribute satisfies the
hierarchical computation property while still allowing the applications
to compute the average from the aggregate sum and count values.
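As a concrete illustration of the (sum, count) trick described above, the sketch below shows an aggregation function that satisfies the hierarchical computation property; the class and function names are ours, not SDIMS's actual interface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SumCount:
    total: float
    count: int

def aggregate_sum_count(children):
    # Hierarchical aggregation: combining partial (sum, count) pairs from any
    # grouping of children yields the same result as combining all leaves
    # directly, so f(f(...), f(...)) = f(...) holds.
    return SumCount(total=sum(c.total for c in children),
                    count=sum(c.count for c in children))

def average(agg):
    # The application derives the average from the aggregate at read time.
    return agg.total / agg.count if agg.count else None

# Leaves report (value, 1); an internal node aggregates whatever subset of
# children it has, and the root's aggregate gives the global average.
leaves = [SumCount(1.0, 1), SumCount(2.0, 1), SumCount(3.0, 1)]
left, right = aggregate_sum_count(leaves[:2]), aggregate_sum_count(leaves[2:])
assert average(aggregate_sum_count([left, right])) == average(aggregate_sum_count(leaves)) == 2.0
```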
Finally, note that for a large-scale system, it is difficult or
impossible to insist that the aggregation value returned by a probe
corresponds to the function computed over the current values at the
leaves at the instant of the probe. Therefore our system provides
only weak consistency guarantees - specifically eventual
consistency as defined in [38].
3. FLEXIBILITY
A major innovation of our work is enabling flexible aggregate
computation and propagation. The definition of the aggregation
abstraction allows considerable flexibility in how, when, and where
aggregate values are computed and propagated. While previous
systems [15, 29, 38, 32, 35, 46] implement a single static strategy,
we argue that a SDIMS should provide flexible computation and
propagation to efficiently support wide variety of applications with
diverse requirements. In order to provide this flexibility, we
develop a simple interface that decomposes the aggregation
abstraction into three pieces of functionality: install, update, and probe.
This definition of the aggregation abstraction allows our system
to provide a continuous spectrum of strategies ranging from lazy
aggregate computation and propagation on reads to aggressive
immediate computation and propagation on writes. In Figure 1, we
illustrate both extreme strategies and an intermediate strategy.
Under the lazy Update-Local computation and propagation strategy,
an update (or write) only affects local state. Then, a probe (or read)
that reads a level-i aggregate value is sent up the tree to the issuing
node"s level-i ancestor and then down the tree to the leaves. The
system then computes the desired aggregate value at each layer up
the tree until the level-i ancestor that holds the desired value.
Finally, the level-i ancestor sends the result down the tree to the
issuing node. In the other extreme case of the aggressive Update-All
immediate computation and propagation on writes [38], when an
update occurs, changes are aggregated up the tree, and each new
aggregate value is flooded to all of a node"s descendants. In this
case, each level-i node not only maintains the aggregate values for
the level-i subtree but also receives and locally stores copies of all
of its ancestors" level- j ( j > i) aggregation values. Also, a leaf
satisfies a probe for a level-i aggregate using purely local data. In an
intermediate Update-Up strategy, the root of each subtree maintains
the subtree"s current aggregate value, and when an update occurs,
the leaf node updates its local state and passes the update to its
parent, and then each successive enclosing subtree updates its
aggregate value and passes the new value to its parent. This strategy
satisfies a leaf"s probe for a level-i aggregate value by sending the
probe up to the level-i ancestor of the leaf and then sending the
aggregate value down to the leaf. Finally, notice that other strategies
exist. In general, an Update-Upk-Downj strategy aggregates up to
parameter   description                                                 optional
attrType    Attribute Type
aggrfunc    Aggregation Function
up          How far upward each update is sent (default: all)           X
down        How far downward each aggregate is sent (default: none)     X
domain      Domain restriction (default: none)                          X
expTime     Expiry Time
Table 1: Arguments for the install operation
the kth level and propagates the aggregate values of a node at level
l (s.t. l ≤ k) downward for j levels.
A SDIMS must provide a wide range of flexible computation and
propagation strategies to applications for it to be a general
abstraction. An application should be able to choose a particular
mechanism based on its read-to-write ratio that reduces the bandwidth
consumption while attaining the required responsiveness and
precision. Note that the read-to-write ratio of the attributes that
applications install vary extensively. For example, a read-dominated
attribute like numCPUs rarely changes in value, while a
writedominated attribute like numProcesses changes quite often. An
aggregation strategy like Update-All works well for read-dominated
attributes but suffers high bandwidth consumption when applied for
write-dominated attributes. Conversely, an approach like
UpdateLocal works well for write-dominated attributes but suffers from
unnecessary query latency or imprecision for read-dominated
attributes.
SDIMS also allows non-uniform computation and propagation
across the aggregation tree with different up and down parameters
in different subtrees so that applications can adapt with the
spatial and temporal heterogeneity of read and write operations. With
respect to spatial heterogeneity, access patterns may differ for
different parts of the tree, requiring different propagation strategies
for different parts of the tree. Similarly with respect to temporal
heterogeneity, access patterns may change over time requiring
different strategies over time.
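To make the read-to-write trade-off sketched above concrete, the following toy model (ours, not an analysis from the paper) counts messages on a single aggregation tree of depth d with a given number of leaves: Update-Local pays nothing on writes but must visit the whole subtree on reads, Update-All floods every write but answers reads locally, and Update-Up pays roughly one root-ward path per write and an up-and-down path per read.

```python
def estimated_messages(strategy, reads, writes, depth, leaves):
    # Very rough per-operation message counts for one global (root-level)
    # attribute; the constants and simplifications are illustrative only.
    if strategy == "update-local":
        per_write, per_read = 0, 2 * depth + leaves   # gather from all leaves
    elif strategy == "update-up":
        per_write, per_read = depth, 2 * depth        # root keeps the aggregate
    elif strategy == "update-all":
        per_write, per_read = depth + leaves, 0       # aggregate flooded down
    else:
        raise ValueError(strategy)
    return reads * per_read + writes * per_write

# A write-dominated attribute (e.g., numProcesses) favors Update-Local/Up,
# while a read-dominated one (e.g., numCPUs) favors Update-All.
for s in ("update-local", "update-up", "update-all"):
    print(s, estimated_messages(s, reads=10, writes=1000, depth=8, leaves=256))
```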
3.1 Aggregation API
We provide the flexibility described above by splitting the
aggregation API into three functions: Install() installs an aggregation
function that defines an operation on an attribute type and
specifies the update strategy that the function will use, Update() inserts
or modifies a node"s local value for an attribute, and Probe()
obtains an aggregate value for a specified subtree. The install
interface allows applications to specify the k and j parameters of the
Update-Upk-Downj strategy along with the aggregation function.
The update interface invokes the aggregation of an attribute on the
tree according to corresponding aggregation function"s aggregation
strategy. The probe interface not only allows applications to obtain
the aggregated value for a specified tree but also allows a probing
node to continuously fetch the values for a specified time, thus
enabling an application to adapt to spatial and temporal heterogeneity.
The rest of the section describes these three interfaces in detail.
3.1.1 Install
The Install operation installs an aggregation function in the
system. The arguments for this operation are listed in Table 1. The
attrType argument denotes the type of attributes on which this
aggregation function is invoked. Installed functions are soft state that
must be periodically renewed or they will be garbage collected at
expTime.
The arguments up and down specify the aggregate computation
Figure 1: Flexible API. The figure contrasts the Update-Local, Update-Up, and
Update-All strategies, showing for each the behavior on an update, on a probe
for the global aggregate value, and on a probe for a level-1 aggregate value.
parameter   description                                                   optional
attrType    Attribute Type
attrName    Attribute Name
mode        Continuous or One-shot (default: one-shot)                    X
level       Level at which aggregate is sought (default: at all levels)   X
up          How far up to go and re-fetch the value (default: none)       X
down        How far down to go and reaggregate (default: none)            X
expTime     Expiry Time
Table 2: Arguments for the probe operation
and propagation strategy Update-Upk-Downj. The domain
argument, if present, indicates that the aggregation function should be
installed on all nodes in the specified domain; otherwise the
function is installed on all nodes in the system.
3.1.2 Update
The Update operation takes three arguments attrType, attrName,
and value and creates a new (attrType, attrName, value) tuple or
updates the value of an old tuple with matching attrType and
attrName at a leaf node.
The update interface meshes with installed aggregate
computation and propagation strategy to provide flexibility. In particular,
as outlined above and described in detail in Section 5, after a leaf
applies an update locally, the update may trigger re-computation
of aggregate values up the tree and may also trigger propagation
of changed aggregate values down the tree. Notice that our
abstraction associates an aggregation function with only an attrType
but lets updates specify an attrName along with the attrType. This
technique helps achieve scalability with respect to nodes and
attributes as described in Section 4.
3.1.3 Probe
The Probe operation returns the value of an attribute to an
application. The complete argument set for the probe operation is shown
in Table 2. Along with the attrName and the attrType arguments, a
level argument specifies the level at which the answers are required
for an attribute. In our implementation we choose to return results
at all levels k < l for a level-l probe because (i) it is inexpensive as
the nodes traversed for level-l probe also contain level k aggregates
for k < l and as we expect the network cost of transmitting the
additional information to be small for the small aggregates on which we
focus, and (ii) it is useful as applications can efficiently get several
aggregates with a single probe (e.g., for domain-scoped queries as
explained in Section 4.2).
Probes with mode set to continuous and with finite expTime
enable applications to handle spatial and temporal heterogeneity. When
node A issues a continuous probe at level l for an attribute, then
regardless of the up and down parameters, updates for the attribute
at any node in A"s level-l ancestor"s subtree are aggregated up to
level l and the aggregated value is propagated down along the path
from the ancestor to A. Note that continuous mode enables SDIMS
to support a distributed sensor-actuator mechanism where a
sensor monitors a level-i aggregate with a continuous mode probe and
triggers an actuator upon receiving new values for the probe.
The up and down arguments enable applications to perform
ondemand fast re-aggregation during reconfigurations, where a forced
re-aggregation is done for the corresponding levels even if the
aggregated value is available, as we discuss in Section 6. When
present, the up and down arguments are interpreted as described
in the install operation.
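A hypothetical client-side sketch of how the three calls fit together; the function signatures below are paraphrased from Tables 1 and 2 and are not SDIMS's literal programming interface.

```python
# Pseudo-client for the install/update/probe API described above.
# `sdims` stands for a hypothetical client library handle.

def file_location_aggregate(child_values):
    # Aggregation function for attribute type FILELOC: any child that knows a
    # node storing the named file wins; None means "not stored in this subtree".
    return next((v for v in child_values if v is not None), None)

def publish_and_find(sdims, my_ip):
    # Install once per attribute type; an Update-Up style strategy keeps the
    # aggregate at each subtree root without flooding it to descendants.
    sdims.install(attr_type="FILELOC", aggrfunc=file_location_aggregate,
                  up="all", down=0, exp_time=3600)

    # Each node updates only the names it stores locally.
    sdims.update(attr_type="FILELOC", attr_name="foo", value=my_ip)

    # A one-shot probe walks up the tree until some ancestor knows a copy of "foo".
    return sdims.probe(attr_type="FILELOC", attr_name="foo",
                       mode="one-shot", level=None)
```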
3.1.4 Dynamic Adaptation
At the API level, the up and down arguments in install API can be
regarded as hints, since they suggest a computation strategy but do
not affect the semantics of an aggregation function. A SDIMS
implementation can dynamically adjust its up/down strategies for an
attribute based on its measured read/write frequency. But a virtual
intermediate node needs to know the current up and down
propagation values to decide if the local aggregate is fresh in order to
answer a probe. This is the key reason why up and down need to be
statically defined at the install time and can not be specified in the
update operation. In dynamic adaptation, we implement a
leasebased mechanism where a node issues a lease to a parent or a child
denoting that it will keep propagating the updates to that parent or
child. We are currently evaluating different policies to decide when
to issue a lease and when to revoke a lease.
4. SCALABILITY
Our design achieves scalability with respect to both nodes and
attributes through two key ideas. First, it carefully defines the
aggregation abstraction to mesh well with its underlying scalable DHT
system. Second, it refines the basic DHT abstraction to form an
Autonomous DHT (ADHT) to achieve the administrative isolation
properties that are crucial to scaling for large real-world systems.
In this section, we describe these two ideas in detail.
4.1 Leveraging DHTs
In contrast to previous systems [4, 15, 38, 39, 45], SDIMS"s
aggregation abstraction specifies both an attribute type and attribute
name and associates an aggregation function with a type rather than
just specifying and associating a function with a name. Installing a
single function that can operate on many different named attributes
matching a type improves scalability for sparse attribute types
with large, sparsely-filled name spaces. For example, to construct
a file location service, our interface allows us to install a single
function that computes an aggregate value for any named file. A
subtree"s aggregate value for (FILELOC, name) would be the ID of
a node in the subtree that stores the named file. Conversely,
Astrolabe copes with sparse attributes by having aggregation functions
compute sets or lists and suggests that scalability can be improved
by representing such sets with Bloom filters [6]. Supporting sparse
names within a type provides at least two advantages. First, when
the value associated with a name is updated, only the state
associated with that name needs to be updated and propagated to other
nodes.
Figure 2: The DHT tree corresponding to key 111 (DHTtree111)
and the corresponding aggregation tree.
Second, splitting values associated with different names
into different aggregation values allows our system to leverage
Distributed Hash Tables (DHTs) to map different names to different
trees and thereby spread the function"s logical root node"s load and
state across multiple physical nodes.
Given this abstraction, scalably mapping attributes to DHTs is
straightforward. DHT systems assign a long, random ID to each
node and define an algorithm to route a request for key k to a
node rootk such that the union of paths from all nodes forms a tree
DHTtreek rooted at the node rootk. Now, as illustrated in Figure 2,
by aggregating an attribute along the aggregation tree
corresponding to DHTtreek for k =hash(attribute type, attribute name),
different attributes will be aggregated along different trees.
In comparison to a scheme where all attributes are aggregated
along a single tree, aggregating along multiple trees incurs lower
maximum node stress: whereas in a single aggregation tree
approach, the root and the intermediate nodes pass around more
messages than leaf nodes, in a DHT-based multi-tree, each node acts as
an intermediate aggregation point for some attributes and as a leaf
node for other attributes. Hence, this approach distributes the onus
of aggregation across all nodes.
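A small sketch of the mapping just described: hashing the (attribute type, attribute name) pair yields the DHT key whose routing tree serves as that attribute's aggregation tree, so different attributes land on different roots. The particular hash function and key width are illustrative assumptions.

```python
import hashlib

def aggregation_key(attr_type: str, attr_name: str, bits: int = 160) -> int:
    # k = hash(attribute type, attribute name); the DHT tree for key k
    # (DHTtree_k) doubles as the aggregation tree for this attribute.
    digest = hashlib.sha1(f"{attr_type}/{attr_name}".encode()).digest()
    return int.from_bytes(digest, "big") >> (8 * len(digest) - bits)

# Different named attributes of the same type map to different trees,
# spreading the per-attribute root load across nodes.
print(hex(aggregation_key("FILELOC", "foo")))
print(hex(aggregation_key("FILELOC", "bar")))
```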
4.2 Administrative Isolation
Aggregation trees should provide administrative isolation by
ensuring that for each domain, the virtual node at the root of the
smallest aggregation subtree containing all nodes of that domain is
hosted by a node in that domain. Administrative isolation is
important for three reasons: (i) for security - so that updates and probes
flowing in a domain are not accessible outside the domain, (ii) for
availability - so that queries for values in a domain are not affected
by failures of nodes in other domains, and (iii) for efficiency - so
that domain-scoped queries can be simple and efficient.
To provide administrative isolation to aggregation trees, a DHT
should satisfy two properties:
1. Path Locality: Search paths should always be contained in
the smallest possible domain.
2. Path Convergence: Search paths for a key from different
nodes in a domain should converge at a node in that domain.
Existing DHTs support path locality [18] or can easily support it
by using the domain nearness as the distance metric [7, 17], but they
do not guarantee path convergence as those systems try to optimize
the search path to the root to reduce response latency. For example,
Pastry [32] uses prefix routing in which each node"s routing table
contains one row per hexadecimal digit in the nodeId space where
the ith row contains a list of nodes whose nodeIds differ from the
current node"s nodeId in the ith digit with one entry for each
possible digit value. Given a routing topology, to route a packet to
an arbitrary destination key, a node in Pastry forwards a packet to
the node with a nodeId prefix matching the key in at least one more
digit than the current node. If such a node is not known, the
current node uses an additional data structure, the leaf set containing
Figure 3: Example shows how isolation property is violated
with original Pastry. We also show the corresponding
aggregation tree.
Figure 4: Autonomous DHT satisfying the isolation property.
Also the corresponding aggregation tree is shown.
L immediate higher and lower neighbors in the nodeId space, and
forwards the packet to a node with an identical prefix but that is
numerically closer to the destination key in the nodeId space. This
process continues until the destination node appears in the leaf set,
after which the message is routed directly. Pastry"s expected
number of routing steps is logn, where n is the number of nodes, but
as Figure 3 illustrates, this algorithm does not guarantee path
convergence: if two nodes in a domain have nodeIds that match a key
in the same number of bits, both of them can route to a third node
outside the domain when routing for that key.
Simple modifications to Pastry"s route table construction and
key-routing protocols yield an Autonomous DHT (ADHT) that
satisfies the path locality and path convergence properties. As Figure 4
illustrates, whenever two nodes in a domain share the same prefix
with respect to a key and no other node in the domain has a longer
prefix, our algorithm introduces a virtual node at the boundary of
the domain corresponding to that prefix plus the next digit of the
key; such a virtual node is simulated by the existing node whose id
is numerically closest to the virtual node"s id. Our ADHT"s routing
table differs from Pastry"s in two ways. First, each node maintains
a separate leaf set for each domain of which it is a part. Second,
nodes use two proximity metrics when populating the routing tables
- hierarchical domain proximity is the primary metric and network
distance is secondary. Then, to route a packet to a global root for a
key, ADHT routing algorithm uses the routing table and the leaf set
entries to route to each successive enclosing domain"s root (the
virtual or real node in the domain matching the key in the maximum
number of digits). Additional details about the ADHT algorithm
are available in an extended technical report [44].
Properties. Maintaining a different leaf set for each
administrative hierarchy level increases the number of neighbors that each
node tracks to (2^b)·lg_b n + c·l from (2^b)·lg_b n + c in unmodified
Pastry, where b is the number of bits in a digit, n is the number of
nodes, c is the leaf set size, and l is the number of domain levels.
Routing requires O(lg_b n + l) steps compared to O(lg_b n) steps in
Pastry; also, each routing hop may be longer than in Pastry because
the modified algorithm"s routing table prefers same-domain nodes
over nearby nodes. We experimentally quantify the additional
routing costs in Section 7.
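Plugging illustrative numbers into the formulas above (the parameter values are ours, and we interpret lg_b as the base-2^b logarithm used by Pastry) shows how modest the extra per-node state and routing length are:

```python
from math import log, ceil

def adht_overhead(b, n, c, l):
    # Neighbor-table sizes and routing lengths from the expressions above:
    # ADHT keeps one leaf set per domain level, Pastry keeps a single one.
    routing_entries = (2 ** b) * log(n, 2 ** b)
    return {
        "pastry_neighbors": ceil(routing_entries + c),
        "adht_neighbors": ceil(routing_entries + c * l),
        "pastry_hops": ceil(log(n, 2 ** b)),
        "adht_hops": ceil(log(n, 2 ** b) + l),
    }

# Hypothetical deployment: b=4 (hex digits), 65536 nodes, leaf set 16, 3 domain levels.
print(adht_overhead(b=4, n=65536, c=16, l=3))
```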
In a large system, the ADHT topology allows domains to
improve security for sensitive attribute types by installing them only
within a specified domain.
Figure 5: Example for domain-scoped queries
Then, aggregation occurs entirely within
the domain and a node external to the domain can neither observe
nor affect the updates and aggregation computations of the attribute
type. Furthermore, though we have not implemented this feature
in the prototype, the ADHT topology would also support
domainrestricted probes that could ensure that no one outside of a domain
can observe a probe for data stored within the domain.
The ADHT topology also enhances availability by allowing the
common case of probes for data within a domain to depend only on
a domain"s nodes. This, for example, allows a domain that becomes
disconnected from the rest of the Internet to continue to answer
queries for local data.
Aggregation trees that provide administrative isolation also
enable the definition of simple and efficient domain-scoped
aggregation functions to support queries like what is the average load
on machines in domain X? For example, consider an
aggregation function to count the number of machines in an example
system with three machines illustrated in Figure 5. Each leaf node
l updates attribute NumMachines with a value vl containing a set
of tuples of form (Domain, Count) for each domain of which the
node is a part. In the example, the node A1 with name A1.A.
performs an update with the value ((A1.A.,1),(A.,1),(.,1)). An
aggregation function at an internal virtual node hosted on node N with
child set C computes the aggregate as a set of tuples: for each
domain D that N is part of, form a tuple (D, ∑_{c∈C}(count | (D, count) ∈ v_c)).
This computation is illustrated in Figure 5. Now a query
for NumMachines with level set to MAX will return the
aggregate values at each intermediate virtual node on the path to the
root as a set of tuples (tree level, aggregated value) from which
it is easy to extract the count of machines at each enclosing
domain. For example, A1 would receive ((2, ((B1.B.,1),(B.,1),(.,3))),
(1, ((A1.A.,1),(A.,2),(.,2))), (0, ((A1.A.,1),(A.,1),(.,1)))). Note that
supporting domain-scoped queries would be less convenient and
less efficient if aggregation trees did not conform to the system"s
administrative structure. It would be less efficient because each
intermediate virtual node will have to maintain a list of all values at
the leaves in its subtree along with their names and it would be less
convenient as applications that need an aggregate for a domain will
have to pick values of nodes in that domain from the list returned
by a probe and perform computation.
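The NumMachines computation described above is easy to express as an aggregation function over per-domain tuples; the sketch below follows the example in Figure 5, with dictionaries standing in for the sets of (Domain, Count) tuples.

```python
from collections import Counter

def leaf_num_machines(domains):
    # A leaf lists every enclosing domain of its name with count 1,
    # e.g. ["A1.A.", "A.", "."] for node A1.A. in Figure 5.
    return Counter({d: 1 for d in domains})

def aggregate_num_machines(child_values):
    # A virtual node sums, per domain, the counts reported by its children.
    total = Counter()
    for value in child_values:
        total.update(value)
    return total

a1 = leaf_num_machines(["A1.A.", "A.", "."])
a2 = leaf_num_machines(["A2.A.", "A.", "."])
b1 = leaf_num_machines(["B1.B.", "B.", "."])

level1_A = aggregate_num_machines([a1, a2])          # {'A.': 2, '.': 2, ...}
level2_root = aggregate_num_machines([level1_A, b1])
assert level2_root["."] == 3 and level2_root["A."] == 2 and level2_root["B."] == 1
```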
5. PROTOTYPE IMPLEMENTATION
The internal design of our SDIMS prototype comprises of two
layers: the Autonomous DHT (ADHT) layer manages the overlay
topology of the system and the Aggregation Management Layer
(AML) maintains attribute tuples, performs aggregations, stores
and propagates aggregate values. Given the ADHT construction
described in Section 4.2, each node implements an Aggregation
Management Layer (AML) to support the flexible API described in
Section 3. In this section, we describe the internal state and
operation of the AML layer of a node in the system.
Figure 6: Example illustrating the data structures and the
organization of them at a node.
We refer to a store of (attribute type, attribute name, value) tuples
as a Management Information Base or MIB, following the
terminology from Astrolabe [38] and SNMP [34]. We refer to an (attribute
type, attribute name) tuple as an attribute key.
As Figure 6 illustrates, each physical node in the system acts as
several virtual nodes in the AML: a node acts as leaf for all attribute
keys, as a level-1 subtree root for keys whose hash matches the
node"s ID in b prefix bits (where b is the number of bits corrected
in each step of the ADHT"s routing scheme), as a level-i subtree
root for attribute keys whose hash matches the node"s ID in the
initial i ∗ b bits, and as the system"s global root for attribute keys
whose hash matches the node"s ID in more prefix bits than any
other node (in case of a tie, the first non-matching bit is ignored
and the comparison is continued [46]).
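The rule above reduces to a prefix-match computation over node IDs and attribute-key hashes. The sketch below assumes b = 1 and a bit-string representation of IDs; the class and method names are hypothetical.

```java
// Sketch: with b = 1, a node acts as the level-i subtree root for an attribute key
// when its ID matches the key's hash in the first i bits.
public final class PrefixMatch {

    // Number of leading bits in which the two bit strings agree.
    public static int matchingPrefixBits(String nodeId, String keyHash) {
        int i = 0;
        while (i < nodeId.length() && i < keyHash.length()
                && nodeId.charAt(i) == keyHash.charAt(i)) {
            i++;
        }
        return i;
    }

    public static void main(String[] args) {
        // A node with ID 1001... is a subtree root up to level 3 for a key hashing to 1000...
        System.out.println(matchingPrefixBits("1001", "1000")); // prints 3
    }
}
```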
To support hierarchical aggregation, each virtual node at the root
of a level-i subtree maintains several MIBs that store (1) child MIBs
containing raw aggregate values gathered from children, (2) a
reduction MIB containing locally aggregated values across this raw
information, and (3) an ancestor MIB containing aggregate values
scattered down from ancestors. This basic strategy of maintaining
child, reduction, and ancestor MIBs is based on Astrolabe [38],
but our structured propagation strategy channels information that
flows up according to its attribute key and our flexible propagation
strategy only sends child updates up and ancestor aggregate results
down as far as specified by the attribute key"s aggregation
function. Note that in the discussion below, for ease of explanation, we
assume that the routing protocol is correcting single bit at a time
(b = 1). Our system, built upon Pastry, handles multi-bit correction
(b = 4) and is a simple extension to the scheme described here.
For a given virtual node ni at level i, each child MIB contains the
subset of a child"s reduction MIB that contains tuples that match
ni"s node ID in i bits and whose up aggregation function attribute is
at least i. These local copies make it easy for a node to recompute
a level-i aggregate value when one child"s input changes. Nodes
maintain their child MIBs in stable storage and use a simplified
version of the Bayou log exchange protocol (sans conflict detection
and resolution) for synchronization after disconnections [26].
Virtual node ni at level i maintains a reduction MIB of tuples
with a tuple for each key present in any child MIB containing the
attribute type, attribute name, and the output of the attribute type's
aggregation function applied to the children's tuples.
A virtual node ni at level i also maintains an ancestor MIB to
store the tuples containing attribute key and a list of aggregate
values at different levels scattered down from ancestors. Note that the
list for a key might contain multiple aggregate values for the same
level but aggregated at different nodes (see Figure 4). The
aggregate values are therefore tagged not only with level information but
also with the ID of the node that performed the aggregation.
Level-0 differs slightly from other levels. Each level-0 leaf node
maintains a local MIB rather than maintaining child MIBs and a
reduction MIB. This local MIB stores information about the local
node"s state inserted by local applications via update() calls. We
envision various sensor programs and applications insert data into
local MIB. For example, one program might monitor local
configuration and perform updates with information such as total memory,
free memory, etc., A distributed file system might perform update
for each file stored on the local node.
Along with these MIBs, a virtual node maintains two other
tables: an aggregation function table and an outstanding probes
table. An aggregation function table contains the aggregation
function and installation arguments (see Table 1) associated with an
attribute type or an attribute type and name. Each aggregate
function is installed on all nodes in a domain"s subtree, so the aggregate
function table can be thought of as a special case of the ancestor
MIB with domain functions always installed up to a root within a
specified domain and down to all nodes within the domain. The
outstanding probes table maintains temporary information
regarding in-progress probes.
Given these data structures, it is simple to support the three API
functions described in Section 3.1.
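Before walking through these operations, the state kept by one virtual node can be summarized by a small container class. The sketch below is only a summary of the description above; the field names are ours and the value types are deliberately generic rather than the prototype's actual data layout.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the per-virtual-node state described in this section (illustrative only).
public class VirtualNodeState {
    // Child MIBs: raw aggregate values gathered from each child, keyed by child ID
    // and then by attribute key.
    Map<String, Map<String, Object>> childMibs = new HashMap<>();

    // Reduction MIB: the locally aggregated value for each attribute key.
    Map<String, Object> reductionMib = new HashMap<>();

    // Ancestor MIB: aggregate values scattered down from ancestors, tagged with the
    // level and the ID of the node that performed the aggregation.
    Map<String, Object> ancestorMib = new HashMap<>();

    // Aggregation function table: the function and its installation arguments,
    // keyed by attribute type (or attribute type and name).
    Map<String, Object> aggregationFunctionTable = new HashMap<>();

    // Outstanding probes table: temporary state for in-progress probes.
    Map<Long, Object> outstandingProbes = new HashMap<>();
}
```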
Install The Install operation (see Table 1) installs on a domain an
aggregation function that acts on a specified attribute type.
Execution of an install operation for function aggrFunc on attribute type
attrType proceeds in two phases: first the install request is passed
up the ADHT tree with the attribute key (attrType, null) until it
reaches the root for that key within the specified domain. Then, the
request is flooded down the tree and installed on all intermediate
and leaf nodes.
Update When a level-i virtual node receives an update for an
attribute from a child below, it first recomputes the level-i
aggregate value for the specified key, stores that value in its reduction
MIB, and then, subject to the function's up and domain parameters,
passes the updated value to the appropriate parent based on the
attribute key. Also, the level-i (i ≥ 1) virtual node sends the updated
level-i aggregate to all its children if the function's down parameter
exceeds zero. Upon receipt of a level-i aggregate from a parent,
a level-k virtual node stores the value in its ancestor MIB and, if
k ≥ i−down, forwards this aggregate to its children.
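The update path can be sketched as two handlers at a level-i virtual node, one for updates arriving from a child and one for aggregates scattered down from the parent. The helper methods and the exact up check (up > i) are assumptions for illustration; they are not the prototype's actual interfaces.

```java
// Sketch of update handling at a level-i virtual node (illustrative only).
public abstract class UpdateHandler {
    int level; // the level i of this virtual node

    // A child below reports a new value for attributeKey.
    void onChildUpdate(String attributeKey, int up, int down) {
        // Recompute the level-i aggregate over all child MIBs and store it locally.
        Object aggregate = recomputeAggregate(attributeKey);
        storeInReductionMib(attributeKey, aggregate);

        // Pass the value up if the function's up parameter (and domain) allow it.
        if (up > level) {
            sendToParent(attributeKey, aggregate);
        }
        // Send the new level-i aggregate down if the function's down parameter exceeds zero.
        if (level >= 1 && down > 0) {
            sendToChildren(attributeKey, aggregate, level);
        }
    }

    // The parent scatters down a level-i aggregate; forward it further if k >= i - down.
    void onParentAggregate(String attributeKey, Object aggregate, int i, int down) {
        storeInAncestorMib(attributeKey, aggregate, i);
        if (level >= i - down) {
            sendToChildren(attributeKey, aggregate, i);
        }
    }

    abstract Object recomputeAggregate(String attributeKey);
    abstract void storeInReductionMib(String key, Object value);
    abstract void storeInAncestorMib(String key, Object value, int level);
    abstract void sendToParent(String key, Object value);
    abstract void sendToChildren(String key, Object value, int level);
}
```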
Probe A Probe collects and returns the aggregate value for a
specified attribute key for a specified level of the tree. As Figure 1
illustrates, the system satisfies a probe for a level-i aggregate value
using a four-phase protocol that may be short-circuited when
updates have previously propagated either results or partial results up
or down the tree. In phase 1, the route probe phase, the system
routes the probe up the attribute key"s tree to either the root of the
level-i subtree or to a node that stores the requested value in its
ancestor MIB. In the former case, the system proceeds to phase 2 and
in the latter it skips to phase 4. In phase 2, the probe scatter phase,
each node that receives a probe request sends it to all of its children
unless the node"s reduction MIB already has a value that matches
the probe"s attribute key, in which case the node initiates phase 3
on behalf of its subtree. In phase 3, the probe aggregation phase,
when a node receives values for the specified key from each of its
children, it executes the aggregate function on these values and
either (a) forwards the result to its parent (if its level is less than i)
or (b) initiates phase 4 (if it is at level i). Finally, in phase 4, the
aggregate routing phase, the aggregate value is routed down to the
node that requested it. Note that in the extreme case of a function
installed with up = down = 0, a level-i probe can touch all nodes
in a level-i subtree, while in the opposite extreme case of a
function installed with up = down = ALL, a probe is a completely local
operation at a leaf.
For probes that include phases 2 (probe scatter) and 3 (probe
aggregation), an issue is how to decide when a node should stop
waiting for its children to respond and send up its current
aggregate value. A node stops waiting for its children when one of three
conditions occurs: (1) all children have responded, (2) the ADHT
layer signals one or more reconfiguration events that mark all
children that have not yet responded as unreachable, or (3) a watchdog
timer for the request fires. The last case accounts for nodes that
participate in the ADHT protocol but that fail at the AML level.
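The three termination conditions can be captured by a small predicate maintained per outstanding probe. The sketch below uses assumed field names and omits the actual timer wiring.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of when a node stops waiting during the probe aggregation phase.
public class ProbeWaitState {
    private final Set<String> pendingChildren = new HashSet<>(); // children not yet heard from
    private boolean watchdogExpired = false;

    void childResponded(String childId)   { pendingChildren.remove(childId); }
    void childUnreachable(String childId) { pendingChildren.remove(childId); } // ADHT reconfiguration event
    void watchdogFired()                  { watchdogExpired = true; }          // AML-level failure fallback

    // True when all children responded, the rest were marked unreachable,
    // or the per-request watchdog timer has fired.
    boolean readyToAggregate() {
        return pendingChildren.isEmpty() || watchdogExpired;
    }
}
```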
At a virtual node, continuous probes are handled similarly to
one-shot probes, except that such probes are stored in the
outstanding probe table for a time period of expTime specified in the probe.
Thus each update for an attribute triggers re-evaluation of
continuous probes for that attribute.
We implement a lease-based mechanism for dynamic adaptation.
A level-l virtual node for an attribute can issue the lease for
level-l aggregate to a parent or a child only if up is greater than l or it
has leases from all its children. A virtual node at level l can issue
the lease for level-k aggregate for k > l to a child only if down≥
k −l or if it has the lease for that aggregate from its parent. Now a
probe for level-k aggregate can be answered by level-l virtual node
if it has a valid lease, irrespective of the up and down values. We
are currently designing different policies to decide when to issue a
lease and when to revoke a lease and are also evaluating them with
the above mechanism.
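The two lease-issuing rules can be written directly as predicates at a level-l virtual node; the sketch below simply restates the conditions in the text with assumed helper methods.

```java
// Sketch of the lease-issuing conditions described above (illustrative only).
public abstract class LeaseRules {
    int l; // level of this virtual node

    // Issue a lease for the level-l aggregate to a parent or child only if
    // up > l or leases are held from all children.
    boolean canIssueLevelLease(int up) {
        return up > l || haveLeasesFromAllChildren();
    }

    // Issue a lease for a level-k aggregate (k > l) to a child only if
    // down >= k - l or the corresponding lease is held from the parent.
    boolean canIssueHigherLevelLease(int k, int down) {
        return down >= k - l || haveLeaseFromParent(k);
    }

    abstract boolean haveLeasesFromAllChildren();
    abstract boolean haveLeaseFromParent(int k);
}
```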
Our current prototype does not implement access control on
install, update, and probe operations but we plan to implement
Astrolabe"s [38] certificate-based restrictions. Also our current
prototype does not restrict the resource consumption in executing the
aggregation functions; but, ‘techniques from research on resource
management in server systems and operating systems [2, 3] can be
applied here.
6. ROBUSTNESS
In large scale systems, reconfigurations are common. Our two
main principles for robustness are to guarantee (i) read availability
- probes complete in finite time, and (ii) eventual consistency -
updates by a live node will be visible to probes by connected nodes
in finite time. During reconfigurations, a probe might return a stale
value for two reasons. First, reconfigurations can render previously
computed aggregate values incorrect. Second, the nodes needed to
recompute the aggregate for the probe may become unreachable. Our
system also provides two hooks that applications can use for improved
end-to-end robustness in the presence of reconfigurations: (1)
On-demand re-aggregation and (2) application-controlled replication.
Our system handles reconfigurations at two levels - adaptation at
the ADHT layer to ensure connectivity and adaptation at the AML
layer to ensure access to the data in SDIMS.
6.1 ADHT Adaptation
Our ADHT layer adaptation algorithm is same as Pastry"s
adaptation algorithm [32] - the leaf sets are repaired as soon as a
reconfiguration is detected and the routing table is repaired lazily. Note
that maintaining extra leaf sets does not degrade the fault-tolerance
property of the original Pastry; indeed, it enhances the resilience
of ADHTs to failures by providing additional routing links. Due
to redundancy in the leaf sets and the routing table, updates can be
routed towards their root nodes successfully even during failures.
Figure 7: Default lazy data re-aggregation time line
Also note that the administrative isolation property satisfied by our
ADHT algorithm ensures that the reconfigurations in a level i
domain do not affect the probes for level i in a sibling domain.
6.2 AML Adaptation
Broadly, we use two types of strategies for AML adaptation in
the face of reconfigurations: (1) Replication in time as a
fundamental baseline strategy, and (2) Replication in space as an
additional performance optimization that falls back on replication in
time when the system runs out of replicas. We provide two
mechanisms for replication in time. First, lazy re-aggregation propagates
already received updates to new children or new parents in a lazy
fashion over time. Second, applications can reduce the probability
of probe response staleness during such repairs through our flexible
API with appropriate setting of the down parameter.
Lazy Re-aggregation: The DHT layer informs the AML layer
about reconfigurations in the network using the following three
function calls - newParent, failedChild, and newChild. On
newParent(parent, prefix), all probes in the outstanding-probes table
corresponding to prefix are re-evaluated. If parent is not null, then
aggregation functions and already existing data are lazily transferred
in the background. Any new updates, installs, and probes for this
prefix are sent to the parent immediately. On failedChild(child,
prefix), the AML layer marks the child as inactive and any outstanding
probes that are waiting for data from this child are re-evaluated.
On newChild(child, prefix), the AML layer creates space in its data
structures for this child.
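The three notifications can be sketched as callbacks from the ADHT layer into the AML layer. The method names below describe the behavior in the text rather than the prototype's literal API.

```java
// Sketch of the AML handlers for ADHT reconfiguration notifications (illustrative only).
public abstract class AmlReconfigurationHandlers {

    void newParent(String parent, String prefix) {
        // Re-evaluate outstanding probes whose keys fall under this prefix.
        reEvaluateProbes(prefix);
        if (parent != null) {
            // Lazily transfer aggregation functions and existing data in the background;
            // new updates, installs, and probes for this prefix go to the parent immediately.
            startLazyTransfer(parent, prefix);
        }
    }

    void failedChild(String child, String prefix) {
        markChildInactive(child);
        reEvaluateProbesWaitingOn(child);
    }

    void newChild(String child, String prefix) {
        allocateChildMib(child, prefix);
    }

    abstract void reEvaluateProbes(String prefix);
    abstract void reEvaluateProbesWaitingOn(String child);
    abstract void startLazyTransfer(String parent, String prefix);
    abstract void markChildInactive(String child);
    abstract void allocateChildMib(String child, String prefix);
}
```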
Figure 7 shows the time line for the default lazy re-aggregation
upon reconfiguration. Probes initiated between points 1 and 2 and
that are affected by reconfigurations are reevaluated by AML upon
detecting the reconfiguration. Probes that complete or start between
points 2 and 8 may return stale answers.
On-demand Re-aggregation: The default lazy re-aggregation
scheme propagates old updates through the system in the background.
Additionally, using the up and down knobs in the Probe API, applications can
force fast on-demand re-aggregation of updates to avoid staleness
in the face of reconfigurations. In particular, if an application
detects or suspects that an answer is stale, it can re-issue the probe
with increased up and down parameters to force the refreshing of
the cached data. Note that this strategy will be useful only after the
DHT adaptation is completed (Point 6 on the time line in Figure 7).
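From an application's point of view, on-demand re-aggregation is simply a retry with larger up and down values. A minimal sketch, assuming a client-side probe call and an application-specific staleness check (neither is the prototype's literal API):

```java
// Sketch: refresh a suspect answer by re-probing with larger up and down values.
public abstract class OnDemandReaggregation {

    Object probeWithRefresh(String attributeKey, int level, int up, int down) {
        Object answer = probe(attributeKey, level, up, down);
        if (looksStale(answer)) {
            // Larger up/down force refreshing of cached aggregates along the tree.
            answer = probe(attributeKey, level, up + 1, down + 1);
        }
        return answer;
    }

    abstract Object probe(String attributeKey, int level, int up, int down);
    abstract boolean looksStale(Object answer);
}
```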
Replication in Space: Replication in space is more
challenging in our system than in a DHT file location application because
replication in space can be achieved easily in the latter by just
replicating the root node"s contents. In our system, however, all internal
nodes have to be replicated along with the root.
In our system, applications control replication in space using up
and down knobs in the Install API; with large up and down values,
aggregates at the intermediate virtual nodes are propagated to more
nodes in the system. By reducing the number of nodes that have to
be accessed to answer a probe, applications can reduce the
probability of incorrect results occurring due to the failure of nodes that
do not contribute to the aggregate. For example, in a file location
application, using a non-zero positive down parameter ensures that
a file"s global aggregate is replicated on nodes other than the root.
0.1
1
10
100
1000
10000
0.0001 0.01 1 100 10000
Avg.numberofmessagesperoperation
Read to Write ratio
Update-All
Up=ALL, Down=9
Up=ALL, Down=6
Update-Up
Update-Local
Up=2, Down=0
Up=5, Down=0
Figure 8: Flexibility of our approach. With different UP and
DOWN values in a network of 4096 nodes for different
readwrite ratios.
Probes for the file location can then be answered without accessing
the root; hence they are not affected by the failure of the root.
However, note that this technique is not appropriate in some cases. An
aggregated value in a file location system is valid as long as the node
hosting the file is active, irrespective of the status of other nodes
in the system; whereas an application that counts the number of
machines in a system may receive incorrect results irrespective of
the replication. If reconfigurations are only transient (like a node
temporarily not responding due to a burst of load), the replicated
aggregate closely or correctly resembles the current state.
7. EVALUATION
We have implemented a prototype of SDIMS in Java using the
FreePastry framework [32] and performed large-scale simulation
experiments and micro-benchmark experiments on two real
networks: 187 machines in the department and 69 machines on the
PlanetLab [27] testbed. In all experiments, we use static up and
down values and turn off dynamic adaptation. Our evaluation
supports four main conclusions. First, flexible API provides different
propagation strategies that minimize communication resources at
different read-to-write ratios. For example, in our simulation we
observe Update-Local to be efficient for read-to-write ratios
below 0.0001, Update-Up around 1, and Update-All above 50000.
Second, our system is scalable with respect to both nodes and
attributes. In particular, we find that the maximum node stress in
our system is an order of magnitude lower than that observed with an Update-All
gossiping approach. Third, in contrast to unmodified Pastry, which
violates the path convergence property in up to 14% of cases, our system
conforms to the property. Fourth, the system is robust to
reconfigurations and adapts to failures within a few seconds.
7.1 Simulation Experiments
Flexibility and Scalability: A major innovation of our system
is its ability to provide flexible computation and propagation of
aggregates. In Figure 8, we demonstrate the flexibility exposed by the
aggregation API explained in Section 3. We simulate a system with
4096 nodes arranged in a domain hierarchy with branching factor
(bf) of 16 and install several attributes with different up and down
parameters. We plot the average number of messages per operation
incurred for a wide range of read-to-write ratios of the operations
for different attributes. Simulations with other sizes of networks
with different branching factors reveal similar results. This graph
clearly demonstrates the benefit of supporting a wide range of
computation and propagation strategies.
Figure 9: Max node stress for a gossiping approach vs. an ADHT-based approach for different numbers of nodes with an increasing number of sparse attributes.
Although having a small UP value is efficient for attributes with low read-to-write ratios (write
dominated applications), the probe latency, when reads do occur,
may be high since the probe needs to aggregate the data from all
the nodes that did not send their aggregate up. Conversely,
applications that wish to improve probe overheads or latencies can increase
their UP and DOWN propagation at the potential cost of increased
write overheads.
Compared to an existing Update-All single aggregation tree
approach [38], scalability in SDIMS comes from (1) leveraging DHTs
to form multiple aggregation trees that split the load across nodes
and (2) flexible propagation that avoids propagation of all updates
to all nodes. Figure 9 demonstrates the SDIMS"s scalability with
nodes and attributes. For this experiment, we build a simulator to
simulate both Astrolabe [38] (a gossiping, Update-All approach)
and our system for an increasing number of sparse attributes. Each
attribute corresponds to the membership in a multicast session with
a small number of participants. For this experiment, the session
size is set to 8, the branching factor is set to 16, the propagation
mode for SDIMS is Update-Up, and the participant nodes perform
continuous probes for the global aggregate value. We plot the
maximum node stress (in terms of messages) observed in both schemes
for different sized networks with increasing number of sessions
when the participant of each session performs an update operation.
Clearly, the DHT-based scheme is more scalable with respect to
attributes than an Update-All gossiping scheme. Observe that for a
constant number of attributes, as the number of nodes increases in
the system, the maximum node stress increases in the gossiping
approach, while it decreases in our approach as the load of
aggregation is spread across more nodes. Simulations with other session
sizes (4 and 16) yield similar results.
Administrative Hierarchy and Robustness: Although the
routing protocol of ADHT might lead to an increased number of
hops to reach the root for a key as compared to original Pastry, the
algorithm conforms to the path convergence and locality properties
and thus provides the administrative isolation property. In Figure 10,
we quantify the increased path length by comparisons with
unmodified Pastry for different sized networks with different branching
factors of the domain hierarchy tree. To quantify the path
convergence property, we perform simulations with a large number of
probe pairs - each pair probing for a random key starting from two
randomly chosen nodes. In Figure 11, we plot the percentage of
probe pairs for unmodified Pastry that do not conform to the path
convergence property.
Figure 10: Average path length to root in Pastry versus ADHT for different branching factors. Note that all lines corresponding to Pastry overlap.
Figure 11: Percentage of probe pairs whose paths to the root did not conform to the path convergence property with Pastry.
Figure 12: Latency of probes for the aggregate at the global root level with three different modes of aggregate propagation on (a) department machines and (b) PlanetLab machines.
When the branching factor is low, the domain hierarchy tree is deeper, resulting in a large difference between Pastry and ADHT in the average path length; but it is at these small domain sizes that path convergence fails more often with the original Pastry.
7.2 Testbed experiments
We run our prototype on 180 department machines (some
machines ran multiple node instances, so this configuration has a
total of 283 SDIMS nodes) and also on 69 machines of the
PlanetLab [27] testbed. We measure the performance of our system with
two micro-benchmarks. In the first micro-benchmark, we install
three aggregation functions of types Update-Local, Update-Up, and
Update-All, perform an update operation on all nodes for all three
aggregation functions, and measure the latencies incurred by probes
for the global aggregate from all nodes in the system. Figure 12
shows the observed latencies for both testbeds. Notice that the
latency in Update-Local is high compared to the Update-Up policy.
This is because the latency in Update-Local is affected by the presence
of even a single slow machine or a single machine with a high-latency
network connection.
Figure 13: Micro-benchmark on the department network showing the behavior of the probes from a single node when failures are happening at some other nodes. All 283 nodes assign a value of 10 to the attribute.
Figure 14: Probe performance during failures on 69 machines of the PlanetLab testbed.
In the second benchmark, we examine robustness. We install one
aggregation function of type Update-Up that performs a sum
operation on an integer-valued attribute. Each node updates the attribute
with the value 10. Then we monitor the latencies and results
returned by the probe operation for the global aggregate on one chosen
node, while we kill some nodes after every few probes. Figure 13
shows the results on the departmental testbed. Due to the nature
of the testbed (machines in a department), there is little change in
the latencies even in the face of reconfigurations. In Figure 14, we
present the results of the experiment on the PlanetLab testbed. The
root node of the aggregation tree is terminated after about 275
seconds. There is a 5X increase in the latencies after the death of the
initial root node as a more distant node becomes the root node after
repairs. In both experiments, the values returned on probes start
reflecting the correct situation within a short time after the failures.
From both the testbed benchmark experiments and the
simulation experiments on flexibility and scalability, we conclude that (1)
the flexibility provided by SDIMS allows applications to trade off
read-write overheads (Figure 8), read latency, and sensitivity to
slow machines (Figure 12), (2) a good default aggregation
strategy is Update-Up which has moderate overheads on both reads and
writes (Figure 8), has moderate read latencies (Figure 12), and is
scalable with respect to both nodes and attributes (Figure 9), and
(3) small domain sizes are the cases where DHT algorithms fail to
provide path convergence more often and SDIMS ensures path
convergence with only a moderate increase in path lengths (Figure 11).
7.3 Applications
SDIMS is designed as a general distributed monitoring and
control infrastructure for a broad range of applications. Above, we
discuss some simple microbenchmarks including a multicast
membership service and a calculate-sum function. Van Renesse et al. [38]
provide detailed examples of how such a service can be used for a
peer-to-peer caching directory, a data-diffusion service, a
publishsubscribe system, barrier synchronization, and voting.
Additionally, we have initial experience using SDIMS to construct two
significant applications: the control plane for a large-scale distributed
file system [12] and a network monitor for identifying heavy
hitters that consume excess resources.
Distributed file system control: The PRACTI (Partial
Replication, Arbitrary Consistency, Topology Independence) replication
system provides a set of mechanisms for data replication over which
arbitrary control policies can be layered. We use SDIMS to provide
several key functions in order to create a file system over the
low-level PRACTI mechanisms.
First, nodes use SDIMS as a directory to handle read misses.
When a node n receives an object o, it updates the (ReadDir, o)
attribute with the value n; when n discards o from its local store,
it resets (ReadDir, o) to NULL. At each virtual node, the ReadDir
aggregation function simply selects a random non-null child value
(if any) and we use the Update-Up policy for propagating updates.
Finally, to locate a nearby copy of an object o, a node n1 issues a
series of probe requests for the (ReadDir, o) attribute, starting with
level = 1 and increasing the level value with each repeated probe
request until a non-null node ID n2 is returned. n1 then sends a
demand read request to n2, and n2 sends the data if it has it.
Conversely, if n2 does not have a copy of o, it sends a nack to n1,
and n1 issues a retry probe with the down parameter set to a value
larger than used in the previous probe in order to force on-demand
re-aggregation, which will yield a fresher value for the retry.
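The read-miss path described above amounts to probing (ReadDir, o) at increasing levels and, on a stale hint, re-probing with a larger down value before widening the search. A sketch with assumed method names, not the actual file-system code:

```java
// Sketch of locating a nearby copy of object o via the (ReadDir, o) attribute.
public abstract class ReadMissHandler {

    byte[] locateAndRead(String objectId, int maxLevel) {
        for (int level = 1; level <= maxLevel; level++) {
            String holder = probeReadDir(objectId, level, /*down=*/0);
            if (holder == null) {
                continue; // no copy known at this level; widen the search
            }
            byte[] data = demandRead(holder, objectId);
            if (data != null) {
                return data;
            }
            // Stale hint: the contacted node no longer has the object. Retry this
            // level once with a larger down value to force on-demand re-aggregation.
            holder = probeReadDir(objectId, level, /*down=*/level);
            if (holder != null && (data = demandRead(holder, objectId)) != null) {
                return data;
            }
        }
        return null; // no copy found; the caller falls back to the origin
    }

    abstract String probeReadDir(String objectId, int level, int down);
    abstract byte[] demandRead(String holderNodeId, String objectId);
}
```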
Second, nodes subscribe to invalidations and updates to interest
sets of files, and nodes use SDIMS to set up and maintain
per-interest-set network-topology-sensitive spanning trees for
propagating this information. To subscribe to invalidations for interest
set i, a node n1 first updates the (Inval, i) attribute with its
identity n1, and the aggregation function at each virtual node selects
one non-null child value. Finally, n1 probes increasing levels of
the (Inval, i) attribute until it finds the first node n2 ≠ n1; n1 then
uses n2 as its parent in the spanning tree. n1 also issues a
continuous probe for this attribute at this level so that it is notified of any
change to its spanning tree parent. Spanning trees for streams of
pushed updates are maintained in a similar manner.
In the future, we plan to use SDIMS for at least two additional
services within this replication system. First, we plan to use SDIMS
to track the read and write rates to different objects; prefetch
algorithms will use this information to prioritize replication [40, 41].
Second, we plan to track the ranges of invalidation sequence
numbers seen by each node for each interest set in order to augment
the spanning trees described above with additional hole filling to
allow nodes to locate specific invalidations they have missed.
Overall, our initial experience with using SDIMS for the
PRACTI replication system suggests that (1) the general aggregation
interface provided by SDIMS simplifies the construction of
distributed applications - given the low-level PRACTI mechanisms,
we were able to construct a basic file system that uses SDIMS for
several distinct control tasks in under two weeks, and (2) the weak
consistency guarantees provided by SDIMS meet the requirements
of this application - each node's controller effectively treats
information from SDIMS as hints, and if a contacted node does not have
the needed data, the controller retries, using SDIMS on-demand
re-aggregation to obtain a fresher hint.
Distributed heavy hitter problem: The goal of the heavy hitter
problem is to identify network sources, destinations, or protocols
that account for significant or unusual amounts of traffic. As noted
by Estan et al. [13], this information is useful for a variety of
applications such as intrusion detection (e.g., port scanning), denial of
service detection, worm detection and tracking, fair network
allocation, and network maintenance. Significant work has been done
on developing high-performance stream-processing algorithms for
identifying heavy hitters at one router, but this is just a first step;
ideally these applications would like not just one router"s views of
the heavy hitters but an aggregate view.
We use SDIMS to allow local information about heavy hitters
to be pooled into a view of global heavy hitters. For each
destination IP address IPx, a node updates the attribute (DestBW,IPx)
with the number of bytes sent to IPx in the last time window. The
aggregation function for attribute type DestBW is installed with the
Update-Up strategy and simply adds the values from child nodes.
Nodes perform a continuous probe for the global aggregate of the
attribute and raise an alarm when the global aggregate value goes
above a specified limit. Note that only nodes sending data to a
particular IP address perform probes for the corresponding attribute.
Also note that the techniques from [25] can be extended to the hierarchical
case to trade off precision for communication bandwidth.
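The heavy-hitter use reduces to a sum aggregation plus a thresholded continuous probe on the global aggregate. The update and continuousProbe calls below are stand-ins for the API of Section 3, with names, signatures, and the threshold chosen only for illustration:

```java
// Sketch: pooling per-node outbound byte counts into a global heavy-hitter alarm.
public abstract class HeavyHitterMonitor {
    static final long LIMIT_BYTES = 100L * 1024 * 1024; // illustrative threshold

    // Called once per time window for each destination this node sent traffic to.
    void reportWindow(String destIp, long bytesSent) {
        // Attribute type "DestBW", attribute name = destination IP; the installed
        // aggregation function (Update-Up) sums the values from child nodes.
        update("DestBW", destIp, bytesSent);

        // Only nodes that send to destIp watch the global aggregate for it.
        continuousProbe("DestBW", destIp, total -> {
            if (total > LIMIT_BYTES) {
                raiseAlarm(destIp, total);
            }
        });
    }

    interface AggregateCallback { void onValue(long total); }

    abstract void update(String type, String name, long value);
    abstract void continuousProbe(String type, String name, AggregateCallback callback);
    abstract void raiseAlarm(String destIp, long total);
}
```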
8. RELATED WORK
The aggregation abstraction we use in our work is heavily
influenced by the Astrolabe [38] project. Astrolabe adopts a
Propagate-All strategy and unstructured gossiping techniques to attain robustness [5].
However, any gossiping scheme requires aggressive replication of
the aggregates. While such aggressive replication is efficient for
read-dominated attributes, it incurs high message cost for attributes
with a small read-to-write ratio. Our approach provides a
flexible API for applications to set propagation rules according to their
read-to-write ratios. Other closely related projects include
Willow [39], Cone [4], DASIS [1], and SOMO [45]. Willow, DASIS
and SOMO build a single tree for aggregation. Cone builds a tree
per attribute and requires a total order on the attribute values.
Several academic [15, 21, 42] and commercial [37] distributed
monitoring systems have been designed to monitor the status of
large networked systems. Some of them are centralized where all
the monitoring data is collected and analyzed at a central host.
Ganglia [15, 23] uses a hierarchical system where the attributes are
replicated within clusters using multicast and then cluster
aggregates are further aggregated along a single tree. Sophia [42] is
a distributed monitoring system designed with a declarative logic
programming model where the location of query execution is both
explicit in the language and can be calculated during evaluation.
This research is complementary to our work. TAG [21] collects
information from a large number of sensors along a single tree.
The observation that DHTs internally provide a scalable forest
of reduction trees is not new. Plaxton et al.'s [28] original paper
describes not a DHT, but a system for hierarchically aggregating and
querying object location data in order to route requests to nearby
copies of objects. Many systems - building upon both Plaxton's
bit-correcting strategy [32, 46] and upon other strategies [24, 29,
35] - have chosen to hide this power and export a simple and
general distributed hash table abstraction as a useful building block for
a broad range of distributed applications. Some of these systems
internally make use of the reduction forest not only for routing but
also for caching [32], but for simplicity, these systems do not
generally export this powerful functionality in their external interface.
Our goal is to develop and expose the internal reduction forest of
DHTs as a similarly general and useful abstraction.
Although object location is a predominant target application for
DHTs, several other applications like multicast [8, 9, 33, 36] and
DNS [11] are also built using DHTs. All these systems implicitly
perform aggregation on some attribute, and each one of them must
be designed to handle any reconfigurations in the underlying DHT.
With the aggregation abstraction provided by our system, designing
and building such applications becomes easier.
Internal DHT trees typically do not satisfy domain locality
properties required in our system. Castro et al. [7] and Gummadi et
al. [17] point out the importance of path convergence from the
perspective of achieving efficiency and investigate the performance of
Pastry and other DHT algorithms, respectively. SkipNet [18]
provides domain-restricted routing where a key search is limited to the
specified domain. This interface can be used to ensure path
convergence by searching in the lowest domain and moving up to the next
domain when the search reaches the root in the current domain.
Although this strategy guarantees path convergence, it loses the
aggregation tree abstraction property of DHTs, as the domain-constrained
routing might touch a node more than once (as it searches forward
and then backward to stay within a domain).
9. CONCLUSIONS
This paper presents a Scalable Distributed Information
Management System (SDIMS) that aggregates information in large-scale
networked systems and that can serve as a basic building block
for a broad range of applications. For large-scale systems,
hierarchical aggregation is a fundamental abstraction for scalability.
We build our system by extending ideas from Astrolabe and DHTs
to achieve (i) scalability with respect to both nodes and attributes
through a new aggregation abstraction that helps leverage DHT"s
internal trees for aggregation, (ii) flexibility through a simple API
that lets applications control propagation of reads and writes, (iii)
administrative isolation through simple augmentations of current
DHT algorithms, and (iv) robustness to node and network
reconfigurations through lazy reaggregation, on-demand reaggregation,
and tunable spatial replication.
Acknowledgements
We are grateful to J.C. Browne, Robbert van Renesse, Amin
Vahdat, Jay Lepreau, and the anonymous reviewers for their helpful
comments on this work.
10. REFERENCES
[1] K. Albrecht, R. Arnold, M. Gahwiler, and R. Wattenhofer.
Join and Leave in Peer-to-Peer Systems: The DASIS
approach. Technical report, CS, ETH Zurich, 2003.
[2] G. Back, W. H. Hsieh, and J. Lepreau. Processes in KaffeOS:
Isolation, Resource Management, and Sharing in Java. In
Proc. OSDI, Oct 2000.
[3] G. Banga, P. Druschel, and J. Mogul. Resource Containers:
A New Facility for Resource Management in Server
Systems. In OSDI99, Feb. 1999.
[4] R. Bhagwan, P. Mahadevan, G. Varghese, and G. M. Voelker.
Cone: A Distributed Heap-Based Approach to Resource
Selection. Technical Report CS2004-0784, UCSD, 2004.
[5] K. P. Birman. The Surprising Power of Epidemic
Communication. In Proceedings of FuDiCo, 2003.
[6] B. Bloom. Space/time tradeoffs in hash coding with
allowable errors. Comm. of the ACM, 13(7):422-425, 1970.
[7] M. Castro, P. Druschel, Y. C. Hu, and A. Rowstron.
Exploiting Network Proximity in Peer-to-Peer Overlay
Networks. Technical Report MSR-TR-2002-82, MSR.
[8] M. Castro, P. Druschel, A.-M. Kermarrec, A. Nandi,
A. Rowstron, and A. Singh. SplitStream: High-bandwidth
Multicast in a Cooperative Environment. In SOSP, 2003.
[9] M. Castro, P. Druschel, A.-M. Kermarrec, and A. Rowstron.
SCRIBE: A Large-scale and Decentralised Application-level
Multicast Infrastructure. IEEE JSAC (Special issue on
Network Support for Multicast Communications), 2002.
[10] J. Challenger, P. Dantzig, and A. Iyengar. A scalable and
highly available system for serving dynamic data at
frequently accessed web sites. In Proceedings of
ACM/IEEE Supercomputing '98 (SC98), Nov. 1998.
[11] R. Cox, A. Muthitacharoen, and R. T. Morris. Serving DNS
using a Peer-to-Peer Lookup Service. In IPTPS, 2002.
[12] M. Dahlin, L. Gao, A. Nayate, A. Venkataramani,
P. Yalagandula, and J. Zheng. PRACTI replication for
large-scale systems. Technical Report TR-04-28, The
University of Texas at Austin, 2004.
[13] C. Estan, G. Varghese, and M. Fisk. Bitmap algorithms for
counting active flows on high speed links. In Internet
Measurement Conference 2003, 2003.
[14] Y. Fu, J. Chase, B. Chun, S. Schwab, and A. Vahdat.
SHARP: An architecture for secure resource peering. In
Proc. SOSP, Oct. 2003.
[15] Ganglia: Distributed Monitoring and Execution System.
http://ganglia.sourceforge.net.
[16] S. Gribble, A. Halevy, Z. Ives, M. Rodrig, and D. Suciu.
What Can Peer-to-Peer Do for Databases, and Vice Versa? In
Proceedings of the WebDB, 2001.
[17] K. Gummadi, R. Gummadi, S. D. Gribble, S. Ratnasamy,
S. Shenker, and I. Stoica. The Impact of DHT Routing
Geometry on Resilience and Proximity. In SIGCOMM, 2003.
[18] N. J. A. Harvey, M. B. Jones, S. Saroiu, M. Theimer, and
A. Wolman. SkipNet: A Scalable Overlay Network with
Practical Locality Properties. In USITS, March 2003.
[19] R. Huebsch, J. M. Hellerstein, N. Lanham, B. T. Loo,
S. Shenker, and I. Stoica. Querying the Internet with PIER.
In Proceedings of the VLDB Conference, May 2003.
[20] C. Intanagonwiwat, R. Govindan, and D. Estrin. Directed
diffusion: a scalable and robust communication paradigm for
sensor networks. In MobiCom, 2000.
[21] S. R. Madden, M. J. Franklin, J. M. Hellerstein, and
W. Hong. TAG: a Tiny AGgregation Service for ad-hoc
Sensor Networks. In OSDI, 2002.
[22] D. Malkhi. Dynamic Lookup Networks. In FuDiCo, 2002.
[23] M. L. Massie, B. N. Chun, and D. E. Culler. The ganglia
distributed monitoring system: Design, implementation, and
experience. In submission.
[24] P. Maymounkov and D. Mazieres. Kademlia: A Peer-to-peer
Information System Based on the XOR Metric. In
Proceedings of the IPTPS, March 2002.
[25] C. Olston and J. Widom. Offering a precision-performance
tradeoff for aggregation queries over replicated data. In
VLDB, pages 144-155, Sept. 2000.
[26] K. Petersen, M. Spreitzer, D. Terry, M. Theimer, and
A. Demers. Flexible Update Propagation for Weakly
Consistent Replication. In Proc. SOSP, Oct. 1997.
[27] Planetlab. http://www.planet-lab.org.
[28] C. G. Plaxton, R. Rajaraman, and A. W. Richa. Accessing
Nearby Copies of Replicated Objects in a Distributed
Environment. In ACM SPAA, 1997.
[29] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and
S. Shenker. A Scalable Content Addressable Network. In
Proceedings of ACM SIGCOMM, 2001.
[30] S. Ratnasamy, S. Shenker, and I. Stoica. Routing Algorithms
for DHTs: Some Open Questions. In IPTPS, March 2002.
[31] T. Roscoe, R. Mortier, P. Jardetzky, and S. Hand. InfoSpect:
Using a Logic Language for System Health Monitoring in
Distributed Systems. In Proceedings of the SIGOPS
European Workshop, 2002.
[32] A. Rowstron and P. Druschel. Pastry: Scalable, Distributed
Object Location and Routing for Large-scale Peer-to-peer
Systems. In Middleware, 2001.
[33] S. Ratnasamy, M. Handley, R. Karp, and S. Shenker.
Application-level Multicast using Content-addressable
Networks. In Proceedings of the NGC, November 2001.
[34] W. Stallings. SNMP, SNMPv2, and CMIP. Addison-Wesley,
1993.
[35] I. Stoica, R. Morris, D. Karger, F. Kaashoek, and
H. Balakrishnan. Chord: A scalable Peer-To-Peer lookup
service for internet applications. In ACM SIGCOMM, 2001.
[36] S. Zhuang, B. Zhao, A. Joseph, R. Katz, and J. Kubiatowicz.
Bayeux: An Architecture for Scalable and Fault-tolerant
Wide-Area Data Dissemination. In NOSSDAV, 2001.
[37] IBM Tivoli Monitoring.
www.ibm.com/software/tivoli/products/monitor.
[38] R. van Renesse, K. P. Birman, and W. Vogels. Astrolabe: A
Robust and Scalable Technology for Distributed System
Monitoring, Management, and Data Mining. TOCS, 2003.
[39] R. van Renesse and A. Bozdog. Willow: DHT, Aggregation,
and Publish/Subscribe in One Protocol. In IPTPS, 2004.
[40] A. Venkataramani, P. Weidmann, and M. Dahlin. Bandwidth
constrained placement in a WAN. In PODC, Aug. 2001.
[41] A. Venkataramani, P. Yalagandula, R. Kokku, S. Sharif, and
M. Dahlin. Potential costs and benefits of long-term
prefetching for content-distribution. Elsevier Computer
Communications, 25(4):367-375, Mar. 2002.
[42] M. Wawrzoniak, L. Peterson, and T. Roscoe. Sophia: An
Information Plane for Networked Systems. In HotNets-II,
2003.
[43] R. Wolski, N. Spring, and J. Hayes. The network weather
service: A distributed resource performance forecasting
service for metacomputing. Journal of Future Generation
Computing Systems, 15(5-6):757-768, Oct 1999.
[44] P. Yalagandula and M. Dahlin. SDIMS: A scalable
distributed information management system. Technical
Report TR-03-47, Dept. of Computer Sciences, UT Austin,
Sep 2003.
[45] Z. Zhang, S.-M. Shi, and J. Zhu. SOMO: Self-Organized
Metadata Overlay for Resource Management in P2P DHT. In
IPTPS, 2003.
[46] B. Y. Zhao, J. D. Kubiatowicz, and A. D. Joseph. Tapestry:
An Infrastructure for Fault-tolerant Wide-area Location and
Routing. Technical Report UCB/CSD-01-1141, UC
Berkeley, Apr. 2001.
390 | autonomous dht;temporal heterogeneity;administrative isolation;distribute hash table;lazy re-aggregation;write-dominated attribute;large-scale networked system;update-upk-downj strategy;information management system;freepastry framework;tunable spatial replication;distributed hash table;aggregation management layer;distributed operating system backbone;availability;read-dominated attribute;network system monitor;virtual node;networked system monitoring;eventual consistency |
train_C-61 | Authority Assignment in Distributed Multi-Player Proxy-based Games | We present a proxy-based gaming architecture and authority assignment within this architecture that can lead to better game playing experience in Massively Multi-player Online games. The proposed game architecture consists of distributed game clients that connect to game proxies (referred to as communication proxies) which forward game related messages from the clients to one or more game servers. Unlike proxy-based architectures that have been proposed in the literature where the proxies replicate all of the game state, the communication proxies in the proposed architecture support clients that are in proximity to it in the physical network and maintain information about selected portions of the game space that are relevant only to the clients that they support. Using this architecture, we propose an authority assignment mechanism that divides the authority for deciding the outcome of different actions/events that occur within the game between client and servers on a per action/event basis. We show that such division of authority leads to a smoother game playing experience by implementing this mechanism in a massively multi-player online game called RPGQuest. In addition, we argue that cheat detection techniques can be easily implemented at the communication proxies if they are made aware of the game-play mechanics. | 1. INTRODUCTION
In Massively Multi-player On-line Games (MMOG), game
clients who are positioned across the Internet connect to
a game server to interact with other clients in order to be
part of the game. In current architectures, these
interactions are direct in that the game clients and the servers
exchange game messages with each other. In addition, current
MMOGs delegate all authority to the game server to make
decisions about the results pertaining to the actions that
game clients take and also to decide upon the result of other
game related events. Such centralized authority has been
implemented with the claim that this improves the security
and consistency required in a gaming environment.
A number of works have shown the effect of network latency
on distributed multi-player games [1, 2, 3, 4]. It has been
shown that network latency has real impact on practical
game playing experience [3, 5]. Some types of games can
function quite well even in the presence of large delays. For
example, [4] shows that in a modern RPG called Everquest
2, the breakpoint of the game when adding artificial
latency was 1250ms. This is accounted to the fact that the
combat system used in Everquest 2 is queueing based and
has very low interaction. For example, a player queues up
4 or 5 spells they wish to cast, each of these spells take 1-2
seconds to actually perform, giving the server plenty of time
to validate these actions. But there are other games such as
FPS games that break even in the presence of moderate
network latencies [3, 5]. Latency compensation techniques have
been proposed to alleviate the effect of latency [1, 6, 7] but
it is obvious that if MMOGs are to increase in
interactivity and speed, more architectures will have to be developed
that address responsiveness, accuracy and consistency of the
gamestate.
In this paper, we propose two important features that would
make game playing within MMOGs more responsive for
movement and scalable. First, we propose that centralized
server-based architectures be made hierarchical through the
introduction of communication proxies so that game updates
made by clients that are time sensitive, such as movement,
can be more efficiently distributed to other players within
their game-space. Second, we propose that assignment of
authority in terms of who makes the decision on client
actions such as object pickups and hits, and collisions between
players, be distributed between the clients and the servers in
order to distribute the computing load away from the central
server. In order to move towards more complex real-time
networked games, we believe that definitions of authority
must be refined.
Most currently implemented MMOGs have game servers
that have almost absolute authority. We argue that there is
no single consistent view of the virtual game space that can
be maintained on any one component within a network that
has significant latency, such as the one that many MMOG
players would experience. We believe that in most cases, the
client with the most accurate view of an entity is the best
suited to make decisions for that entity when the causality
of that action will not immediately affect any other
players. In this paper we define what it means to have authority
within the context of events and objects in a virtual game
space. We then show the benefits of delegating authority
for different actions and game events between the clients
and server.
In our model, the game space consists of game clients
(representing the players) and objects that they control. We
divide the client actions and game events (we will
collectively refer to these as events) such as collisions, hits etc.
into three different categories, a) events for which the game
client has absolute authority, b) events for which the game
server has absolute authority, and c) events for which the
authority changes dynamically from client to the server and
vice-versa. Depending on who has the authority, that
entity will make decisions on the events that happen within a
game space. We propose that authority for all decisions that
pertain to a single player or object in the game that neither
affects the other players or objects, nor are affected by the
actions of other players be delegated to that player"s game
client. These type of decisions would include collision
detection with static objects within the virtual game space and
hit detection with linear path bullets (whose trajectory is
fixed and does not change with time) fired by other players.
Authority for decisions that could be affected by two or more
players should be delegated to the impartial central server,
in some cases, to ensure that no conflicts occur and in other
cases can be delegated to the clients responsible for those
players. For example, collision detection of two players that
collide with each other and hit detection of non-linear
bullets (that changes trajectory with time) should be delegated
to the server. Decision on events such as item pickup (for
example, picking up items in a game to accumulate points)
should be delegated to a server if there are multiple
players within close proximity of an item and any one of the
players could succeed in picking the item; for item pick-up
contention where the client realizes that no other player,
except its own player, is within a certain range of the item,
the client could be delegated the responsibility to claim the
item. The client"s decision can always be accurately verified
by the server.
In summary, we argue that while current authority models
that only delegate responsibility to the server to make
authoritative decisions on events is more secure than allowing
the clients to make the decisions, these types of models add
undesirable delays to events that could very well be decided
by the clients without any inconsistency being introduced
into the game. As networked games become more complex,
our architecture will become more applicable. This
architecture is applicable for massively multiplayer games where
the speed and accuracy of game-play are a major concern
while consistency between player game-states is still desired.
We propose that a mixed authority assignment mechanism
such as the one outlined above be implemented in high
interaction MMOGs.
Our paper has the following contributions. First we propose
an architecture that uses communication proxies to enable
clients to connect to the game server. A communication
proxy in the proposed architecture maintains information
only about portions of the game space that are relevant to
clients connected to it and is able to process the movement
information of objects and players within these portions.
In addition, it is capable of multicasting this information
only to a relevant subset of other communication proxies.
These functionalities of a communication proxy leads to a
decrease in latency of event update and subsequently, better
game playing experience. Second, we propose a mixed
authority assignment mechanism as described above that
improves game playing experience. Third, we implement the
proposed mixed authority assignment mechanism within a
MMOG called RPGQuest [8] to validate its viability within
MMOGs.
In Section 2, we describe the proxy-based game
architecture in more detail and illustrate its advantages. In
Section 3, we provide a generic description of the mixed
authority assignment mechanism and discuss how it improves
game playing experience. In Section 4, we show the
feasibility of implementing the proposed mixed authority
assignment mechanism within existing MMOGs by describing a
proof-of-concept implementation within an existing MMOG
called RPGQuest. Section 5 discusses related work. In
Section 6, we present our conclusions and discuss future work.
2. PROXY-BASED GAME ARCHITECTURE
Massively Multi-player Online Games (MMOGs) usually
consist of a large game space in which the players and
different game objects reside and move around and interact with
each-other. State information about the whole game space
could be kept in a single central server which we would
refer to as a Central-Server Architecture. But to alleviate
the heavy demand on the processing for handling the large
player population and the objects in the game in real-time, a
MMOG is normally implemented using a distributed server
architecture where the game space is further sub-divided
into regions so that each region has relatively smaller
number of players and objects that can be handled by a single
server. In other words, the different game regions are hosted
by different servers in a distributed fashion. When a player
moves out of one game region to another adjacent one, the
player must communicate with a different server (than it was
currently communicating with) hosting the new region. The
servers communicate with one another to hand off a player
or an object from one region to another. In this model, the
player on the client machine has to establish multiple
gaming sessions with different servers so that it can roam in the
entire game space.
We propose a communication proxy based architecture where
a player connects to a (geographically) nearby proxy instead
of connecting to a central server in the case of a
centralserver architecture or to one of the servers in case of
dis2 The 5th Workshop on Network & System Support for Games 2006 - NETGAMES 2006
tributed server architecture. In the proposed architecture,
players who are close by geographically join a particular
proxy. The proxy then connects to one or more game servers,
as needed by the set of players that connect to it and
maintains persistent transport sessions with these server. This
alleviates the problem of each player having to connect
directly to multiple game servers, which can add extra
connection setup delay. Introduction of communication proxies
also mitigates the overhead of a large number of transport
sessions that must be managed and reduces required network
bandwidth [9] and processing at the game servers both with
central server and distributed server architectures. With
central server architectures, communication proxies reduce
the overhead at the server by not requiring the server to
terminate persistent transport sessions from every one of the
clients. With distributed-server architectures, additionally,
communication proxies eliminate the need for the clients to
maintain persistent transport sessions to every one of the
servers. Figure 1 shows the proposed architecture.
Figure 1: Architecture of the gaming environment.
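To make the proxy's role concrete, the following sketch shows the bookkeeping a communication proxy might keep: which region each hosted client is in, which server owns each region, and a pool of persistent server sessions that is reused across clients. All class, method, and field names here are hypothetical; the paper does not prescribe an implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a communication proxy's bookkeeping (not from the paper).
public abstract class CommunicationProxy {
    private final Map<Integer, String> regionToServer = new HashMap<>(); // region id -> server address
    private final Map<String, Object> serverSessions = new HashMap<>();  // server -> persistent session
    private final Map<String, Integer> clientRegion = new HashMap<>();   // client id -> current region

    // A nearby client joins this proxy while located in regionId.
    void registerClient(String clientId, int regionId) {
        clientRegion.put(clientId, regionId);
        ensureSession(regionToServer.get(regionId));
    }

    // Reuse the persistent transport session to the region's server, opening it lazily,
    // so that clients never have to connect to the game servers directly.
    private void ensureSession(String serverAddress) {
        serverSessions.computeIfAbsent(serverAddress, this::openSession);
    }

    abstract Object openSession(String serverAddress);
}
```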
Note that the communication proxies need not be cognizant
of the game. They host a number of players and inform the
servers which players are hosted by the proxy in question.
Also note that the players hosted by a proxy may not be in
the same game space. That is, a proxy hosts players that
are geographically close to it, but the players themselves
can reside in different parts of the game space. The proxy
communicates with the servers responsible for maintaining
the game spaces subscribed by the different players. The
proxies communicate with one another in a peer-to-peer to
fashion. The responsiveness of the game can be improved
for updates that do not need to wait on processing at a
central authority. In this way, information about players can
be disseminated faster before even the game server gets to
know about it. This definitely improves the responsiveness
of the game. However, it ignores consistency that is critical
in MMORPGs. The notion that an architecture such as this
one can still maintain temporal consistency will be discussed
in detail in Section 3.
Figure 2 shows and example of the working principle of the
proposed architecture. Assume that the game space is
divided into 9 regions and there are three servers responsible
for managing the regions. Server S1 owns regions 1 and 2,
S2 manages 4, 5, 7, and 8, and S3 is responsible for 3, 6 and
9.
Figure 2: An example.
There are four communication proxies placed in
geographically distant locations. Players a, b, c join proxy P1, proxy P2
hosts players d, e, f, players g, h are with proxy P3, whereas
players i, j, k, l are with proxy P4. Underneath each player,
the figure shows which game region the player is located
currently. For example, players a, b, c are in regions 1, 2, 6,
respectively. Therefore, proxy P1 must communicate with
servers S1 and S3. The reader can verify the rest of the links
between the proxies and the servers.
Players can move within the region and between regions.
Player movement within a region will be tracked by the
proxy hosting the player and this movement information
(for example, the player"s new coordinates) will be
multicast to a subset of other relevant communication proxies
directly. At the same time, this information will be sent
to the server responsible for that region with the indication
that this movement has already been communicated to all
the other relevant communication proxies (so that the server
does not have to relay this information to all the proxies).
For example, if player a moves within region 1, this
information will be communicated by proxy P1 to server S1 and
multicast to proxies P3 and P4. Note that proxies that do
not keep state information about this region at this point
in time (because they do not have any clients within that
region) such as P2 do not have to receive this movement
information.
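A minimal sketch of this fast-path handling at the hosting proxy is given below. The function, parameter, and message-field names are illustrative assumptions, not part of any existing protocol; send(destination, message) stands in for whatever transport the proxy uses.

```python
def handle_intra_region_move(player, new_coords, region, region_owner,
                             proxies_tracking, send):
    """Fast path: the hosting proxy disseminates an intra-region move itself."""
    update = {"player": player, "coords": new_coords, "region": region}

    # Multicast directly to the proxies that currently track this region
    # (e.g. P3 and P4 for region 1); proxies with no players in the region
    # (e.g. P2) receive nothing.
    for peer in proxies_tracking[region]:
        send(peer, update)

    # Notify the owning server, flagging that dissemination is already done
    # so the server does not relay the update to the proxies again.
    send(region_owner[region], dict(update, already_disseminated=True))
```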
If a player is at the boundary of a region and moves into
a new region, there are two possibilities. The first
possibility is that the proxy hosting the player can identify the
region into which the player is moving (based on the
trajectory information) because it is also maintaining state
information about the new region at that point in time. In
this case, the proxy can update movement information
directly at the other relevant communication proxies and also
send information to the appropriate server informing of the
movement (this may require handoff between servers as we
will describe). Consider the scenario where player a is at
the boundary of region 1 and proxy P1 can identify that the
player is moving into region 2. Because proxy P1 is currently
keeping state information about region 2, it can inform all
the other relevant communication proxies (in this example,
no other proxy maintains information about region 2 at this
point and so no update needs to be sent to any of the other
proxies) about this movement and then inform the server
independently. In this particular case, server S1 is responsible
for region 2 as well and so no handoff between servers would
be needed. Now consider another scenario where player j
moves from region 9 to region 8 and that proxy P4 is able
to identify this movement. Again, because proxy P4
maintains state information about region 8, it can inform any
other relevant communication proxies (again, none in this
example) about this movement. But now, regions 9 and 8
are managed by different servers (servers S3 and S2
respectively) and thus a hand-off between these servers is needed.
We propose that in this particular scenario, the handoff be
managed by the proxy P4 itself. When the proxy sends
a movement update to server S3 (informing the server that
the player is moving out of its region), it would also send
a message to server S2 informing the server of the presence
and location of the player in one of its regions.
In the intra-region and inter-region scenarios described above,
the proxy is able to manage movement related information,
update only the relevant communication proxies about the
movement, update the servers with the movement and
enable handoff of a player between the servers if needed. In
this way, the proxy performs movement updates without
involving the servers in any way in this time-critical function,
thereby speeding up the game and improving the game
playing experience for the players. We consider this the fast
path for movement update. We envision the proxies to be
just communication proxies in that they do not know about
the workings of specific games. They merely process
movement information of players and objects and communicate
this information to the other proxies and the servers. If the
proxies are made more intelligent in that they understand
more of the game logic, it is possible for them to quickly
check on claims made by the clients and mitigate cheating.
The servers could perform the same functionality but with
more delay. Even without being aware of game logic, the
proxies can provide additional functionalities such as
timestamping messages to make the game playing experience
more accurate [10] and fair [11].
The second possibility is that a player
moves from one region to another but the proxy that is
hosting the player is not able to determine the region into
which the player is moving, either because a) the proxy does not
maintain state information about all the regions into which the
player could potentially move, or b) the proxy cannot
determine which region the player will move into (even if it
maintains state information about all these regions). In this
case, we propose that the proxy not be responsible for
making the movement decision, but instead communicate the
movement indication to the server responsible for the region
within which the player is currently located. The server will
then make the movement decision and then a) inform all
the proxies including the proxy hosting the player, and b)
initiate handoff with another server if the player moves into
a region managed by another server. We consider this the
slow path for movement update in that the servers need
to be involved in determining the new position of the player.
In the example, assume that player a moves from region 1
to region 4. Proxy P1 does not maintain state information
about region 4 and thus would pass the movement
information to server S1. The server will identify that the player
has moved into region 4 and would inform proxy P1 as well
as proxy P2 (which is the only other proxy that maintains
information about region 4 at this point in time). Server S1
will also initiate a handoff of player a with server S2. Proxy
P1 will now start maintaining state information about
region 4 because one of its hosted players, player a has moved
into this region. It will do so by requesting and receiving
the current state information about region 4 from server S2
which is responsible for this region.
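The choice between the fast and slow paths for a region transition can be summarized by the sketch below. The names are illustrative; the proxy takes the fast path only when it can both predict the destination region and already tracks its state, and otherwise defers the decision to the server owning the current region.

```python
def handle_boundary_move(player, old_region, predicted_region, tracked_regions,
                         region_owner, proxies_tracking, send):
    """Fast/slow path choice for an inter-region move. `predicted_region` is
    None when the proxy cannot resolve the move from the trajectory."""
    if predicted_region is not None and predicted_region in tracked_regions:
        # Fast path: the proxy resolves the move and updates relevant proxies.
        for peer in proxies_tracking[predicted_region]:
            send(peer, {"player": player, "region": predicted_region})
        old_server = region_owner[old_region]
        new_server = region_owner[predicted_region]
        send(old_server, {"player": player, "left_region": old_region})
        if new_server != old_server:
            # Proxy-managed handoff, as when player j moves from region 9 to 8.
            send(new_server, {"player": player, "entered_region": predicted_region})
    else:
        # Slow path: the server owning the current region makes the decision,
        # informs the relevant proxies, and initiates any handoff itself.
        send(region_owner[old_region], {"player": player, "needs_decision": True})
```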
Thus, a proxy architecture allows us to make use of faster
movement updates through the fast path through a proxy if
and when possible as opposed to conventional server-based
architectures that always have to use the slow path through
the server for movement updates. By selectively maintaining
relevant regional game state information at the proxies, we
are able to achieve this capability in our architecture without
the need for maintaining the complete game state at every
proxy.
3. ASSIGNMENT OF AUTHORITY
As an MMOG is played, the players and the game objects that
are part of the game, continually change their state. For
example, consider a player who owns a tank in a battlefield
game. Based on the actions of the player, the tank changes its
position in the game space, the amount of ammunition the
tank contains changes as it fires at other tanks, the tank
collects bonus firing power based on successful hits, etc.
Similarly, objects in the battlefield, such as flags and buildings,
change their state when a flag is picked up by a player (i.e., a
tank) or a building is destroyed by firing at it. That is,
some decision has to be made on the state of each player
and object as the game progresses. Note that the state of
a player and/or object can contain several parameters (e.g.,
position, amount of ammunition, fuel storage, points
collected, etc), and if any of the parameters changes, the state
of the player/object changes.
In a client-server based game, the server controls all the
players and the objects. When a player at a client machine
makes a move, the move is transmitted to the server over
the network. The server then analyzes the move, and if
the move is a valid one, changes the state of the player at
the server and informs the client of the change. The client
subsequently updates the state of the player and renders
the player at the new location. In this case the authority to
change the state of the player resides with the server entirely
and the client simply follows what the server instructs it to
do.
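A compact sketch of this server-authoritative flow is given below; the validation predicate and message fields are illustrative placeholders rather than the mechanics of any particular game.

```python
def server_handle_move(state, player, requested_state, is_valid_move, send):
    """Fully server-authoritative flow: the server validates the move, updates
    its own copy of the state, and tells the client what to render."""
    if is_valid_move(state, player, requested_state):
        state[player] = requested_state                    # authoritative update
        send(player, {"state": requested_state, "ack": True})
    else:
        # Rejected: the client must fall back to the server's view of the player.
        send(player, {"state": state[player], "ack": False})
    return state
```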
Most of the current first person shooter (FPS) games and
role playing games (RPG) fall under this category. In
current FPS games, much like in RPG games, the client is not
trusted. All moves and actions that it makes are validated.
If a client detects that it has hit another player with a bullet,
it proceeds assuming that it is a hit. Meanwhile, an update
is sent to the server and the server will send back a message
either affirming or denying that the player was hit. If the
remote player was not hit, then the client will know that it
did not actually make the shot. If it did make the hit, an
update will also be sent from the server to the other clients
informing them that the other player was hit. A difference
that occurs in some RPGs is that they use very dumb client
programs. Some RPGs do not maintain state information
at the client and therefore, cannot predict anything such as
hits at the client. State information is not maintained
because the client is not trusted with it. In RPGs, a cheating
player with a hacked game client can use state information
stored at the client to gain an advantage and find things
such as hidden treasure or monsters lurking around the
corner. This is a reason why most MMORPGs do not send much
state information to the client, which in turn makes the game
less responsive and less interactive
than FPS games.
In a peer-to-peer game, each peer controls the player and
object that it owns. When a player makes a move, the
peer machine analyzes the move and if it is a valid one,
changes the state of the player and places the player in new
position. Afterwards, the owner peer informs all other peers
about the new state of the player and the rest of the peers
update the state of the player. In this scenario, the authority
to change the state of the player is given to the owning peer
and all other peers simply follow the owner.
For example, Battle Zone Flag (BzFlag) [12] is a
multiplayer client-server game where the client has all authority
for making decisions. It was built primarily with LAN play
in mind and cheating as an afterthought. Clients in BzFlag
are completely authoritative and when they detect that they
were hit by a bullet, they send an update to the server which
simply forwards the message along to all other players. The
server performs no validation at all.
Each of the above two traditional approaches has its own set
of advantages and disadvantages. The first approach, which
we will refer to as server authoritative henceforth, uses a
centralized method to assign authority. While a centralized
approach can keep the state of the game (i.e., state of all the
players and objects) consistent across any number of client
machines, it suffers from delayed response in game-play as
any move that a player at the client machine makes must go
through one round-trip delay to the server before it can take
effect on the client's screen. In addition to the round-trip
delay, there is also queuing delay in processing the state change
request at the server. This can result in additional
processing delay, and can also bring in severe scalability problems
if there are a large number of clients playing the game. One
definite advantage of the server authoritative approach is
that it can easily detect if a client is cheating and can take
appropriate action to prevent cheating.
The peer-to-peer approach, henceforth referred to as client
authoritative, can make games very responsive. However,
it can make the game state inconsistent for a few players
and a tie break (or roll back) has to be performed to bring the
game back to a consistent state. Neither tie break nor roll
back is a desirable feature of online gaming. For example,
assume that for a game, the goal of each player is to collect
as many flags as possible from the game space (e.g. BzFlag).
When two players in proximity try to collect the same flag
at the same time, depending on the algorithm used at the
client-side, both clients may determine that their player is the winner,
although in reality only one player can pick the flag up. Both
players will see on their screens that they have won. This
makes the state of the game inconsistent. Ways to recover
from this inconsistency are to give the flag to only one player
(using some tie break rule) or roll the game back so that the
players can try again. Neither of these two approaches is
a pleasing experience for online gaming. Another problem
with the client authoritative approach is that of cheating by
clients, as there is no cross-checking of the validity of the
state changes authorized by the owner client.
We propose to use a hybrid approach to assign the authority
dynamically between the client and the server. That is, we
assign the authority to the client to make the game
responsive, and use the server's authority only when the client's
individual authoritative decisions can make the game state
inconsistent. By moving the authority of time critical
updates to the client, we avoid the added delay caused by
requiring the server to validate these updates. For example,
in the flag pickup game, the clients will be given the
authority to pick up flags only when no other player is within
a range from which it could imminently pick up the same flag. Only
when two or more players are close enough that more than
one player may claim to have picked up a flag does the authority
for movement and flag pickup go to the central server,
so that the game state does not become inconsistent. We
believe that in a large game-space where a player is often
in a very wide open and sparsely populated area such as
those often seen in the game Second Life [13], this hybrid
architecture would be very beneficial because of the long
periods during which the client would have authority to send movement
updates for itself. This has two advantages over the
central-authority approach: it distributes the processing load down
to the clients for the majority of events, and it allows for a
more responsive game that does not need to wait on a server
for validation.
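The dynamic authority rule for flag pickup can be sketched as follows. The contention radius is an assumed parameter, since the text does not specify how proximity is measured.

```python
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def pickup_authority(flag_pos, requester, player_positions, radius=5.0):
    """Return which side may decide a flag pickup: the client keeps authority
    only while no other player could imminently claim the same flag."""
    rivals = [p for p, pos in player_positions.items()
              if p != requester and distance(pos, flag_pos) <= radius]
    return "client" if not rivals else "server"

# Only the requester is near the flag, so its client decides locally:
# pickup_authority((0, 0), "a", {"a": (1, 0), "b": (40, 9)})  ->  "client"
```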
We believe that our notion of authority can be used to
develop a globally consistent state model of the evolution of
a game. Fundamentally, the consistent state of the system
is the one that is defined by the server. However, if local
authority is delegated to the client, in this case, the client's
state is superimposed on the server's state to determine the
correct global state. For example, if the client is
authoritative with respect to movement of a player, then the
trajectory of the player is the true trajectory and must
replace the server's view of the player's trajectory. Note that
this could be problematic and lead to temporal
inconsistency only if, for example, two or more entities are moving
in the same region and can interact with each other. In
this situation, the client authority must revert to the server
and the server would then make decisions. Thus, the client
is only authoritative in situations where there is no
potential to imminently interact with other players. We believe
that in complex MMOGs, when allowing more rapid
movement, it will still be the case that local authority is possible
for significant spans of game time. Note that it might also
be possible to minimize the occurrences of the Dead Man
Shooting problem described in [14]. This could be done by
allowing the client to be authoritative for more actions such
as its player's own death and disallowing other players from
making preemptive decisions about a remote player.
One reason why the client-server based architecture has gained
popularity is the belief that the fastest route to the other
clients is through the server. While this may be true, we aim
to create a new architecture where decisions do not always
have to be made at the game server and the fastest route to
a client is actually through a communication proxy located
close to the client. That is, the shortest distance in our
architecture is not through the game server but through the
communication proxy. After a client makes an action such
as movement, it will simultaneously distribute it directly to
the clients and the game server by way of the
communications proxy. We note, however, that our architecture is not
practical for a game where players set up their own
servers in an ad-hoc fashion and do not have access to
proxies at the various ISPs. This proxy and distributed authority
architecture can be used to its full potential only when the
proxies can be placed at strategic places within the main
ISPs and evenly distributed geographically.
Our game architecture does not assume that the client
cannot be trusted. We design our architecture on the
assumption that there will be sufficient cheat deterrence and
detection mechanisms present so that it will be both undesirable
and very difficult to cheat [15]. In our proposed approach,
we can make the games cheat resilient by using the
proxy-based architecture when client authoritative decisions take
place. In order to achieve this, the proxies have to be game
cognizant so that decisions made by a client can be cross
checked by a proxy that the client connects to. For
example, assume that in a game a plane controlled by a client
moves in the game space. It is not possible for the plane to
go through a building unharmed. In a client authoritative
mode, it is possible for the client to cheat by maneuvering
the plane through a building and claiming the plane to be
unharmed. However, when such a move is published by the
client, the proxy, being aware of the game space that the
plane is in, can quickly check that the client has misused
its authority and block such a move. This allows us
to safely distribute decision-making authority to the clients.
In the following section we use a multiplayer game called
RPGQuest to implement different authoritative schemes and
discuss our experience with the implementation. Our
implementation shows the viability of our proposed solution.
4. IMPLEMENTATION EXPERIENCE
We have experimented with the authority assignment
mechanism described in the last section by implementing the
mechanisms in a game called RPGQuest. A screen shot from
this game is shown in Figure 3. The purpose of the
implementation is to test its feasibility in a real game. RPGQuest
is a basic first person game where the player can move
around a three dimensional environment. Objects are placed
within the game world and players gain points for each
object that is collected. The game clients connect to a game
server which allows many players to coexist in the same
game world. The basic functionality of this game is
representative of current online first person shooter and role playing
games. The game uses the DirectX 8 graphics API and
DirectPlay networking API. In this section we will discuss the
three different versions of the game that we experimented
with.
Figure 3: The RPGQuest Game.
The first version of the game, which is the original
implementation of RPGQuest, was created with a completely
authoritative server and a non-authoritative client. Authority
given to the server includes decisions of when a player
collides with static objects and other players and when a player
picks up an object. This version of the game performs well
up to 100ms round-trip latency between the client and the
server. There is little lag between the time a player hits a
wall and the time the server corrects the player's position.
However, as more latency is induced between the client and
server, the game becomes increasingly difficult to play. With
the increased latency, the messages coming from the server
correcting the player when it runs into a wall are not
received fast enough. This causes the player to pass through
the wall for the period that it is waiting for the server to
resolve the collision.
Studying the source code of the original version of
the RPGQuest game reveals a substantial delay that is
unavoidable each time an action must be validated by the
server. Whenever a movement update is sent to the server,
the client must then wait whatever the round trip delay is,
plus some processing time at the server in order to receive
its validated or corrected position. This is obviously
unacceptable in any game where movement or any other rapidly
changing state information must be validated and
disseminated to the other clients rapidly.
In order to get around this problem, we developed a second
version of the game, which gives all authority to the client.
The client was delegated the authority to validate its own
movement and the authority to pick up objects without
validation from the server. In this version of the game when
a player moves around the game space, the client validates
that the player's new position does not intersect with any
walls or static objects. A position update is then sent to the
server which then immediately forwards the update to the
other clients within the region. The update does not have
to go through any extra processing or validation.
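A sketch of this client-side movement check is shown below; treating static objects as axis-aligned boxes is an illustrative simplification of RPGQuest's actual collision geometry.

```python
def client_validate_move(old_pos, new_pos, static_boxes):
    """Client-side movement check: accept the new position only if it does not
    intersect any wall or static object (boxes given as (min, max) corners)."""
    x, y, z = new_pos
    for (xmin, ymin, zmin), (xmax, ymax, zmax) in static_boxes:
        if xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax:
            return old_pos     # reject locally; no server round trip is needed
    return new_pos             # accept, then send the update via the server

# A wall occupying x in [4, 5] blocks a move to (4.5, 0, 0):
# client_validate_move((3, 0, 0), (4.5, 0, 0), [((4, -1, -1), (5, 1, 1))])
```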
This game model of complete authority given to the client
is beneficial with respect to movement. When latencies of
100ms and up are induced into the link between the client
and server, the game is still playable since time critical
aspects of the game like movement do not have to wait on a
reply from the server. When a player hits a wall, the
collision is processed locally and does not have to wait on the
server to resolve the collision.
Although game playing experience with respect to
responsiveness is improved when the authority for movement is
given to the client, there are still aspects of games that do
not benefit from this approach. The most important of these
is consistency. Although actions such as movement are time
critical, other actions are not as time critical, but instead
require consistency among the player states. An example of
a game aspect that requires consistency is picking up objects
that should only be possessed by a single player.
In our client authoritative version of RPGQuest, clients send
their own updates to all other players whenever they pick up
an object. From our tests we have realized this is a problem
because when there is a realistic amount of latency between
the client and server, it is possible for two players to pick
up the same object at the same time. When two players
attempt to pick up an object at physical times which are
close to each other, the update sent by the player who picked
up the object first will not reach the second player in time
for it to see that the object has already been claimed. The
two players will now both think that they own the object.
This is why a server is still needed to be authoritative in this
situation and maintain consistency throughout the players.
These two versions of the RPGQuest game have shown us
why it is necessary to mix the two absolute models of
authority. It is better to place authority on the client for quickly
changing actions such as movement. It is not desirable to
have to wait for server validation on a movement that could
change before the reply is even received. It is also sometimes
necessary to place consistency over efficiency in aspects of
the game that cannot tolerate any inconsistencies such as
object ownership. We believe that as the interactivity of
games increases, our architecture of mixed authority that
does not rely on server validation will be necessary.
To test the benefits and show the feasibility of our
architecture of mixed authority, we developed a third version of
the RPGQuest game that distributed authority for
different actions between the client and server. In this version,
in the interest of consistency, the server remained
authoritative for deciding who picked up an object. The client
was given full authority to send positional updates to other
clients and verify its own position without the need to
verify its updates with the server. When the player tries to
move their avatar, the client verifies that the move will not
cause it to move through a wall. A positional update is then
sent to the server which then simply forwards it to the other
clients within the region. This eliminates any extra
processing delay that would occur at the server and is also a more
accurate means of verification since the client has a more
accurate view of its own state than the server.
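The split of authority in this version can be sketched as a simple dispatch rule at the server: positional updates are forwarded without validation, while pickup requests are validated and serialized so that only one claim can succeed. All names below are illustrative.

```python
def server_dispatch(msg, sender, region_clients, object_owner, send):
    """Hybrid split: positional updates are client-authoritative and simply
    forwarded; object pickup remains server-authoritative so that two clients
    can never own the same object."""
    if msg["type"] == "position":
        for client in region_clients[msg["region"]]:
            if client != sender:
                send(client, msg)                    # forward without validation
    elif msg["type"] == "pickup":
        obj = msg["object_id"]
        if object_owner.get(obj) is None:            # first valid claim wins
            object_owner[obj] = msg["player"]
            for client in region_clients[msg["region"]]:
                send(client, {"type": "pickup_granted",
                              "object_id": obj, "player": msg["player"]})
        else:
            send(sender, {"type": "pickup_denied", "object_id": obj})
```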
This version of the RPGQuest game where authority is
distributed between the client and server is an improvement
from the server authoritative version. The client has no
delay in waiting for an update for its own position and other
clients do not have to wait on the server to verify the update.
The inconsistencies where two clients can pick up the same
object in the client authoritative architecture are not present
in this version of the game. However, the benefits of mixed
authority will not truly be seen until an implementation of
our communication proxy is integrated into the game. With
the addition of the communication proxy, after the client
verifies its own positional updates it will be able to send the
update to all clients within its region through a low latency
link instead of having to first go through the game server
which could possibly be in a very remote location.
The coding of the different versions of the game was very
simple. The complexity of the client increased very slightly
in the client authoritative and hybrid models. The
original dumb clients of RPGQuest know the position of other
players; they are not just sent a screen snapshot from the server.
The server updates each client with the position of all nearby
clients. The dumb clients use client side prediction to fill
in the gaps between the updates they receive. The only
extra processing the client has to do in the hybrid architecture
is to compare its current position to the positions of all
objects (walls, boxes, etc.) in its area. This obviously means
that each client will have to already have downloaded the
locations of all static objects within its current region.
5. RELATED WORK
It has been noted that in addition to latency, bandwidth
requirements also dictate the type of gaming architecture to
be used. In [16], different types of architectures are
studied with respect to bandwidth efficiencies and latency. It is
pointed out that Central Server architectures are not
scalable because of bandwidth requirements at the server, but
the overhead for consistency checks is limited as they are
performed at the server. A Peer-to-Peer architecture, on the
other hand, is scalable but there is a significant overhead
for consistency checks as this is required at every player.
The paper proposes a hybrid architecture which is
Peer-to-Peer in terms of message exchange (and thereby is scalable)
where a Central Server is used for off-line consistency checks
(thereby mitigating consistency check overhead). The paper
provides an implementation example of BZFlag, a
peer-to-peer game that is modified to transfer all
authority to a central server. In essence, this paper advocates an
authority architecture which is server-based even for
peer-to-peer games, but does not consider division of authority
between a client and a server to minimize latency which
could affect game playing experience even with the type of
latency found in server based games (where all authority is
with the server).
There is also previous work that has suggested that proxy
based architectures be used to alleviate the latency
problem and in addition use proxies to provide congestion
control and cheat-proof mechanisms in distributed multi-player
games [17]. In [18], a proxy server-network architecture is
presented that is aimed at improving scalability of
multiplayer games and lowering latency in server-client data
transmission. The main goal of this work is to improve scalability
of First-Person Shooter (FPS) and RPG games. The further
objective is to improve the responsiveness of MMOGs by
providing low latency communications between the client and
server. The architecture uses interconnected proxy servers
that each have a full view of the global game state. Proxy
servers are located at various different ISPs. It is mentioned
in this work that dividing the game space among multiple
game servers, as in the federated model presented in [19],
is inefficient for a relatively fast game flow and that the
proposed architecture alleviates this problem because users
do not have to connect to a different server whenever they
cross the server boundary. This architecture still requires all
proxies to be aware of the overall game state over the whole
game space, unlike our work, where we require the proxies
to maintain only partial state information about the game
space.
Fidelity based agent architectures have been proposed in [20,
21]. These works propose a distributed client-server
architecture for distributed interactive simulations where
different servers are responsible for different portions of the game
space. When an object moves from one portion to another,
there is a handoff from one server to another. Although
these works propose an architecture where different portions
of the simulation space are managed by different servers,
they do not address the issue of decreasing the bandwidth
required through the use of communication proxies.
Our work differs from the above discussed previous works by
proposing a) a distributed proxy-based architecture to
decrease bandwidth requirements at the clients and the servers
without requiring the proxies to keep state information about
the whole game space, b) a dynamic authority assignment
technique to reduce latency (by performing consistency checks
locally at the client whenever possible) by splitting the
authority between the clients and servers on a per object basis,
and c) the idea that cheat detection can be built into the
proxies if they are provided more information about the
specific game instead of using them purely as communication
proxies (although this idea has not been implemented yet
and is part of our future work).
6. CONCLUSIONS AND FUTURE WORK
In this paper, we first proposed a proxy-based
architecture for MMOGs that enables MMOGs to scale to a large
number of users by mitigating the need for a large
number of transport sessions to be maintained and decreasing
both bandwidth overhead and latency of event update.
Second, we proposed a mixed authority assignment mechanism
that divides authority for making decisions on actions and
events within the game between the clients and server and
argued how such an authority assignment leads to a better
game playing experience without sacrificing the consistency
of the game. Third, to validate the viability of the mixed
authority assignment mechanism, we implemented it within
an MMOG called RPGQuest and described our
implementation experience.
In future work, we propose to implement the
communications proxy architecture described in this paper and
integrate the mixed authority mechanism within this
architecture. We propose to evaluate the benefits of the proxy-based
architecture in terms of scalability, accuracy and
responsiveness. We also plan to implement a version of the RPGQuest
game with dynamic assignment of authority to allow players
the authority to pick up objects when no other players are
near. As discussed earlier, this will allow for a more efficient
and responsive game in certain situations and alleviate some
of the processing load from the server.
Also, since so much trust is put into the clients of our
architecture, it will be necessary to integrate into the
architecture many of the cheat detection schemes that have been
proposed in the literature. Software such as Punkbuster [22]
and a reputation system like those proposed by [23] and [15]
would be integral to the operation of an architecture such as
ours which has a lot of trust placed on the client. We further
propose to make the proxies in our architecture more game
cognizant so that cheat detection mechanisms can be built
into the proxies themselves.
7. REFERENCES
[1] Y. W. Bernier. Latency Compensation Methods in
Client/Server In-game Protocol Design and
Optimization. In Proc. of Game Developers
Conference '01, 2001.
[2] Lothar Pantel and Lars C. Wolf. On the impact of
delay on real-time multiplayer games. In NOSSDAV
"02: Proceedings of the 12th international workshop on
Network and operating systems support for digital
audio and video, pages 23-29, New York, NY, USA,
2002. ACM Press.
[3] G. Armitage. Sensitivity of Quake3 Players to Network
Latency. In Proc. of IMW2001, Workshop Poster
Session, November 2001. http://www.geocities.com/
gj armitage/q3/quake-results.html.
[4] Tobias Fritsch, Hartmut Ritter, and Jochen Schiller.
The effect of latency and network limitations on
mmorpgs: a field study of everquest2. In NetGames
"05: Proceedings of 4th ACM SIGCOMM workshop on
Network and system support for games, pages 1-9,
New York, NY, USA, 2005. ACM Press.
[5] Tom Beigbeder, Rory Coughlan, Corey Lusher, John
Plunkett, Emmanuel Agu, and Mark Claypool. The
effects of loss and latency on user performance in
unreal tournament 2003. In NetGames '04:
Proceedings of 3rd ACM SIGCOMM workshop on
Network and system support for games, pages
144-151, New York, NY, USA, 2004. ACM Press.
[6] Y. Lin, K. Guo, and S. Paul. Sync-MS: Synchronized
Messaging Service for Real-Time Multi-Player
Distributed Games. In Proc. of 10th IEEE
International Conference on Network Protocols
(ICNP), Nov 2002.
[7] Katherine Guo, Sarit Mukherjee, Sampath
Rangarajan, and Sanjoy Paul. A fair message
exchange framework for distributed multi-player
games. In NetGames '03: Proceedings of the 2nd
workshop on Network and system support for games,
pages 29-41, New York, NY, USA, 2003. ACM Press.
[8] T. Barron. Multiplayer Game Programming, chapter
16-17, pages 672-731. Prima Tech's Game
Development Series. Prima Publishing, 2001.
[9] Carsten Griwodz and P˚al Halvorsen. The fun of using
tcp for an mmorpg. In NOSSDAV '06: Proceedings of
the International Workshop on Network and Operating
Systems Support for Digital Audio and VIdeo, New
York, NY, USA, 2006. ACM Press.
[10] Sudhir Aggarwal, Hemant Banavar, Amit Khandelwal,
Sarit Mukherjee, and Sampath Rangarajan. Accuracy
in dead-reckoning based distributed multi-player
games. In NetGames '04: Proceedings of 3rd ACM
SIGCOMM workshop on Network and system support
for games, pages 161-165, New York, NY, USA, 2004.
ACM Press.
[11] Sudhir Aggarwal, Hemant Banavar, Sarit Mukherjee,
and Sampath Rangarajan. Fairness in dead-reckoning
based distributed multi-player games. In NetGames
"05: Proceedings of 4th ACM SIGCOMM workshop on
Network and system support for games, pages 1-10,
New York, NY, USA, 2005. ACM Press.
[12] Riker, T. et al. Bzflag. http://www.bzflag.org,
2000-2006.
[13] Linden Lab. Second life. http://secondlife.com,
2003.
[14] Martin Mauve. How to keep a dead man from
shooting. In IDMS '00: Proceedings of the 7th
International Workshop on Interactive Distributed
Multimedia Systems and Telecommunication Services,
pages 199-204, London, UK, 2000. Springer-Verlag.
[15] Max Skibinsky. Massively Multiplayer Game
Development 2, chapter The Quest for Holy Scale -
Part 2: P2P Continuum, pages 355-373. Charles River
Media, 2005.
[16] Joseph D. Pellegrino and Constantinos Dovrolis.
Bandwidth requirement and state consistency in three
multiplayer game architectures. In NetGames '03:
Proceedings of the 2nd workshop on Network and
system support for games, pages 52-59, New York,
NY, USA, 2003. ACM Press.
[17] M. Mauve, J. Widmer, and S. Fischer. A Generic Proxy
System for Networked Computer Games. In Proc. of
the Workshop on Network Games, Netgames 2002,
April 2002.
[18] S. Gorlatch, J. Muller, S. Fischer, and M. Mauve. A
Proxy Server Network Architecture for Real-Time
Computer Games. In Euro-Par 2004 Parallel
Processing: 10th International EURO-PAR
Conference, August-September 2004.
[19] H. Hazeyama, T. Limura, and Y. Kadobayashi. Zoned
Federation of Game Servers: A Peer-to-Peer Approach
to Scalable Multiplayer On-line Games. In Proc. of
ACM Workshop on Network Games, Netgames 2004,
August-September 2004.
[20] B. Kelly and S. Aggarwal. A Framework for a Fidelity
Based Agent Architecture for Distributed Interactive
Simulation. In Proc. 14th Workshop on Standards for
Distributed Interactive Simulation, pages 541-546,
March 1996.
[21] S. Aggarwal and B. Kelly. Hierarchical Structuring for
Distributed Interactive Simulation. In Proc. 13th
Workshop on Standards for Distributed Interactive
Simulation, pages 125-132, Sept 1995.
[22] Even Balance, Inc. Punkbuster.
http://www.evenbalance.com/, 2001-2006.
[23] Y. Wang and J. Vassileva. Trust and Reputation
Model in Peer-to-Peer Networks. In Third
International Conference on Peer-to-Peer Computing,
2003.
The 5th Workshop on Network & System Support for Games 2006 - NETGAMES 2006 9 | cheat-proof mechanism;latency compensation;role playing game;artificial latency;mmog;central-server architecture;assignment of authority;authority;proxy-based game architecture;first person shooter;communication proxy;distribute multi-player game;client authoritative approach;authority assignment;multi-player online game |
train_C-62 | Network Monitors and Contracting Systems: Competition and Innovation | Today"s Internet industry suffers from several well-known pathologies, but none is as destructive in the long term as its resistance to evolution. Rather than introducing new services, ISPs are presently moving towards greater commoditization. It is apparent that the network"s primitive system of contracts does not align incentives properly. In this study, we identify the network"s lack of accountability as a fundamental obstacle to correcting this problem: Employing an economic model, we argue that optimal routes and innovation are impossible unless new monitoring capability is introduced and incorporated with the contracting system. Furthermore, we derive the minimum requirements a monitoring system must meet to support first-best routing and innovation characteristics. Our work does not constitute a new protocol; rather, we provide practical and specific guidance for the design of monitoring systems, as well as a theoretical framework to explore the factors that influence innovation. | 1. INTRODUCTION
Many studies before us have noted the Internet's resistance to new
services and evolution. In recent decades, numerous ideas have
been developed in universities, implemented in code, and even
written into the routers and end systems of the network, only to
languish as network operators fail to turn them on on a large scale.
The list includes Multicast, IPv6, IntServ, and DiffServ. Lacking
the incentives just to activate services, there seems to be little hope
of ISPs devoting adequate resources to developing new ideas. In the
long term, this pathology stands out as a critical obstacle to the
network"s continued success (Ratnasamy, Shenker, and McCanne
provide extensive discussion in [11]).
On a smaller time scale, ISPs shun new services in favor of cost
cutting measures. Thus, the network has characteristics of a
commodity market. Although in theory, ISPs have a plethora of
routing policies at their disposal, the prevailing strategy is to route in
the cheapest way possible [2]. On one hand, this leads directly to
suboptimal routing. More importantly, commoditization in the short
term is surely related to the lack of innovation in the long term.
When the routing decisions of others ignore quality characteristics,
ISPs are motivated only to lower costs. There is simply no reward
for introducing new services or investing in quality improvements.
In response to these pathologies and others, researchers have put
forth various proposals for improving the situation. These can be
divided according to three high-level strategies: The first attempts
to improve the status quo by empowering end-users. Clark, et al.,
suggest that giving end-users control over routing would lead to
greater service diversity, recognizing that some payment mechanism
must also be provided [5]. Ratnasamy, Shenker, and McCanne
postulate a link between network evolution and user-directed
routing [11]. They propose a system of Anycast to give end-users
the ability to tunnel their packets to an ISP that introduces a
desirable protocol. The extra traffic to the ISP, the authors suggest,
will motivate the initial investment.
The second strategy suggests a revision of the contracting system.
This is exemplified by MacKie-Mason and Varian, who propose a
smart market to control access to network resources [10]. Prices
are set to the market-clearing level based on bids that users associate
to their traffic. In another direction, Afergan and Wroclawski
suggest that prices should be explicitly encoded in the routing
protocols [2]. They argue that such a move would improve stability
and align incentives.
The third high-level strategy calls for greater network
accountability. In this vein, Argyraki, et al., propose a system of
packet obituaries to provide feedback as to which ISPs drop packets
[3]. They argue that such feedback would help reveal which ISPs
were adequately meeting their contractual obligations. Unlike the
first two strategies, we are not aware of any previous studies that
have connected accountability with the pathologies of
commoditization or lack of innovation.
It is clear that these three strategies are closely linked to each other
(for example, [2], [5], and [9] each argue that giving end-users
routing control within the current contracting system is
problematic). Until today, however, the relationship between them
has been poorly understood. There is currently little theoretical
foundation to compare the relative merits of each proposal, and a
particular lack of evidence linking accountability with innovation
and service differentiation. This paper will address both issues.
We will begin by introducing an economic network model that
relates accountability, contracts, competition, and innovation. Our
model is highly stylized and may be considered preliminary: it is
based on a single source sending data to a single destination.
Nevertheless, the structure is rich enough to expose previously
unseen features of network behavior. We will use our model for
two main purposes:
First, we will use our model to argue that the lack of accountability
in today"s network is a fundamental obstacle to overcoming the
pathologies of commoditization and lack of innovation. In other
words, unless new monitoring capabilities are introduced, and
integrated with the system of contracts, the network cannot achieve
optimal routing and innovation characteristics. This result provides
motivation for the remainder of the paper, in which we explore how
accountability can be leveraged to overcome these pathologies and
create a sustainable industry. We will approach this problem from a
clean-slate perspective, deriving the level of accountability needed
to sustain an ideal competitive structure.
When we say that today's Internet has poor accountability, we mean
that it reveals little information about the behavior - or misbehavior
- of ISPs. This well-known trait is largely rooted in the network's
history. In describing the design philosophy behind the Internet
protocols, Clark lists accountability as the least important among
seven second level goals. [4] Accordingly, accountability
received little attention during the network's formative years. Clark
relates this to the network's military context, and finds that had the
network been designed for commercial development, accountability
would have been a top priority.
Argyraki, et al., conjecture that applying the principles of layering
and transparency may have led to the network's lack of
accountability [3]. According to these principles, end hosts should
be informed of network problems only to the extent that they are
required to adapt. They notice when packet drops occur so that they
can perform congestion control and retransmit packets. Details of
where and why drops occur are deliberately concealed.
The network's lack of accountability is highly relevant to a
discussion of innovation because it constrains the system of
contracts. This is because contracts depend upon external
institutions to function - the judge in the language of incomplete
contract theory, or simply the legal system. Ultimately, if a judge
cannot verify that some condition holds, she cannot enforce a
contract based on that condition. Of course, the vast majority of
contracts never end up in court. Especially when a judge"s ruling is
easily predicted, the parties will typically comply with the contract
terms on their own volition. This would not be possible, however,
without the judge acting as a last resort.
An institution to support contracts is typically complex, but we
abstract it as follows: We imagine that a contract is an algorithm
that outputs a payment transfer among a set of ISPs (the parties) at
every time. This payment is a function of the past and present
behaviors of the participants, but only those that are verifiable.
Hence, we imagine that a contract only accepts proofs as inputs.
We will call any process that generates these proofs a contractible
monitor. Such a monitor includes metering or sensing devices on
the physical network, but it is a more general concept. Constructing
a proof of a particular behavior may require readings from various
devices distributed among many ISPs. The contractible monitor
includes whatever distributed algorithmic mechanism is used to
motivate ISPs to share this private information.
Figure 1 demonstrates how our model of contracts fits together. We
make the assumption that all payments are mediated by contracts.
This means that without contractible monitors that attest to, say,
latency, payments cannot be conditioned on latency.
Figure 1: Relationship between monitors and contracts
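As a sketch of this abstraction (with illustrative names that are not part of the paper's formal model), a contract can be viewed as a function from verifiable proofs to a payment transfer; without contractible monitors, the function can depend on nothing beyond the agreed price.

```python
def best_effort_contract(price_per_period, proofs):
    """A contract is an algorithm mapping verifiable proofs to a payment
    transfer. With no contractible monitors the proofs dictionary is useless,
    so the transfer can depend only on the agreed price."""
    return price_per_period

def quality_contingent_contract(base_price, penalty, latency_bound, proofs):
    """A richer contract becomes possible only if a contractible monitor can
    supply a proof of, say, rest-of-path latency."""
    latency = proofs.get("rest_of_path_latency_ms")
    if latency is None:
        return base_price                  # nothing verifiable to condition on
    return base_price - (penalty if latency > latency_bound else 0)

# quality_contingent_contract(100, 20, 50, {"rest_of_path_latency_ms": 80}) -> 80
```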
With this model, we may conclude that the level of accountability in
today's Internet only permits best-effort contracts. Nodes cannot
condition payments on either quality or path characteristics.
Is there anything wrong with best-effort contracts? The reader
might wonder why the Internet needs contracts at all. After all, in
non-network industries, traditional firms invest in research and
differentiate their products, all in the hopes of keeping their
customers and securing new ones. One might believe that such
market forces apply to ISPs as well. We may adopt this as our null
hypothesis:
Null hypothesis: Market forces are sufficient to maintain service
diversity and innovation on a network, at least to the same extent
as they do in traditional markets.
There is a popular intuitive argument that supports this hypothesis,
and it may be summarized as follows:
Intuitive argument supporting null hypothesis:
1. Access providers try to increase their quality to get more
consumers.
2. Access providers are themselves customers for second hop
ISPs, and the second hops will therefore try to provide
highquality service in order to secure traffic from access
providers. Access providers try to select high quality transit
because that increases their quality.
3. The process continues through the network, giving every
ISP a competitive reason to increase quality.
We are careful to model our network in continuous time, in order to
capture the essence of this argument. We can, for example, specify
equilibria in which nodes switch to a new next hop in the event of a
quality drop.
Moreover, our model allows us to explore any theoretically possible
punishments against cheaters, including those that are costly for
end-users to administer. By contrast, customers in the real world
rarely respond collectively, and often simply seek the best deal
currently offered. These constraints limit their ability to punish
cheaters.
Even with these liberal assumptions, however, we find that we must
reject our null hypothesis. Our model will demonstrate that
identifying a cheating ISP is difficult under low accountability,
limiting the threat of market driven punishment. We will define an
index of commoditization and show that it increases without bound
as data paths grow long. Furthermore, we will demonstrate a
framework in which an ISP"s maximum research investment
decreases hyperbolically with its distance from the end-user.
To summarize, we argue that the Internet's lack of accountability
must be addressed before the pathologies of commoditization and
lack of innovation can be resolved. This leads us to our next topic:
How can we leverage accountability to overcome these pathologies?
We approach this question from a clean-slate perspective. Instead
of focusing on incremental improvements, we try to imagine how an
ideal industry would behave, then derive the level of accountability
needed to meet that objective. According to this approach, we first
craft a new equilibrium concept appropriate for network
competition. Our concept includes the following requirements:
First, we require that punishing ISPs that cheat is done without
rerouting the path. Rerouting is likely to prompt end-users to switch
providers, punishing access providers who administer punishments
correctly. Next, we require that the equilibrium cannot be
threatened by a coalition of ISPs that exchanges illicit side
payments. Finally, we require that the punishment mechanism that
enforces contracts does not punish innocent nodes that are not in the
coalition.
The last requirement is somewhat unconventional from an economic
perspective, but we maintain that it is crucial for any reasonable
solution. Although ISPs provide complementary services when they
form a data path together, they are likely to be horizontal
competitors as well. If innocent nodes may be punished, an ISP
may decide to deliberately cheat and draw punishment onto itself
and its neighbors. By cheating, the ISP may save resources, thereby
ensuring that the punishment is more damaging to the other ISPs,
which probably compete with the cheater directly for some
customers. In the extreme case, the cheater may force the other
ISPs out of business, thereby gaining a monopoly on some routes.
Applying this equilibrium concept, we derive the monitors needed
to maintain innovation and optimize routes. The solution is
surprisingly simple: contractible monitors must report the quality of
the rest of the path, from each ISP to the destination. It turns out
that this is the correct minimum accountability requirement, as
opposed to either end-to-end monitors or hop-by-hop monitors, as
one might initially suspect.
Rest of path monitors can be implemented in various ways. They
may be purely local algorithms that listen for packet echoes.
Alternately, they can be distributed in nature. We describe a way to
construct a rest of path monitor out of monitors for individual ISP
quality and for the data path. This requires a mechanism to
motivate ISPs to share their monitor outputs with each other. The
rest of path monitor then includes the component monitors and the
distributed algorithmic mechanism that ensures that information is
shared as required. This example shows that other types of monitors
may be useful as building blocks, but must be combined to form rest
of path monitors in order to achieve ideal innovation characteristics.
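The composition can be sketched as follows, under the assumption that the distributed mechanism already guarantees truthful sharing of the per-ISP monitor outputs; the combine argument plays the role of the * operation defined in Section 2.

```python
from functools import reduce

def rest_of_path_quality(path, node_quality, combine):
    """Build rest-of-path monitor outputs from per-ISP quality monitors: the
    output at node i is the *-composition of the qualities of i and of every
    node downstream of it toward the destination."""
    outputs = {}
    for idx, node in enumerate(path):
        downstream = [node_quality[n] for n in path[idx:]]
        outputs[node] = reduce(combine, downstream)
    return outputs

# With a single latency dimension, where * is addition:
# rest_of_path_quality(["A", "B", "C"], {"A": 10, "B": 5, "C": 20},
#                      lambda x, y: x + y)   ->   {"A": 35, "B": 25, "C": 20}
```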
Our study has several practical implications for future protocol
design. We show that new monitors must be implemented and
integrated with the contracting system before the pathologies of
commoditization and lack of innovation can be overcome.
Moreover, we derive exactly what monitors are needed to optimize
routes and support innovation. In addition, our results provide
useful input for clean-slate architectural design, and we use several
novel techniques that we expect will be applicable to a variety of
future research.
The rest of this paper is organized as follows: In section 2, we lay
out our basic network model. In section 3, we present a
lowaccountability network, modeled after today"s Internet. We
demonstrate how poor monitoring causes commoditization and a
lack of innovation. In section 4, we present verifiable monitors, and
show that proofs, even without contracts, can improve the status
quo. In section 5, we turn our attention to contractible monitors.
We show that rest of path monitors can support competition games
with optimal routing and innovation. We further show that rest of
path monitors are required to support such competition games. We
continue by discussing how such monitors may be constructed using
other monitors as building blocks. In section 6, we conclude and
present several directions for future research.
2. BASIC NETWORK MODEL
A source, S, wants to send data to destination, D. S and D are nodes
on a directed, acyclic graph, with a finite set of intermediate nodes,
{ }NV ,...2,1= , representing ISPs. All paths lead to D, and every
node not connected to D has at least two choices for next hop.
We will represent quality by a finite dimensional vector space, Q,
called the quality space. Each dimension represents a distinct
network characteristic that end-users care about. For example,
latency, loss probability, jitter, and IP version can each be assigned
to a dimension.
To each node, i, we associate a vector in the quality space, q_i ∈ Q.
This corresponds to the quality a user would experience if i were the
only ISP on the data path. Let q ∈ Q^N be the vector of all node
qualities.
Of course, when data passes through multiple nodes, their qualities
combine in some way to yield a path quality. We represent this by
an associative binary operation, * : Q × Q → Q. For path
(v_1, v_2, ..., v_n), the quality is given by q_{v_1} * q_{v_2} * ... * q_{v_n}. The *
operation reflects the characteristics of each dimension of quality.
For example, * can act as an addition in the case of latency,
multiplication in the case of loss probability, or a
minimum-argument function in the case of security.
When data flows along a complete path from S to D, the source and
destination, generally regarded as a single player, enjoy utility given
by a function of the path quality, u : Q → R. Each node along the
path, i, experiences some cost of transmission, c_i.
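As a worked illustration of the * operation (with purely hypothetical numbers), take the quality dimensions to be latency, packet delivery probability (so that composition is literal multiplication), and a security level; for a two-hop path:

```latex
% Illustrative two-hop composition; all numbers are hypothetical.
q_1 = (20\,\text{ms},\ 0.99,\ 3), \qquad q_2 = (30\,\text{ms},\ 0.98,\ 5)
\qquad\Longrightarrow\qquad
q_1 * q_2 = \bigl(20 + 30,\ \ 0.99 \times 0.98,\ \ \min(3,5)\bigr)
          = (50\,\text{ms},\ 0.9702,\ 3).
```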
2.1 Game Dynamics
Ultimately, we are most interested in policies that promote
innovation on the network. In this study, we will use innovation in
a fairly general sense. Innovation describes any investment by an
ISP that alters its quality vector so that at least one potential data
path offers higher utility. This includes researching a new routing
algorithm that decreases the amount of jitter users experience. It
also includes deploying a new protocol that supports quality of
service. Even more broadly, buying new equipment to decrease
latency may also be regarded as innovation. Innovation
may be thought of as the micro-level process by which
the network evolves.
Our analysis is limited in one crucial respect: We focus
on inventions that a single ISP can implement to improve
the end-user experience. This excludes technologies that
require adoption by all ISPs on the network to function.
Because such technologies do not create a competitive
advantage, rewarding them is difficult and may require
intellectual property or some other market distortion. We
defer this interesting topic to future work.
At first, it may seem unclear how a large-scale distributed process
such as innovation can be influenced by mechanical details like
network monitors. Our model must draw this connection in a
realistic fashion.
The rate of innovation depends on the profits that potential
innovators expect in the future. The reward generated by an
invention must exceed the total cost to develop it, or the inventor
will not rationally invest. This reward, in turn, is governed by the
competitive environment in which the firm operates, including the
process by which firms select prices, and agree upon contracts with
each other. Of course, these decisions depend on how routes are
established, and how contracts determine actual monetary
exchanges.
Any model of network innovation must therefore relate at least three
distinct processes: innovation, competition, and routing. We select
a game dynamics that makes the relation between these processes as
explicit as possible. This is represented schematically in Figure 2.
The innovation stage occurs first, at time t = -2. In this stage, each
agent decides whether or not to make research investments. If she
chooses not to, her quality remains fixed. If she makes an
investment, her quality may change in some way. It is not
necessary for us to specify how such changes take place. The
agents' choices in this stage determine the vector of qualities, q,
which is common knowledge for the rest of the game.
Next, at time t = −1, agents participate in the competition stage, in which contracts are agreed upon. In today's industry, these
contracts include prices for transit access, and peering agreements.
Since access is provided on a best-effort basis, a transit agreement
can simply be represented by its price. Other contracting systems
we will explore will require more detail.
Finally, beginning at t = 0, firms participate in the routing stage.
Other research has already employed repeated games to study
routing, for example [1], [12]. Repetition reveals interesting effects
not visible in a single stage game, such as informal collusion to
elevate prices in [12]. We use a game in continuous time in order to
study such properties. For example, we will later ask whether a
player will maintain higher quality than her contracts require, in the
hope of keeping her customer base or attracting future customers.
Our dynamics reflect the fact that ISPs make innovation decisions
infrequently. Although real firms have multiple opportunities to
innovate, each opportunity is followed by a substantial length of
time in which qualities are fixed. The decision to invest focuses on
how the firm's new quality will improve the contracts it can enter
into. Hence, our model places innovation at the earliest stage,
attempting to capture a single investment decision. Contracting
decisions are made on an intermediate time scale, thus appearing
next in the dynamics. Routing decisions are made very frequently,
mainly to maximize immediate profit flows, so they appear in the
last stage.
Because of this ordering, our model does not allow firms to route
strategically to affect future innovation or contracting decisions. In
opposition, Afergan and Wroclawski argue that contracts are formed
in response to current traffic patterns, in a feedback loop [2].
Although we are sympathetic to their observation, such an addition
would make our analysis intractable. Our model is most realistic
when contracting decisions are infrequent.
Throughout this paper, our solution concept will be a subgame
perfect equilibrium (SPE). An SPE is a strategy point that is a Nash
equilibrium when restricted to each subgame. Three important
subgames have been labeled in Figure 2. The innovation game
includes all three stages. The competition game includes only the
competition stage and the routing stage. The routing game includes
only the routing stage.
An SPE guarantees that players are forward-looking. This means,
for example, that in the competition stage, firms must act rationally,
maximizing their expected profits in the routing stage. They cannot
carry out threats they made in the innovation stage if it lowers their
expected payoff.
Our schematic already suggests that the routing game is crucial for
promoting innovation. To support innovation, the competition
game must somehow reward ISPs with high quality. But that
means that the routing game must tend to route to nodes with high
quality. If the routing game always selects the lowest-cost routes,
for example, innovation will not be supported. We will support this
observation with analysis later.
2.2 The Routing Game
The routing game proceeds in continuous time, with all players
discounting by a common factor, r. The outputs from previous
stages, q and the set of contracts, are treated as exogenous
parameters for this game. For each time t ≥ 0, each node must
select a next hop to route data to. Data flows across the resultant
path, causing utility flow to S and D, and a flow cost to the nodes on
the path, as described above. Payment flows are also created, based
on the contracts in place.
Relating our game to the familiar repeated prisoners" dilemma,
imagine that we are trying to impose a high-quality, but costly, path.
As we argued loosely above, such paths must be sustainable in order
to support innovation. Each ISP on the path tries to maximize her
own payment, net of costs, so she may not want to cooperate with
our plan. Rather, if she can find a way to save on costs, at the
expense of the high quality we desire, she will be tempted to do so.
[Figure 2: Game Dynamics. The innovation stage at t = −2 determines the qualities (q), the competition stage at t = −1 determines the contracts (prices), and the routing stage over t ∈ [0, ∞) yields profits. The innovation game spans all three stages, the competition game spans the competition and routing stages, and the routing game spans only the routing stage.]
Analogously to the prisoners" dilemma, we will call such a decision
cheating. A little more formally,
Cheating refers to any action that an ISP can take, contrary to
some target strategy point that we are trying to impose, that
enhances her immediate payoff, but compromises the quality of
the data path.
One type of cheating relates to the data path. Each node on the path
has to pay the next node to deliver its traffic. If the next node offers
high quality transit, we may expect that a lower quality node will
offer a lower price. Each node on the path will be tempted to route
to a cheaper next hop, increasing her immediate profits, but
lowering the path quality. We will call this type of action cheating
in route.
Another possibility we can model is that a node finds a way to save
on its internal forwarding costs, at the expense of its own quality.
We will call this cheating internally to distinguish it from cheating
in route. For example, a node might drop packets beyond the rate
required for congestion control, in order to throttle back TCP flows
and thus save on forwarding costs [3]. Alternately, a node
employing quality of service could give high priority packets a
lower class of service, thus saving on resources and perhaps
allowing itself to sell more high priority service.
If either cheating in route or cheating internally is profitable, the
specified path will not be an equilibrium. We assume that cheating
can never be caught instantaneously. Rather, a cheater can always
enjoy the payoff from cheating for some positive time, which we
label t_0. This includes the time for other players to detect and react to the cheating. If the cheater has a contract which includes a customer lock-in period, t_0 also includes the time until customers are allowed to switch to a new ISP. As we will see later, it is socially beneficial to decrease t_0, so such lock-in is detrimental to
welfare.
3. PATHOLOGIES OF A LOW-ACCOUNTABILITY NETWORK
In order to motivate an exploration of monitoring systems, we begin
in this section by considering a network with a poor degree of
accountability, modeled after today"s Internet. We will show how
the lack of monitoring necessarily leads to poor routing and
diminishes the rate of innovation. Thus, the network's lack of
accountability is a fundamental obstacle to resolving these
pathologies.
3.1 Accountability in the Current Internet
First, we reflect on what accountability characteristics the present
Internet has. Argyraki, et al., point out that end hosts are given
minimal information about packet drops [3]. Users know when
drops occur, but not where they occur, nor why. Dropped packets
may represent the innocent signaling of congestion, or, as we
mentioned above, they may be a form of cheating internally. The
problem is similar for other dimensions of quality, or in fact more
acute. Finding an ISP that gives high priority packets a lower class
of service, for example, is further complicated by the lack of even
basic diagnostic tools.
In fact, it is similarly difficult to identify an ISP that cheats in route.
Huston notes that Internet traffic flows do not always correspond to
routing information [8]. An ISP may hand a packet off to a
neighbor regardless of what routes that neighbor has advertised.
Furthermore, blocks of addresses are summarized together for
distant hosts, so a destination may not even be resolvable until
packets are forwarded closer.
One might argue that diagnostic tools like ping and traceroute can
identify cheaters. Unfortunately, Argyraki, et al., explain that these
tools only reveal whether probe packets are echoed, not the fate of
past packets [3]. Thus, for example, they are ineffective in detecting
low-frequency packet drops. Even more fundamentally, a
sophisticated cheater can always spot diagnostic packets and give
them special treatment.
As a further complication, a cheater may assume different aliases
for diagnostic packets arriving over different routes. As we will see
below, this gives the cheater a significant advantage in escaping
punishment for bad behavior, even if the data path is otherwise
observable.
3.2 Modeling Low-Accountability
As the above evidence suggests, the current industry allows for very
little insight into the behavior of the network. In this section, we
attempt to capture this lack of accountability in our model. We
begin by defining a monitor, our model of the way that players
receive external information about network behavior,
A monitor is any distributed algorithmic mechanism that runs on
the network graph, and outputs, to specific nodes, informational
statements about current or past network behavior.
We assume that all external information about network behavior is
mediated in this way. The accountability properties of the Internet
can be represented by the following monitors:
E2E (End to End): A monitor that informs S/D about what the
total path quality is at any time (this is the quality they
experience).
ROP (Rest of Path): A monitor that informs each node along the
data path what the quality is for the rest of the path to the
destination.
PRc (Packets Received): A monitor that tells nodes how much
data they accept from each other, so that they can charge by
volume. It is important to note, however, that this information is
aggregated over many source-destination pairs. Hence, for the
sake of realism, it cannot be used to monitor what the data path is.
Players cannot measure the qualities of other individual nodes, just the
rest of the path. Nodes cannot see the path past the next hop. This
last assumption is stricter than needed for our results. The critical
ingredient is that nodes cannot verify that the path avoids a specific
hop. This holds, for example, if the path is generally visible, except
nodes can use different aliases for different parents. Similar results
also hold if alternate paths always converge after some integer
number, m, of hops.
It is important to stress that E2E and ROP are not the contractible
monitors we described in the introduction - they do not generate
proofs. Thus, even though a player observes certain information,
she generally cannot credibly share it with another player. For
example, if a node after the first hop starts cheating, the first hop
will detect the sudden drop in quality for the rest of the path, but the
first hop cannot make the source believe this observation - the
source will suspect that the first hop was the cheater, and fabricated
the claim against the rest of the path.
Typically, E2E and ROP are envisioned as algorithms that run on a
single node, and listen for packet echoes. This is not the only way
that they could be implemented, however; an alternate strategy is to
aggregate quality measurements from multiple points in the
network. These measurements can originate in other monitors,
located at various ISPs. The monitor then includes the component
monitors as well as whatever mechanisms are in place to motivate
nodes to share information honestly as needed. For example, if the
source has monitors that reveal the qualities of individual nodes,
they could be combined with path information to create an ROP
monitor.
Since we know that contracts only accept proofs as input, we can
infer that payments in this environment can only depend on the
number of packets exchanged between players. In other words,
contracts are best-effort. For the remainder of this section, we will
assume that contracts are also linear - there is a constant payment
flow so long as a node accepts data, and all conditions of the
contract are met. Other, more complicated tariffs are also possible,
and are typically used to generate lock-in. We believe that our
parameter t0 is sufficient to describe lock-in effects, and we believe
that the insights in this section apply equally to any tariffs that are
bounded so that the routing game remains continuous at infinity.
Restricting attention to linear contracts allows us to represent some
node i's contract by its price, p_i.
Because we further know that nodes cannot observe the path after
the next hop, we can infer that contracts exist only between
neighboring nodes on the graph. We will call this arrangement of
contracts bilateral. When a competition game exclusively uses
bilateral contracts, we will call it a bilateral contract competition
game.
We first focus on the routing game and ask whether a high quality
route can be maintained, even when a low quality route is cheaper.
Recall that this is a requirement in order for nodes to have any
incentive to innovate. If nodes tend to route to low price next hops,
regardless of quality, we say that the network is commoditized. To
measure this tendency, we define an index of commoditization as
follows:
For a node on the data path, i, define its quality premium, d_i = p_j − p_min, where p_j is the flow payment to the next hop in equilibrium, and p_min is the price of the lowest cost next hop.
Definition: The index of commoditization, I_C, is the average, over each node on the data path, i, of i's flow profit as a fraction of i's quality premium, (p_i − c_i − p_j)/d_i.
I_C ranges from 0, when each node spends all of its potential profit on its quality premium, to infinity, when a node absorbs positive profit but uses the lowest price next hop. A high value for I_C implies that nodes are spending little of their money inflow on purchasing high quality for the rest of the path. As the next claim shows, this is exactly what happens as the path grows long:
Claim 1. If the only monitors are E2E, ROP, and PRc, then I_C → ∞ as n → ∞, where n is the number of nodes on the data path.
To show that this is true, we first need the following lemma, which
will establish the difficulty of punishing nodes in the network.
First, a bit of notation: recall that a cheater can benefit from its actions for t_0 > 0 before other players can react. When a node cheats, it can expect a higher profit flow, at least until it is caught and other players react, perhaps by diverting traffic. Let node i's normal profit flow be π_i, and her profit flow during cheating be some greater value, y_i. We will call the ratio, y_i/π_i, the temptation to cheat.
Lemma 1. If the only monitors are E2E, ROP, and PRc, the discounted time, ∫_0^{t_n} e^{−rt} dt, needed to punish a cheater increases at least as fast as the product of the temptations to cheat along the data path,

$$\int_0^{t_n} e^{-rt}\,dt \;\ge\; \left(\prod_{i\ \text{on data path}} \frac{y_i}{\pi_i}\right)\int_0^{t_0} e^{-rt}\,dt. \qquad (1)$$
Corollary. If nodes share a minimum temptation to cheat, y/π, the discounted time needed to punish cheating increases at least exponentially in the length of the data path, n,

$$\int_0^{t_n} e^{-rt}\,dt \;\ge\; \left(\frac{y}{\pi}\right)^{n}\int_0^{t_0} e^{-rt}\,dt. \qquad (2)$$
Since it is the discounted time that increases exponentially, the
actual time increases faster than exponentially. If n is so large that
t_n is undefined, the given path cannot be maintained in equilibrium.
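As a numerical illustration (ours, with invented parameters), the sketch below evaluates the requirement in (1): given a discount rate r, a reaction time t_0, and the temptations along the path, it computes the required discounted punishment time and inverts it to a punishment horizon t_n when one exists. Even a modest per-hop temptation makes the horizon grow rapidly and eventually become infeasible.

```python
# Sketch (our illustration): required discounted punishment time from (1)
# and the implied punishment horizon t_n, if one exists.
import math

def discounted_time(r: float, t: float) -> float:
    """Value of the integral of e^(-r*s) ds from 0 to t."""
    return (1.0 - math.exp(-r * t)) / r

def punishment_horizon(r, t0, temptations):
    """Smallest t_n satisfying (1) with equality, or None if infeasible."""
    product = 1.0
    for g in temptations:          # g_i = y_i / pi_i
        product *= g
    required = product * discounted_time(r, t0)
    if required >= 1.0 / r:        # even an unbounded punishment is too short
        return None
    # invert (1 - e^{-r t_n}) / r = required
    return -math.log(1.0 - r * required) / r

if __name__ == "__main__":
    r, t0 = 0.05, 1.0
    for n in (1, 2, 4, 8):
        tn = punishment_horizon(r, t0, [1.5] * n)   # common temptation 1.5
        print(n, "hops ->", "infeasible" if tn is None else f"t_n ~ {tn:.1f}")
```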
Proof. The proof proceeds by induction on the number of nodes on the equilibrium data path, n. For n = 1, there is a single node, say i. By cheating, the node earns extra profit (y_i − π_i) ∫_0^{t_0} e^{−rt} dt. If node i is then punished until time t_1, the extra profit must be cancelled out by the lost profit between time t_0 and t_1, π_i ∫_{t_0}^{t_1} e^{−rt} dt. A little manipulation gives

$$\int_0^{t_1} e^{-rt}\,dt \;=\; \frac{y_i}{\pi_i}\int_0^{t_0} e^{-rt}\,dt,$$

as required.
For n > 1, assume for induction that the claim holds for n − 1. The source does not know whether the cheater is the first hop, or after the first hop. Because the source does not know the data path after the first hop, it is unable to punish nodes beyond it. If it chooses a new first hop, it might not affect the rest of the data path. Because of this, the source must rely on the first hop to punish cheating nodes farther along the path. The first hop needs discounted time,

$$\left(\prod_{i\ \text{after first hop}} \frac{y_i}{\pi_i}\right)\int_0^{t_0} e^{-rt}\,dt,$$

to accomplish this by assumption. So the source must give the first hop this much discounted time in order to punish defectors further down the line (and the source will expect poor quality during this period).
Next, the source must be protected against a first hop that cheats, and pretends that the problem is later in the path. The first hop can do this for the full discounted time,

$$\left(\prod_{i\ \text{after first hop}} \frac{y_i}{\pi_i}\right)\int_0^{t_0} e^{-rt}\,dt,$$

so the source must punish the first hop long enough to remove the extra profit it can make. Following the same argument as for n = 1, we can show that the full discounted time is

$$\left(\prod_{i\ \text{on data path}} \frac{y_i}{\pi_i}\right)\int_0^{t_0} e^{-rt}\,dt,$$

which completes the proof.
The above lemma and its corollary show that punishing cheaters
becomes more and more difficult as the data path grows long, until
doing so is impossible. To capture some intuition behind this result,
imagine that you are an end user, and you notice a sudden drop in
service quality. If your data only travels through your access
provider, you know it is that provider's fault. You can therefore
take your business elsewhere, at least for some time. This threat
should motivate your provider to maintain high quality.
Suppose, on the other hand, that your data traverses two providers.
When you complain to your ISP, he responds, "yes, we know your quality went down, but it's not our fault, it's the next ISP. Give us some time to punish them and then normal quality will resume." If
your access provider is telling the truth, you will want to listen,
since switching access providers may not even route around the
actual offender. Thus, you will have to accept lower quality service
for some longer time. On the other hand, you may want to punish
your access provider as well, in case he is lying. This means you
have to wait longer to resume normal service. As more ISPs are
added to the path, the time increases in a recursive fashion.
With this lemma in hand, we can return to prove Claim 1.
Proof of Claim 1. Fix an equilibrium data path of length n. Label the path nodes 1, 2, ..., n. For each node i, let i's quality premium be d_i = p_{i+1} − p′_{i+1}. Then we have,

$$I_C \;=\; \frac{1}{n}\sum_{i=1}^{n}\frac{p_i - c_i - p_{i+1}}{d_i} \;=\; \frac{1}{n}\sum_{i=1}^{n}\frac{p_i - c_i - p_{i+1}}{(p_i - c_i - p'_{i+1}) - (p_i - c_i - p_{i+1})} \;=\; \frac{1}{n}\sum_{i=1}^{n}\frac{1}{g_i - 1}, \qquad (3)$$
where g_i = (p_i − c_i − p′_{i+1})/(p_i − c_i − p_{i+1}) is node i's temptation to cheat by routing to the lowest price next hop. Lemma 1 tells us that ∏_{i=1}^{n} g_i < T, where T = (1 − e^{−rt_0})^{−1}. It requires a bit of calculus to show that I_C is minimized by setting each g_i equal to T^{1/n}. However, as n → ∞, we have T^{1/n} → 1, which shows that I_C → ∞.
According to the claim, as the data path grows long, it increasingly
resembles a lowest-price path. Since lowest-price routing does not
support innovation, we may speculate that innovation degrades with
the length of the data path. Though we suspect stronger claims are
possible, we can demonstrate one such result by including an extra
assumption:
Available Bargain Path: A competitive market exists for low-cost transit, such that every node can route to the destination for no more than flow payment, p_l.
Claim 2. Under the available bargain path assumption, if node i, a distance n from S, can invest to alter its quality, and the source will spend no more than P_S for a route including node i's new quality, then the payment to node i, p, decreases hyperbolically with n,

$$p \;\le\; p_l + \frac{T^{1/(n-1)}}{n-1}\,P_S, \qquad (4)$$

where T = (1 − e^{−rt_0})^{−1} is the bound on the product of temptations from the previous claim. Thus, i will spend no more than

$$\frac{1}{r}\left(p_l + \frac{T^{1/(n-1)}}{n-1}\,P_S\right)$$

on this quality improvement, which approaches the bargain path's payment, p_l/r, as n → ∞.
The proof is given in the appendix. As a node gets farther from the source, its maximum payment approaches the bargain price, p_l. Hence, the reward for innovation is bounded by the same amount. Large innovations, meaning substantially more expensive than p_l/r, will not be pursued deep into the network.
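The following sketch (our illustration, with assumed values for p_l, P_S, r, and t_0) evaluates the bound (4) to show the maximum payment available to an innovator decaying toward the bargain price as its distance from the source grows.

```python
# Sketch (our numbers): the bound from Claim 2 on the payment available to a
# node n hops from the source, p <= p_l + T^(1/(n-1)) * P_S / (n-1).
import math

def payment_bound(n, p_l, P_S, r, t0):
    T = 1.0 / (1.0 - math.exp(-r * t0))  # assumed bound on the temptation product
    return p_l + T ** (1.0 / (n - 1)) * P_S / (n - 1)

if __name__ == "__main__":
    p_l, P_S, r, t0 = 1.0, 100.0, 0.05, 1.0
    for n in (2, 5, 10, 20, 40):
        print(n, round(payment_bound(n, p_l, P_S, r, t0), 2))
    # the bound is loose for short paths but falls toward p_l as n grows,
    # so innovators deep in the network can capture only a small reward
```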
Claim 2 can alternately be viewed as a lower bound on how much it
costs to elicit innovation in a network. If the source S wants node i
to innovate, it needs to get a motivating payment, p, to i during the
routing stage. However, it must also pay the nodes on the way to i a
premium in order to motivate them to route properly. The claim
shows that this premium increases with the distance to i, until it
dwarfs the original payment, p.
Our claims stand in sharp contrast to our null hypothesis from the
introduction. Comparing the intuitive argument that supported our
hypothesis with these claims, we can see that we implicitly used an
oversimplified model of market pressure (as either present or not).
As is now clear, market pressure relies on the decisions of
customers, but these are limited by the lack of information. Hence,
competitive forces degrade as the network deepens.
4. VERIFIABLE MONITORS
In this section, we begin to introduce more accountability into the
network. Recall that in the previous section, we assumed that
players couldn't convince each other of their private information. What would happen if they could? If a monitor's informational
signal can be credibly conveyed to others, we will call it a verifiable
monitor. The monitor's output in this case can be thought of as a
statement accompanied by a proof, a string that can be processed by
any player to determine that the statement is true.
A verifiable monitor is a distributed algorithmic mechanism that
runs on the network graph, and outputs, to specific nodes, proofs
about current or past network behavior.
Along these lines, we can imagine verifiable counterparts to E2E
and ROP. We will label these E2Ev and ROPv. With these
monitors, each node observes the quality of the rest of the path and
can also convince other players of these observations by giving
them a proof.
By adding verifiability to our monitors, identifying a single cheater
is straightforward. The cheater is the node that cannot produce
proof that the rest of path quality decreased. This means that the
negative results of the previous section no longer hold. For
example, the following lemma stands in contrast to Lemma 1.
Lemma 2. With monitors E2Ev, ROPv, and PRc, and provided that the node before each potential cheater has an alternate next hop that isn't more expensive, it is possible to enforce any data path in SPE so long as the maximum temptation is less than what can be deterred in finite time,

$$\frac{y_{\max}}{\pi} \;\le\; \frac{1}{\,r\int_0^{t_0} e^{-rt}\,dt\,}. \qquad (5)$$
Proof. This lemma follows because nodes can share proofs to
identify who the cheater is. Only that node must be punished in
equilibrium, and the preceding node does not lose any payoff in
administering the punishment.
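For concreteness, a small sketch (ours, with invented numbers) of the enforceability test in (5): under the verifiable monitors, a path survives in SPE when the largest single temptation along it stays below what an arbitrarily long discounted punishment of that one node can deter. Note the contrast with Lemma 1, where the product of temptations mattered.

```python
# Sketch (our numbers): the enforceability condition (5) under verifiable
# monitors -- the max temptation must not exceed 1 / (r * int_0^t0 e^{-rt} dt).
import math

def deterrence_limit(r: float, t0: float) -> float:
    return 1.0 / (1.0 - math.exp(-r * t0))   # equals the bound in (5)

def path_enforceable(temptations, r, t0):
    return max(temptations) <= deterrence_limit(r, t0)

if __name__ == "__main__":
    r, t0 = 0.05, 1.0
    print(round(deterrence_limit(r, t0), 1))          # ~20.5
    print(path_enforceable([1.5, 3.0, 8.0], r, t0))   # True
    print(path_enforceable([1.5, 30.0], r, t0))       # False
```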
With this lemma in mind, it is easy to construct counterexamples to
Claim 1 and Claim 2 in this new environment.
Unfortunately, there are at least four reasons not to be satisfied with
this improved monitoring system. The first, and weakest reason is
that the maximum temptation remains finite, causing some
distortion in routes or payments. Each node along a route must
extract some positive profit unless the next hop is also the cheapest.
Of course, if t0 is small, this effect is minimal.
The second, and more serious reason is that we have always given
our source the ability to commit to any punishment. Real world
users are less likely to act collectively, and may simply search for
the best service currently offered. Since punishment phases are
generally characterized by a drop in quality, real world end-users
may take this opportunity to shop for a new access provider. This
will make nodes less motivated to administer punishments.
The third reason is that Lemma 2 does not apply to cheating by
coalitions. A coalition node may pretend to punish its successor,
but instead enjoy a secret payment from the cheating node.
Alternately, a node may bribe its successor to cheat, if the
punishment phase is profitable, and so forth. The required
discounted time for punishment may increase exponentially in the
number of coalition members, just as in the previous section!
The final reason not to accept this monitoring system is that when a
cheater is punished, the path will often be routed around not just the
offender, but around other nodes as well. Effectively, innocent
nodes will be punished along with the guilty. In our abstract model,
this doesn"t cause trouble since the punishment falls off the
equilibrium path. The effects are not so benign in the real world.
When ISPs lie in sequence along a data path, they contribute
complementary services, and their relationship is vertical. From the
perspective of other source-destination pairs, however, these same
firms are likely to be horizontal competitors. Because of this, a
node might deliberately cheat, in order to trigger punishment for
itself and its neighbors. By cheating, the node will save money to
some extent, so the cheater is likely to emerge from the punishment
phase better off than the innocent nodes. This may give the cheater
a strategic advantage against its competitors. In the extreme, the
cheater may use such a strategy to drive neighbors out of business,
and thereby gain a monopoly on some routes.
5. CONTRACTIBLE MONITORS
At the end of the last section, we identified several drawbacks that
persist in an environment with E2Ev, ROPv, and PRc. In this
section, we will show how all of these drawbacks can be overcome.
To do this, we will require our third and final category of monitor:
A contractible monitor is simply a verifiable monitor that generates
proofs that can serve as input to a contract. Thus, contractible is
jointly a property of the monitor and the institutions that must verify
its proofs. Contractibility requires that a court,
1. Can verify the monitor"s proofs.
2. Can understand what the proofs and contracts represent to
the extent required to police illegal activity.
3. Can enforce payments among contracting parties.
Understanding the agreements between companies has traditionally
been a matter of reading contracts on paper. This may prove to be a
harder task in a future network setting. Contracts may plausibly be
negotiated by machine, be numerous, even per-flow, and be further
complicated by the many dimensions of quality.
When a monitor (together with institutional infrastructure) meets
these criteria, we will label it with a subscript c, for contractible.
The reader may recall that this is how we labeled the packets
received monitor, PRc, which allows ISPs to form contracts with
per-packet payments. Similarly, E2Ec and ROPc are contractible
versions of the monitors we are now familiar with.
At the end of the previous section, we argued for some desirable properties that we'd like our solution to have. Briefly, we would like to enforce optimal data paths with an equilibrium concept that doesn't rely on re-routing for punishment, is coalition proof, and doesn't punish innocent nodes when a coalition cheats. We will call such an equilibrium a fixed-route coalition-proof protect-the-innocent equilibrium.
As the next claim shows, ROPc allows us to create a system of
linear (price, quality) contracts under just such an equilibrium.
Claim 3. With ROPc, for any feasible and consistent assignment of
rest of path qualities to nodes, and any corresponding payment
schedule that yields non-negative payoffs, these qualities can be
maintained with bilateral contracts in a fixed-route coalition-proof
protect-the-innocent equilibrium.
Proof: Fix any data path consistent with the given rest of path
qualities. Select some monetary punishment, P, large enough to
prevent any cheating for time t0 (the discounted total payment from
the source will work). Let each node on the path enter into a
contract with its parent, which fixes an arbitrary payment schedule
so long as the rest of path quality is as prescribed. When the parent
node, which has ROPc, submits a proof that the rest of path quality
is less than expected, the contract awards her an instantaneous
transfer, P, from the downstream node. Such proofs can be
submitted every t_0 for the previous interval.
Suppose now that a coalition, C, decides to cheat. The source
measures a decrease in quality, and according to her contract, is
awarded P from the first hop. This means that there is a net outflow
of P from the ISPs as a whole. Suppose that node i is not in C. In
order for the parent node to claim P from i, it must submit proof that
the quality of the path starting at i is not as prescribed. This means
that there is a cheater after i. Hence, i would also have detected a
change in quality, so i can claim P from the next node on the path.
Thus, innocent nodes are not punished. The sequence of payments
must end by the destination, so the net outflow of P must come from
the nodes in C. This establishes all necessary conditions of the
equilibrium.
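As a sanity check of the payment flow in this construction, the sketch below (ours, with hypothetical node names) traces the instantaneous transfers of P along a toy path for a given set of cheaters: every node whose rest of path contains a cheater pays P to its parent and, if it is itself innocent, recovers P from its own next hop, so only the cheaters bear a net loss.

```python
# Sketch (our illustration) of the penalty chain in the proof of Claim 3:
# a node pays P upstream iff the path starting at that node contains a
# cheater (its parent can prove the quality drop), and collects P from its
# next hop iff a cheater lies strictly after it.
def net_transfers(path, cheaters, P):
    """path: node names from first hop to last; cheaters: set of names."""
    net = {node: 0.0 for node in path}
    net["source"] = 0.0
    parent = "source"
    for idx, node in enumerate(path):
        if any(n in cheaters for n in path[idx:]):
            net[node] -= P       # pays the penalty to its parent
            net[parent] += P     # parent collects P
        parent = node
    return net

if __name__ == "__main__":
    print(net_transfers(["n1", "n2", "n3", "n4"], cheaters={"n3"}, P=10.0))
    # source: +10, n1: 0, n2: 0, n3: -10, n4: 0 -> only the cheater pays
```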
Essentially, ROPc allows for an implementation of (price, quality)
contracts. Building upon this result, we can construct competition
games in which nodes offer various qualities to each other at
specified prices, and can credibly commit to meet these
performance targets, even allowing for coalitions and a desire to
damage other ISPs.
Example 1. Define a Stackelberg price-quality competition game
as follows: Extend the partial order of nodes induced by the graph
to any complete ordering, such that downstream nodes appear
before their parents. In this order, each node selects a contract to
offer to its parents, consisting of a rest of path quality, and a linear
price. In the routing game, each node selects a next hop at every
time, consistent with its advertised rest of path quality. The
Stackelberg price-quality competition game can be implemented in
our model with ROPc monitors, by using the strategy in the proof,
above. It has the following useful property:
Claim 4. The Stackelberg price-quality competition game yields
optimal routes in SPE.
The proof is given in the appendix. This property is favorable from
an innovation perspective, since firms that invest in high quality will
tend to fall on the optimal path, gaining positive payoff. In general,
however, investments may be over or under rewarded. Extra
conditions may be given under which innovation decisions approach
perfect efficiency for large innovations. We omit the full analysis
here.
Example 2. Alternately, we can imagine that players report their
private information to a central authority, which then assigns all
contracts. For example, contracts could be computed to implement
the cost-minimizing VCG mechanism proposed by Feigenbaum, et
al. in [7]. With ROPc monitors, we can adapt this mechanism to
maximize welfare. For node, i, on the optimal path, L, the net
payment must equal, essentially, its contribution to the welfare of S,
D, and the other nodes. If L" is an optimal path in the graph with i
removed, the profit flow to i is,
( ) ( )
∈≠∈
+−−
',
'
Lj
j
ijLj
jLL ccququ , (6)
where Lq and 'Lq are the qualities of the two paths. Here, (price,
quality) contracts ensure that nodes report their qualities honestly.
The incentive structure of the VCG mechanism is what motivates
nodes to report their costs accurately.
A nice feature of this game is that individual innovation decisions
are efficient, meaning that a node will invest in an innovation
whenever the investment cost is less than the increased welfare of
the optimal data path. Unfortunately, the source may end up paying
more than the utility of the path.
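A toy calculation of this payment rule (ours; the graph, utility function, qualities, and costs are invented, and path quality is reduced to additive latency for brevity) evaluates (6) for each node by comparing the optimal path with the optimal path that avoids that node.

```python
# Sketch (our toy numbers) of the VCG-style profit flow in (6). Node
# qualities are scalar latencies that add along a path, u() converts path
# latency into source/destination utility, and candidate S->D paths are
# listed explicitly for brevity.
def u(latency):
    return max(0.0, 100.0 - latency)

nodes = {  # name: (latency contribution, cost flow)
    "A": (10.0, 2.0), "B": (15.0, 1.0), "C": (40.0, 0.5), "E": (12.0, 3.0),
}
paths = [("A", "B"), ("A", "E"), ("C",)]

def path_latency(path):
    return sum(nodes[n][0] for n in path)

def path_cost(path):
    return sum(nodes[n][1] for n in path)

def welfare(path):
    return u(path_latency(path)) - path_cost(path)

def profit_flow(i):
    """Equation (6): i's contribution to the welfare of S, D, and the others."""
    L = max(paths, key=welfare)                      # welfare-optimal path
    if i not in L:
        return 0.0
    L_alt = max((p for p in paths if i not in p), key=welfare)
    return (u(path_latency(L)) - u(path_latency(L_alt))
            - sum(nodes[j][1] for j in L if j != i)
            + path_cost(L_alt))

if __name__ == "__main__":
    for i in ("A", "B", "E"):
        print(i, profit_flow(i))   # A: 15.5, B: 0.0, E: 4.0
```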
Notice that with just E2Ec, a weaker version of Claim 3 holds.
Bilateral (price, quality) contracts can be maintained in an
equilibrium that is fixed-route and coalition-proof, but not protect-the-innocent. This is done by writing contracts to punish everyone on the path when the end-to-end quality drops. If the path length is n, the first hop pays nP to the source, the second hop pays (n − 1)P to the first, and so forth. This ensures that every node is punished
sufficiently to make cheating unprofitable. For the reasons we gave
previously, we believe that this solution concept is less than ideal,
since it allows for malicious nodes to deliberately trigger
punishments for potential competitors.
Up to this point, we have adopted fixed-route coalition-proof
protect-the-innocent equilibrium as our desired solution concept,
and shown that ROPc monitors are sufficient to create some
competition games that are desirable in terms of service diversity
and innovation. As the next claim will show, rest of path
monitoring is also necessary to construct such games under our
solution concept.
Before we proceed, what does it mean for a game to be desirable
from the perspective of service diversity and innovation? We will
use a very weak assumption, essentially, that the game is not fully
commoditized for any node. The claim will hold for this entire class
of games.
Definition: A competition game is nowhere-commoditized if for
each node, i, not adjacent to D, there is some assignment of qualities
and marginal costs to nodes, such that the optimal data path includes
i, and i has a positive temptation to cheat.
In the case of linear contracts, it is sufficient to require that I_C < ∞, and that every node makes positive profit under some assignment of qualities and marginal costs.
Strictly speaking, ROPc monitors are not the only way to construct
these desirable games. To prove the next claim, we must broaden
our notion of rest of path monitoring to include the similar ROPc′ monitor, which attests to the quality starting at its own node, through the end of the path. Compare the two monitors below:
ROPc: gives a node proof that the path quality from the next node to the destination is not correct.
ROPc′: gives a node proof that the path quality from that node to the destination is correct.
We present a simplified version of this claim, by including an
assumption that only one node on the path can cheat at a time
(though conspirators can still exchange side payments). We will
discuss the full version after the proof.
Claim 5. Assume a set of monitors, and a nowhere-commoditized
bilateral contract competition game that always maintains the
optimal quality in fixed-route coalition-proof protect-the-innocent
equilibrium, with only one node allowed to cheat at a time. Then
for each node, i, not adjacent to D, either i has an ROPc monitor, or i's children each have an ROPc′ monitor.
Proof: First, because of the fixed-route assumption, punishments
must be purely monetary.
Next, when cheating occurs, if the payment does not go to the
source or destination, it may go to another coalition member,
rendering it ineffective. Thus, the source must accept some
monetary compensation, net of its normal flow payment, when
cheating occurs. Since the source only contracts with the first hop,
it must accept this money from the first hop. The source"s contract
must therefore distinguish when the path quality is normal from
when it is lowered by cheating. To do so, it can either accept proofs
from the source, that the quality is lower than required, or it can
accept proofs from the first hop, that the quality is correct. These
nodes will not rationally offer the opposing type of proof.
By definition, any monitor that gives the source proof that the path
quality is wrong is an ROPc monitor. Any monitor that gives the
first hop proof that the quality is correct is a ROPc′ monitor. Thus,
at least one of these monitors must exist.
By the protect-the-innocent assumption, if cheating occurs, but the
first hop is not a cheater, she must be able to claim the same size
reward from the next ISP on the path, and thus pass on the
punishment. The first hop's contract with the second must then distinguish when cheating occurs after the first hop. By an argument similar to that for the source, either the first hop has a ROPc monitor, or the second has a ROPc′ monitor. This argument can be
iterated along the entire path to the penultimate node before D.
Since the marginal costs and qualities can be arranged to make any
path the optimal path, these statements must hold for all nodes and
their children, which completes the proof.
The two possibilities for monitor correspond to which node has the
burden of proof. In one case, the prior node must prove the
suboptimal quality to claim its reward. In the other, the subsequent
node must prove that the quality was correct to avoid penalty.
Because the two monitors are similar, it seems likely that they
require comparable costs to implement. If submitting the proofs is
costly, it seems natural that nodes would prefer to use the ROPc
monitor, placing the burden of proof on the upstream node.
Finally, we note that it is straightforward to derive the full version of
the claim, which allows for multiple cheaters. The only
complication is that cheaters can exchange side payments, which
makes any money transfers between them redundant. Because of
this, we have to further generalize our rest of path monitors, so they
are less constrained in the case that there are cheaters on either side.
5.1 Implementing Monitors
Claim 5 should not be interpreted as a statement that each node must
compute the rest of path quality locally, without input from other
nodes. Other monitors, besides ROPc and ROPc′, can still be used,
loosely speaking, as building blocks. For instance, network
tomography is concerned with measuring properties of the network
interior with tools located at the edge. Using such techniques, our
source might learn both individual node qualities and the data path.
This is represented by the following two monitors:
SHOPc^i: (source-based hop quality) A monitor that gives the source proof of what the quality of node i is.
SPATHc: (source-based path) A monitor that gives the source
proof of what the data path is at any time, at least as far as it
matches the equilibrium path.
With these monitors, a punishment mechanism can be designed to
fulfill the conditions of Claim 5. It involves the source sharing the
proofs it generates with nodes further down the path, which use
them to determine bilateral payments. Ultimately however, the
proof of Claim 5 shows us that each node i's bilateral contracts
require proof of the rest of path quality. This means that node i (or
possibly its children) will have to combine the proofs that they
receive to generate a proof of the rest of path quality. Thus, the
combined process is itself a rest of path monitor.
What we have done, all in all, is construct a rest of path monitor using SPATHc and SHOPc^i as building blocks. Our new monitor
includes both the component monitors and whatever distributed
algorithmic mechanism exists to make sure nodes share their proofs
correctly.
This mechanism can potentially involve external institutions. For a
concrete example, suppose that when node i suspects it is getting
poor rest of path quality from its successor, it takes the downstream
node to court. During the discovery process, the court subpoenas
proofs of the path and of node qualities from the source (ultimately,
there must be some threat to ensure the source complies). Finally,
for the court to issue a judgment, one party or the other must
compile a proof of what the rest of path quality was. Hence, the
entire discovery process acts as a rest of path monitor, albeit a rather
costly monitor in this case.
Of course, mechanisms can be designed to combine these monitors
at much lower cost. Typically, such mechanisms would call for
automatic sharing of proofs, with court intervention only as a last
resort. We defer these interesting mechanisms to future work.
As an aside, intuition might dictate that SHOPc^i generates more information than ROPc; after all, inferring individual node qualities seems a much harder problem. Yet, without path information, SHOPc^i is not sufficient for our first-best innovation result. The
proof of this demonstrates a useful technique:
Claim 6. With monitors E2E, ROP, SHOPc^i, and PRc, and a nowhere-commoditized bilateral contract competition game, the optimal quality cannot be maintained for all assignments of quality and marginal cost, in fixed-route coalition-proof protect-the-innocent equilibrium.
Proof: Because nodes cannot verify the data path, they cannot form
a proof of what the rest of path quality is. Hence, ROPc monitors do
not exist, and therefore the requirements of Claim 5 cannot hold.
6. CONCLUSIONS AND FUTURE WORK
It is our hope that this study will have a positive impact in at least
three different ways. The first is practical: we believe our analysis
has implications for the design of future monitoring protocols and
for public policy.
For protocol designers, we first provide fresh motivation to create
monitoring systems. We have argued that the poor accountability of
the Internet is a fundamental obstacle to alleviating the pathologies
of commoditization and lack of innovation. Unless accountability
improves, these pathologies are guaranteed to remain.
Secondly, we suggest directions for future advances in monitoring.
We have shown that adding verifiability to monitors allows for
some improvements in the characteristics of competition. At the
same time, this does not present a fully satisfying solution. This
paper has suggested a novel standard for monitors to aspire to - one
of supporting optimal routes in innovative competition games under
fixed-route coalition-proof protect-the-innocent equilibrium. We
have shown that under bilateral contracts, this specifically requires
contractible rest of path monitors.
This is not to say that other types of monitors are unimportant. We
included an example in which individual hop quality monitors and a
path monitor can also meet our standard for sustaining competition.
However, in order for this to happen, a mechanism must be included
to combine proofs from these monitors to form a proof of rest of
path quality. In other words, the monitors must ultimately be
combined to form contractible rest of path monitors. To support
service differentiation and innovation, it may be easier to design rest
of path monitors directly, thereby avoiding the task of designing
mechanisms for combining component monitors.
As far as policy implications, our analysis points to the need for
legal institutions to enforce contracts based on quality. These
institutions must be equipped to verify proofs of quality, and police
illegal contracting behavior. As quality-based contracts become
numerous and complicated, and possibly negotiated by machine,
this may become a challenging task, and new standards and
regulations may have to emerge in response. This remains an
interesting and unexplored area for research.
The second area we hope our study will benefit is that of clean-slate
architectural design. Traditionally, clean-slate design tends to focus
on creating effective and elegant networks for a static set of
requirements. Thus, the approach is often one of engineering,
which tends to neglect competitive effects. We agree with
Ratnasamy, Shenker, and McCanne, that designing for evolution
should be a top priority [11]. We have demonstrated that the
network"s monitoring ability is critical to supporting innovation, as
are the institutions that support contracting. These elements should
feature prominently in new designs. Our analysis specifically
suggests that architectures based on bilateral contracts should
include contractible rest of path monitoring. From a clean-slate
perspective, these monitors can be transparently and fully integrated
with the routing and contracting systems.
Finally, the last contribution our study makes is methodological.
We believe that the mathematical formalization we present is
applicable to a variety of future research questions. While a
significant literature addresses innovation in the presence of
network effects, to the best of our knowledge, ours is the first model
of innovation in a network industry that successfully incorporates
the actual topological structure as input. This allows the discovery
of new properties, such as the weakening of market forces with the number of ISPs on a data path that we observe with low accountability.
Our method also stands in contrast to the typical approach of
distributed algorithmic mechanism design. Because this field is
based on a principal-agent framework, contracts are usually
proposed by the source, who is allowed to make a take it or leave it
offer to network nodes. Our technique allows contracts to emerge
from a competitive framework, so the source is limited to selecting
the most desirable contract. We believe this is a closer reflection of
the industry.
Based on the insights in this study, the possible directions for future
research are numerous and exciting. To some degree, contracting
based on quality opens a Pandora's box of pressing questions: Do
quality-based contracts stand counter to the principle of network
neutrality? Should ISPs be allowed to offer a choice of contracts at
different quality levels? What anti-competitive behaviors are
enabled by quality-based contracts? Can a contracting system
support optimal multicast trees?
In this study, we have focused on bilateral contracts. This system
has seemed natural, especially since it is the prevalent system on the
current network. Perhaps its most important benefit is that each
contract is local in nature, so both parties share a common, familiar
legal jurisdiction. There is no need to worry about who will enforce
a punishment against another ISP on the opposite side of the planet,
nor is there a dispute over whose legal rules to apply in interpreting
a contract.
Although this benefit is compelling, it is worth considering other
systems. The clearest alternative is to form a contract between the
source and every node on the path. We may call these source
contracts. Source contracting may present surprising advantages.
For instance, since ISPs do not exchange money with each other, an
ISP cannot save money by selecting a cheaper next hop.
Additionally, if the source only has contracts with nodes on the
intended path, other nodes won't even be willing to accept packets from this source since they won't receive compensation for carrying
them. This combination seems to eliminate all temptation for a
single cheater to cheat in route. Because of this and other
encouraging features, we believe source contracts are a fertile topic
for further study.
Another important research task is to relax our assumption that
quality can be measured fully and precisely. One possibility is to
assume that monitoring is only probabilistic or suffers from noise.
Even more relevant is the possibility that quality monitors are
fundamentally incomplete. A quality monitor can never anticipate
every dimension of quality that future applications will care about,
nor can it anticipate a new and valuable protocol that an ISP
introduces. We may define a monitor space as a subspace of the quality space that a monitor can measure, M ⊂ Q, and a corresponding monitoring function that simply projects the full range of qualities onto the monitor space, m : Q → M.
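A minimal sketch (ours) of the projection idea: the monitor space keeps only the quality dimensions a deployed monitor can measure, so an innovation that changes only unmeasured dimensions is invisible under m and cannot be rewarded by contracts written on monitored quantities.

```python
# Sketch (our illustration): a monitor space M as a subset of quality
# dimensions, and the projection m: Q -> M that discards the rest.
MONITORED = ("latency_ms", "loss_prob")   # dimensions the monitor can measure

def m(quality: dict) -> dict:
    """Project a full quality vector onto the monitor space."""
    return {dim: quality[dim] for dim in MONITORED if dim in quality}

if __name__ == "__main__":
    before = {"latency_ms": 40.0, "loss_prob": 0.01, "new_protocol": 0}
    after  = {"latency_ms": 40.0, "loss_prob": 0.01, "new_protocol": 1}
    # an innovation in an unmonitored dimension leaves m(q) unchanged
    print(m(before) == m(after))   # True
```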
Clearly, innovations that leave quality invariant under m are not
easy to support - they are invisible to the monitoring system. In this
environment, we expect that path monitoring becomes more
important, since it is the only way to ensure data reaches certain
innovator ISPs. Further research is needed to understand this
process.
7. ACKNOWLEDGEMENTS
We would like to thank the anonymous reviewers, Jens Grossklags,
Moshe Babaioff, Scott Shenker, Sylvia Ratnasamy, and Hal Varian
for their comments. This work is supported in part by the National
Science Foundation under ITR award ANI-0331659.
8. REFERENCES
[1] Afergan, M. Using Repeated Games to Design Incentive-Based Routing Systems. In Proceedings of IEEE INFOCOM (April 2006).
(April 2006).
[2] Afergan, M. and Wroclawski, J. On the Benefits and
Feasibility of Incentive Based Routing Infrastructure. In ACM
SIGCOMM'04 Workshop on Practice and Theory of Incentives
in Networked Systems (PINS) (August 2004).
[3] Argyraki, K., Maniatis, P., Cheriton, D., and Shenker, S.
Providing Packet Obituaries. In Third Workshop on Hot Topics
in Networks (HotNets) (November 2004).
[4] Clark, D. D. The Design Philosophy of the DARPA Internet
Protocols. In Proceedings of ACM SIGCOMM (1988).
[5] Clark, D. D., Wroclawski, J., Sollins, K. R., and Braden, R.
Tussle in cyberspace: Defining tomorrow's internet. In
Proceedings of ACM SIGCOMM (August 2002).
[6] Dang-Nguyen, G. and Pénard, T. Interconnection Agreements:
Strategic Behaviour and Property Rights. In Brousseau, E. and
Glachant, J.M. Eds. The Economics of Contracts: Theories and
Applications, Cambridge University Press, 2002.
[7] Feigenbaum, J., Papadimitriou, C., Sami, R., and Shenker, S.
A BGP-based Mechanism for Lowest-Cost Routing.
Distributed Computing 18 (2005), pp. 61-72.
[8] Huston, G. Interconnection, Peering, and Settlements. Telstra,
Australia.
[9] Liu, Y., Zhang, H., Gong, W., and Towsley, D. On the
Interaction Between Overlay Routing and Traffic Engineering.
In Proceedings of IEEE INFOCOM (2005).
[10] MacKie-Mason, J. and Varian, H. Pricing the Internet. In
Kahin, B. and Keller, J. Eds. Public access to the Internet.
Englewood Cliffs, NJ; Prentice-Hall, 1995.
[11] Ratnasamy, S., Shenker, S., and McCanne, S. Towards an
Evolvable Internet Architecture. In Proceeding of ACM
SIGCOMM (2005).
[12] Shakkottai, S., and Srikant, R. Economics of Network Pricing
with Multiple ISPs. In Proceedings of IEEE INFOCOM
(2005).
9. APPENDIX
Proof of Claim 2. Node i must fall on the equilibrium data path to receive any payment. Let the prices along the data path be P_S = p_1, p_2, ..., p_n = p, with marginal costs c_1, ..., c_n. We may assume the prices on the path are greater than p_l, or the claim follows trivially. Each node along the data path can cheat in route by giving data to the bargain path at price no more than p_l. So node j's temptation to cheat satisfies

$$\frac{p_j - c_j - p_l}{p_j - c_j - p_{j+1}} \;\ge\; \frac{p_j - p_l}{p_j - p_{j+1}}.$$

Then Lemma 1 gives,

$$T \;\ge\; \frac{p_1 - p_l}{p_1 - p_2}\cdot\frac{p_2 - p_l}{p_2 - p_3}\cdots\frac{p_{n-1} - p_l}{p_{n-1} - p_n} \;>\; \left(\frac{(n-1)(p_n - p_l)}{P_S}\right)^{n-1}. \qquad (7)$$

This can be rearranged to give

$$p \;\le\; p_l + \frac{T^{1/(n-1)}}{n-1}\,P_S,$$

as required.
The rest of the claim simply recognizes that p/r is the greatest reward node i can receive for its investment, so it will not invest sums greater than this.
Proof of Claim 4. Label the nodes 1, 2, ..., N in the order in which they select contracts. Let subgame n be the game that begins with n choosing its contract. Let L_n be the set of possible paths restricted to nodes n, ..., N. That is, L_n is the set of possible routes from S to reach some node that has already moved.
For subgame n, define the local welfare over paths l ∈ L_n, and their possible next hops, j < n, as follows,

$$V(l, j) \;=\; u(q_l * q_{path\,j}) - \sum_{i \in l} c_i - p_j, \qquad (8)$$

where q_l is the quality of path l in the set {n, ..., N}, and q_{path j} and p_j are the quality and price of the contract j has offered.
For induction, assume that subgame n + 1 maximizes local welfare. We show that subgame n does as well. If node n selects next hop k, we can write the following relation,

$$V(l, n) \;=\; V((l, n), k) - (p_n - c_n - p_k) \;=\; V((l, n), k) - \pi_n, \qquad (9)$$

where π_n is node n's profit if the path to n is chosen. This path is chosen whenever V(l, n) is maximal over L_{n+1} and possible next hops. If V((l, n), k) is maximal over L_n, it is also maximal over the paths in L_{n+1} that don't lead to n. This means that node n can choose some π_n small enough so that V(l, n) is maximal over L_{n+1}, so the route will lead to k.
Conversely, if V((l, n), k) is not maximal over L_n, either V is greater for another of n's next hops, in which case n will select that one in order to increase π_n, or V is greater for some path in L_{n+1} that doesn't lead to n, in which case V(l, n) cannot be maximal for any nonnegative π_n.
Thus, we conclude that subgame n maximizes local welfare. For the initial case, observe that this assumption holds for the source. Finally, we deduce that subgame 1, which is the entire game, maximizes local welfare, which is equivalent to actual welfare. Hence, the Stackelberg price-quality game yields an optimal route.
| monitor;clean-slate architectural design;contracting system;verifiable monitor;innovation;contract;routing policy;network monitor;routing stagequality;contractible monitor;commoditization;smart market;incentive |
train_C-65 | Shooter Localization and Weapon Classification with Soldier-Wearable Networked Sensors | The paper presents a wireless sensor network-based mobile countersniper system. A sensor node consists of a helmet-mounted microphone array, a COTS MICAz mote for internode communication and a custom sensorboard that implements the acoustic detection and Time of Arrival (ToA) estimation algorithms on an FPGA. A 3-axis compass provides self orientation and Bluetooth is used for communication with the soldier's PDA running the data fusion and the user interface. The heterogeneous sensor fusion algorithm can work with data from a single sensor or it can fuse ToA or Angle of Arrival (AoA) observations of muzzle blasts and ballistic shockwaves from multiple sensors. The system estimates the trajectory, the range, the caliber and the weapon type. The paper presents the system design and the results from an independent evaluation at the US Army Aberdeen Test Center. The system performance is characterized by 1-degree trajectory precision and over 95% caliber estimation accuracy for all shots, and close to 100% weapon estimation accuracy for 4 out of 6 guns tested. | 1. INTRODUCTION
The importance of countersniper systems is underscored
by the constant stream of news reports coming from the
Middle East. In October 2006 CNN reported on a new
tactic employed by insurgents. A mobile sniper team moves
around busy city streets in a car, positions itself at a good
standoff distance from dismounted US military personnel,
takes a single well-aimed shot and immediately melts in the
city traffic. By the time the soldiers can react, they are
gone. A countersniper system that provides almost
immediate shooter location to every soldier in the vicinity would
provide clear benefits to the warfighters.
Our team introduced PinPtr, the first sensor
network-based countersniper system [17, 8] in 2003. The system is
based on potentially hundreds of inexpensive sensor nodes
deployed in the area of interest forming an ad-hoc multihop
network. The acoustic sensors measure the Time of Arrival
(ToA) of muzzle blasts and ballistic shockwaves, pressure
waves induced by the supersonic projectile, send the data to
a base station where a sensor fusion algorithm determines
the origin of the shot. PinPtr is characterized by high
precision: 1m average 3D accuracy for shots originating within
or near the sensor network and 1 degree bearing precision
for both azimuth and elevation and 10% accuracy in range
estimation for longer range shots. The truly unique
characteristic of the system is that it works in such reverberant
environments as cluttered urban terrain and that it can
resolve multiple simultaneous shots at the same time. This
capability is due to the widely distributed sensing and the
unique sensor fusion approach [8]. The system has been
tested several times in US Army MOUT (Military
Operations in Urban Terrain) facilities.
The obvious disadvantage of such a system is its static
nature. Once the sensors are distributed, they cover a
certain area. Depending on the operation, the deployment may
be needed for an hour or a month, but eventually the area
loses its importance. It is not practical to gather and reuse
the sensors, especially under combat conditions. Even if the
sensors are cheap, it is still a waste and a logistical problem
to provide a continuous stream of sensors as the operations
move from place to place. As it is primarily the soldiers that
the system protects, a natural extension is to mount the
sensors on the soldiers themselves. While there are
vehicle-mounted countersniper systems [1] available commercially,
we are not aware of a deployed system that protects
dismounted soldiers. A helmet-mounted system was developed
in the mid 90s by BBN [3], but it was not continued beyond
the Darpa program that funded it.
To move from a static sensor network-based solution to a
highly mobile one presents significant challenges. The sensor
positions and orientation need to be constantly monitored.
As soldiers may work in groups of as little as four people,
the number of sensors measuring the acoustic phenomena
may be an order of magnitude smaller than before.
Moreover, the system should be useful to even a single soldier.
Finally, additional requirements called for caliber estimation
and weapon classification in addition to source localization.
The paper presents the design and evaluation of our soldier-wearable mobile countersniper system. It describes the
hardware and software architecture including the custom sensor
board equipped with a small microphone array and
connected to a COTS MICAz mote [12]. Special emphasis is
paid to the sensor fusion technique that estimates the
trajectory, range, caliber and weapon type simultaneously. The
results and analysis of an independent evaluation of the
system at the US Army Aberdeen Test Center are also
presented.
2. APPROACH
The firing of a typical military rifle, such as the AK47
or M16, produces two distinct acoustic phenomena. The
muzzle blast is generated at the muzzle of the gun and
travels at the speed of sound. The supersonic projectile generates
an acoustic shockwave, a kind of sonic boom. The
wavefront has a conical shape, the angle of which depends on the
Mach number, the speed of the bullet relative to the speed
of sound.
The shockwave has a characteristic shape resembling a
capital N. The rise time at both the start and end of the
signal is very fast, under 1 μsec. The length is determined by
the caliber and the miss distance, the distance between the
trajectory and the sensor. It is typically a few hundred μsec.
Once a trajectory estimate is available, the shockwave length
can be used for caliber estimation.
Our system is based on four microphones connected to
a sensorboard. The board detects shockwaves and muzzle
blasts and measures their ToA. If at least three acoustic
channels detect the same event, its AoA is also computed.
If both the shockwave and muzzle blast AoA are available,
a simple analytical solution gives the shooter location as
shown in Section 6. As the microphones are close to each
other, typically 2-4, we cannot expect very high precision.
Also, this method does not estimate a trajectory. In fact, an
infinite number of trajectory-bullet speed pairs satisfy the
observations. However, the sensorboards are also connected
to COTS MICAz motes and they share their AoA and ToA
measurements, as well as their own location and orientation,
with each other using a multihop routing service [9]. A
hybrid sensor fusion algorithm then estimates the trajectory,
the range, the caliber and the weapon type based on all
available observations.
The sensorboard is also Bluetooth capable for
communication with the soldier's PDA or laptop computer. A wired
USB connection is also available. The sensorfusion
algorithm and the user interface get their data through one of
these channels.
Figure 1: Acoustic sensorboard/mote assembly.
The orientation of the microphone array at the time of detection is provided by a 3-axis digital compass. Currently the system assumes that the soldier's PDA is GPS-capable and does not itself provide a self-localization service. However, the accuracy of GPS is only a few meters, which degrades the overall accuracy of the system. Refer to Section 7 for an
analysis. The latest generation sensorboard features a Texas
Instruments CC-1000 radio enabling the high-precision radio
interferometric self localization approach we have developed
separately [7]. However, we leave the integration of the two
technologies for future work.
3. HARDWARE
Since the first static version of our system in 2003, the
sensor nodes have been built upon the UC Berkeley/Crossbow
MICA product line [11]. Although rudimentary acoustic
signal processing can be done on these microcontroller-based
boards, they do not provide the required computational
performance for shockwave detection and angle of arrival
measurements, where multiple signals from different
microphones need to be processed in parallel at a high sampling
rate. Our 3rd generation sensorboard is designed to be used
with MICAz motes; in fact, it has almost the same size as the mote itself (see Figure 1).
The board utilizes a powerful Xilinx XC3S1000 FPGA
chip with various standard peripheral IP cores, multiple soft
processor cores and custom logic for the acoustic detectors
(Figure 2). The onboard Flash (4MB) and PSRAM (8MB)
modules allow storing raw samples of several acoustic events,
which can be used to build libraries of various acoustic
signatures and for refining the detection cores off-line. Also, the
external memory blocks can store program code and data
used by the soft processor cores on the FPGA.
The board supports four independent analog channels
sampled at up to 1 MS/s (million samples per second). These
channels, featuring an electret microphone (Panasonic
WM64PNT), amplifiers with controllable gain (30-60 dB) and
a 12-bit serial ADC (Analog Devices AD7476), reside on
separate tiny boards which are connected to the main
sensorboard with ribbon cables. This partitioning enables the
use of truly different audio channels (e.g., slower sampling
frequency, different gain or dynamic range) and also results
in less noisy measurements by avoiding long analog signal
paths.
The sensor platform offers a rich set of interfaces and can
be integrated with existing systems in diverse ways. An
RS232 port and a Bluetooth (BlueGiga WT12) wireless link
with virtual UART emulation are directly available on the
board and provide simple means to connect the sensor to
PCs and PDAs. The mote interface consists of an I2C bus along with an interrupt and a GPIO line (the latter is used for precise time synchronization between the board and the mote).
Figure 2: Block diagram of the sensorboard.
The motes are equipped with IEEE 802.15.4
compliant radio transceivers and support ad-hoc wireless
networking among the nodes and to/from the base station. The
sensorboard also supports full-speed USB transfers (with
custom USB dongles) for uploading recorded audio samples
to the PC. The on-board JTAG chain, directly accessible through a dedicated connector, contains the FPGA part and configuration memory and provides in-system
programming and debugging facilities.
The integrated Honeywell HMR3300 digital compass
module provides heading, pitch and roll information with 1◦
accuracy, which is essential for calculating and combining
directional estimates of the detected events.
Due to the complex voltage requirements of the FPGA,
the power supply circuitry is implemented on the
sensorboard and provides power both locally and to the mote. We
used a quad pack of rechargeable AA batteries as the power
source (although any other configuration is viable that meets
the voltage requirements). The FPGA core (1.2 V) and I/O
(3.3 V) voltages are generated by a highly efficient buck
switching regulator. The FPGA configuration (2.5 V) and a
separate 3.3 V power net are fed by low-current LDOs; the latter is used to provide independent power to the mote and to the Bluetooth radio. The regulators, except the last one, can be turned on/off from the mote or through the Bluetooth radio (via GPIO lines) to save power.
The first prototype of our system employed 10 sensor
nodes. Some of these nodes were mounted on military kevlar
helmets with the microphones directly attached to the
surface at about 20 cm separation as shown in Figure 3(a). The
rest of the nodes were mounted in plastic enclosures
(Figure 3(b)) with the microphones placed near the corners of
the boxes to form approximately 5 cm×10 cm rectangles.
4. SOFTWARE ARCHITECTURE
The sensor application relies on three subsystems
exploiting three different computing paradigms, as shown in Figure 4. Although each of these execution models suits its domain-specific tasks extremely well, this diversity
Figure 3: Sensor prototypes mounted on a kevlar helmet (a) and in a plastic box on a tripod (b).
presents a challenge for software development and system
integration. The sensor fusion and user interface
subsystem runs on PDAs and is implemented in Java.
The sensing and signal processing tasks are executed by an
FPGA, which also acts as a bridge between various wired
and wireless communication channels. The ad-hoc internode
communication, time synchronization and data sharing are
the responsibilities of a microcontroller based radio module.
Similarly, the application employs a wide variety of communication protocols such as Bluetooth and IEEE 802.15.4 wireless links, as well as optional UARTs, I2C and/or USB buses.
[Figure 4 block diagram: the soldier-operated device (PDA/laptop) hosts the user interface, sensor fusion and location engine with GPS input; the FPGA sensor board hosts the shockwave and muzzle blast detectors, recorder, coordinator, virtual register interface, PicoBlaze communication cores, analog channels, compass, Bluetooth, USB, UART and mote interfaces; the MICAz radio module provides radio control, message routing, acoustic event encoding and network time synchronization over the 2.4 GHz wireless link.]
Figure 4: Software architecture diagram.
The sensor fusion module receives and unpacks raw
measurements (time stamps and feature vectors) from the
sensorboard through the Bluetooth link. Also, it fine tunes
the execution of the signal processing cores by setting
parameters through the same link. Note that measurements
from other nodes along with their location and orientation
information also arrive from the sensorboard which acts as
a gateway between the PDA and the sensor network. The
handheld device obtains its own GPS location data and directly receives orientation information through the
sensorboard. The results of the sensor fusion are displayed on the
PDA screen with low latency. Since the application is
implemented in pure Java, it is portable across different PDA
platforms.
The border between software and hardware is
considerably blurred on the sensor board. The IP cores, implemented in hardware description languages (HDL) on the reconfigurable FPGA fabric, closely resemble hardware building blocks. However, some of them, most notably the soft processor cores, execute true software programs. The
primary tasks of the sensor board software are 1) acquiring
data samples from the analog channels, 2) processing
acoustic data (detection), and 3) providing access to the results
and run-time parameters through different interfaces.
As it is shown in Figure 4, a centralized virtual register
file contains the address decoding logic, the registers for
storing parameter values and results and the point to point
data buses to and from the peripherals. Thus, it effectively
integrates the building blocks within the sensorboard and
decouples the various communication interfaces. This
architecture enabled us to deploy the same set of sensors in a
centralized scenario, where the ad-hoc mote network (using
the I2C interface) collected and forwarded the results to a
base station or to build a decentralized system where the
local PDAs execute the sensor fusion on the data obtained
through the Bluetooth interface (and optionally from other
sensors through the mote interface). The same set of
registers are also accessible through a UART link with a terminal
emulation program. Also, because the low-level interfaces
are hidden by the register file, one can easily add/replace
these with new ones (e.g., the first generation of motes
supported a standard μP interface bus on the sensor connector,
which was dropped in later designs).
The most important results are the time stamps of the
detected events. These time stamps and all other timing
information (parameters, acoustic event features) are based
on a 1 MHz clock and an internal timer on the FPGA. The
time conversion and synchronization between the sensor
network and the board is done by the mote by periodically
requesting the capture of the current timer value through a
dedicated GPIO line and reading the captured value from
the register file through the I2C interface. Based on the current and previous readings and the corresponding mote
local time stamps, the mote can calculate and maintain the
scaling factor and offset between the two time domains.
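A minimal Java sketch of this two-point linear time conversion is shown below; the class and method names are hypothetical, and the real implementation runs on the mote in nesC.

```java
/** Minimal sketch of the linear time conversion between the sensorboard's
 *  1 MHz timer and the mote clock, maintained from periodic capture pairs.
 *  Class and method names are hypothetical; the real code runs on the mote. */
public class BoardTimeConverter {
    private long prevBoardTicks, prevMoteTime;  // previous capture pair
    private boolean havePrevious = false;
    private double scale = 1.0;                 // board ticks -> mote time scaling factor
    private long offset = 0;                    // offset between the two time domains

    /** Called after each periodic capture of the board timer (GPIO trigger + I2C read). */
    public void update(long boardTicks, long moteTime) {
        if (havePrevious && boardTicks != prevBoardTicks) {
            scale = (double) (moteTime - prevMoteTime) / (boardTicks - prevBoardTicks);
            offset = moteTime - Math.round(scale * boardTicks);
        }
        prevBoardTicks = boardTicks;
        prevMoteTime = moteTime;
        havePrevious = true;
    }

    /** Converts a sensorboard time stamp (e.g. a detection ToA) to mote time. */
    public long boardToMote(long boardTicks) {
        return Math.round(scale * boardTicks) + offset;
    }
}
```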
The mote interface is implemented by the I2C slave IP
core and a thin adaptation layer which provides a data and
address bus abstraction on top of it. The maximum
effective bandwidth is 100 Kbps through this interface. The
FPGA contains several UART cores as well: for
communicating with the on-board Bluetooth module, for
controlling the digital compass and for providing a wired RS232
link through a dedicated connector. The control, status and
data registers of the UART modules are available through
the register file. The higher level protocols on these lines are
implemented by Xilinx PicoBlaze microcontroller cores [13]
and corresponding software programs. One of them provides
a command line interface for test and debug purposes, while
the other is responsible for parsing compass readings. By
default, they are connected to the RS232 port and to the
on-board digital compass line respectively, however, they
can be rewired to any communication interface by changing
the register file base address in the programs (e.g. the
command line interface can be provided through the Bluetooth
channel).
Two of the external interfaces are not accessible through
the register file: a high speed USB link and the SRAM
interface are tied to the recorder block. The USB module
implements a simple FIFO with parallel data lines connected to an
external FT245R USB device controller. The RAM driver
implements data read/write cycles with correct timing and
is connected to the on-board pseudo SRAM. These
interfaces provide 1 MB/s effective bandwidth for downloading
recorded audio samples, for example.
The data acquisition and signal processing paths exhibit
clear symmetry: the same set of IP cores are instantiated
four times (i.e. the number of acoustic channels) and run
independently. The signal paths meet only just before
the register file. Each of the analog channels is driven by
a serial A/D core for providing a 20 MHz serial clock and
shifting in 8-bit data samples at 1 MS/s and a digital
potentiometer driver for setting the required gain. Each channel
has its own shockwave and muzzle blast detector, which are
described in Section 5. The detectors fetch run-time
parameter values from the register file and store their results there
as well. The coordinator core constantly monitors the
detection results and generates a mote interrupt promptly upon
full detection or after a reasonable timeout after partial
detection.
The recorder component is not used in the final
deployment, however, it is essential for development purposes for
refining parameter values for new types of weapons or for
other acoustic sources. This component receives the
samples from all channels and stores them in circular buffers in
the PSRAM device. If the signal amplitude on one of the
channels crosses a predefined threshold, the recorder
component suspends the sample collection with a predefined delay
and dumps the contents of the buffers through the USB link.
The length of these buffers and delays, the sampling rate,
the threshold level and the set of recorded channels can be
(re)configured run-time through the register file. Note that
the core operates independently from the other signal
processing modules, therefore, it can be used to validate the
detection results off-line.
The FPGA cores are implemented in VHDL, the PicoBlaze
programs are written in assembly. The complete
configuration occupies 40% of the resources (slices) of the FPGA and
the maximum clock speed is 30 MHz, which is safely higher
than the speed used with the actual device (20MHz).
The MICAz motes are responsible for distributing
measurement data across the network, which drastically
improves the localization and classification results at each node.
Besides a robust radio (MAC) layer, the motes require two
essential middleware services to achieve this goal. The
messages need to be propagated in the ad-hoc multihop network
using a routing service. We successfully integrated the
Directed Flood-Routing Framework (DFRF) [9] in our
application. Apart from automatic message aggregation and
efficient buffer management, the most unique feature of DFRF
is its plug-in architecture, which accepts custom routing
policies. Routing policies are state machines that govern
how received messages are stored, resent or discarded.
Example policies include spanning tree routing, broadcast,
geographic routing, etc. Different policies can be used for
different messages concurrently, and the application is able to
116
change the underlying policies at run-time (e.g., because of
the changing RF environment or power budget). In fact, we
switched several times between a simple but lavish broadcast
policy and a more efficient gradient routing on the field.
Correlating ToA measurements requires a common time
base and precise time synchronization in the sensor network.
The Routing Integrated Time Synchronization (RITS) [15]
protocol relies on very accurate MAC-layer time-stamping
to embed the cumulative delay that a data message accrued
since the time of the detection in the message itself. That
is, at every node it measures the time the message spent
there and adds this to the number in the time delay slot of
the message, right before it leaves the current node. Every
receiving node can subtract the delay from its current time
to obtain the detection time in its local time reference. The
service provides very accurate time conversion (few μs per
hop error), which is more than adequate for this application.
Note that the motes also need to convert the sensorboard time stamps to mote time, as described earlier.
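The following is a minimal sketch of this delay-accumulation idea; the field and method names are illustrative, and the actual RITS implementation is part of the mote middleware described in [15].

```java
/** Sketch of the RITS idea: every hop adds its residence time to a delay field,
 *  and the receiver subtracts the accumulated delay from its local receive time
 *  to obtain the detection time in its own clock. Names are illustrative. */
public class RitsMessage {
    private long accumulatedDelayMicros;   // cumulative delay since the detection

    /** Called by a forwarding node right before the message leaves it. */
    public void addResidenceTime(long arrivalMicros, long departureMicros) {
        accumulatedDelayMicros += departureMicros - arrivalMicros;
    }

    /** Called by a receiving node: detection time expressed in its local clock. */
    public long detectionTimeLocal(long localReceiveMicros) {
        return localReceiveMicros - accumulatedDelayMicros;
    }
}
```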
The mote application is implemented in nesC [5] and is
running on top of TinyOS [6]. With its 3 KB RAM and
28 KB program space (ROM) requirement, it easily fits on
the MICAz motes.
5. DETECTION ALGORITHM
There are several characteristics of acoustic shockwaves
and muzzle blasts which distinguish their detection and
signal processing algorithms from regular audio applications.
Both events are transient by their nature and present very
intense stimuli to the microphones. This is increasingly
problematic with low-cost electret microphones designed for picking up regular speech or music. Although
mechanical damping of the microphone membranes can mitigate
the problem, this approach is not without side effects. The
detection algorithms have to be robust enough to handle
severe nonlinear distortion and transitory oscillations. Since
the muzzle blast signature closely follows the shockwave
signal and because of potential automatic weapon bursts, it is
extremely important to settle the audio channels and the
detection logic as soon as possible after an event. Also,
precise angle of arrival estimation necessitates high sampling
frequency (in the MHz range) and accurate event detection.
Moreover, the detection logic needs to process multiple
channels in parallel (4 channels on our existing hardware).
These requirements dictated simple and robust algorithms
both for muzzle blast and shockwave detections. Instead of
using mundane energy detectors, which might not be able to distinguish the two different events, the applied
detectors strive to find the most important characteristics of the
two signals in the time-domain using simple state machine
logic. The detectors are implemented as independent IP
cores within the FPGA, one pair for each channel. The
cores are run-time configurable and provide detection event
signals with high precision time stamps and event specific
feature vectors. Although the cores are running
independently and in parallel, a crude local fusion module integrates
them by shutting down those cores which missed their events
after a reasonable timeout and by generating a single
detection message towards the mote. At this point, the mote can
read and forward the detection times and features and is
responsible to restart the cores afterwards.
Figure 5: Shockwave signal generated by a 5.56 × 45 mm NATO projectile (a) and the state machine of the detection algorithm (b).
The most conspicuous characteristics of an acoustic shockwave (see Figure 5(a)) are the steep rising edges at the beginning and end of the signal. Also, the length of the N-wave is fairly predictable, as described in Section 6.5, and is relatively short (200-300 μs). The shockwave detection core continuously looks for two rising edges within a given interval. The state machine of the algorithm is shown in Figure 5(b). The input parameters are the minimum steepness of the edges (D, E) and the bounds on the length of the wave (Lmin, Lmax). The only feature calculated by the core is the length of the observed shockwave signal.
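The Java sketch below models this state machine over a buffered signal for illustration; the deployed detector is a streaming VHDL core on the FPGA, and only the parameter names (D, E, Lmin, Lmax) follow the description above.

```java
/** Software model of the shockwave detector state machine of Figure 5(b),
 *  run here over a buffered signal; the real detector is a streaming VHDL core.
 *  D and E define the minimum edge steepness, Lmin and Lmax bound the N-wave length. */
public class ShockwaveDetector {
    private enum State { IDLE, FIRST_EDGE, FIRST_EDGE_DONE, SECOND_EDGE }

    /** Returns the detected N-wave length in samples, or -1 if none is found. */
    public static int detect(int[] s, int D, int E, int Lmin, int Lmax) {
        State state = State.IDLE;
        int tStart = 0;
        for (int t = D; t < s.length; t++) {
            boolean steepRise = s[t] - s[t - D] > E;
            if (state != State.IDLE && t - tStart >= Lmax) {
                state = State.IDLE;                      // candidate too long, start over
            }
            switch (state) {
                case IDLE:
                    if (steepRise) { tStart = t; state = State.FIRST_EDGE; }
                    break;
                case FIRST_EDGE:
                    if (!steepRise) state = State.FIRST_EDGE_DONE;
                    break;
                case FIRST_EDGE_DONE:
                    if (steepRise && t - tStart > Lmin) state = State.SECOND_EDGE;
                    break;
                case SECOND_EDGE:
                    if (!steepRise) return t - tStart;   // FOUND: length of the N-wave
                    break;
            }
        }
        return -1;
    }
}
```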
In contrast to shockwaves, the muzzle blast signatures are
characterized by a long initial period (1-5 ms) where the first
half period is significantly shorter than the second half [4].
Due to the physical limitations of the analog circuitry
described at the beginning of this section, irregular oscillations
and glitches might show up within this longer time window
as they can be clearly seen in Figure 6(a). Therefore, the real
challenge for the matching detection core is to identify the
first and second half periods properly. The state machine
(Figure 6(b)) does not work on the raw samples directly
but is fed by a zero crossing (ZC) encoder. After the initial
triggering, the detector attempts to collect those ZC
segments which belong to the first period (positive amplitude)
while discarding too short (in our terminology: garbage)
segments, effectively implementing a rudimentary low-pass
filter in the ZC domain. After it encounters a sufficiently
long negative segment, it runs the same collection logic for
the second half period. If too much garbage is discarded
in the collection phases, the core resets itself to prevent the
(false) detection of the halves from completely different
periods separated by rapid oscillation or noise. Finally, if the
constraints on the total length and on the length ratio hold,
the core generates a detection event along with the actual
length, amplitude and energy of the period calculated
concurrently. The initial triggering mechanism is based on two
amplitude thresholds: one static (but configurable)
amplitude level and a dynamically computed one. The latter one
is essential to adapt the sensor to different ambient noise
environments and to temporarily suspend the muzzle blast
detector after a shock wave event (oscillations in the analog
section or reverberations in the sensor enclosure might
otherwise trigger false muzzle blast detections). The dynamic
noise level is estimated by a single pole recursive low-pass
filter (cutoff @ 0.5 kHz) on the FPGA.
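As an illustration of this dynamic threshold, the sketch below applies a single-pole recursive low-pass filter to the signal magnitude in Java; the smoothing coefficient and the way the static and dynamic thresholds are combined are assumptions for the example, not the exact FPGA configuration.

```java
/** Sketch of the dynamic noise-level estimate used for muzzle blast triggering:
 *  a single-pole recursive low-pass filter on the signal magnitude. The smoothing
 *  coefficient and the threshold combination are assumptions for this example. */
public class NoiseFloor {
    private double level = 0.0;
    private final double alpha;                 // smoothing coefficient in (0, 1)

    public NoiseFloor(double alpha) { this.alpha = alpha; }

    /** Update with a new sample and return the current noise-level estimate. */
    public double update(double sample) {
        level += alpha * (Math.abs(sample) - level);   // y[n] = y[n-1] + a*(|x[n]| - y[n-1])
        return level;
    }

    /** A sample triggers only if it exceeds both the static and the dynamic threshold. */
    public boolean triggers(double sample, double staticThreshold, double dynamicFactor) {
        return Math.abs(sample) > staticThreshold && Math.abs(sample) > dynamicFactor * level;
    }
}
```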
Figure 6: Muzzle blast signature (a) produced by an
M16 assault rifle and the corresponding detection
logic (b).
The detection cores were originally implemented in Java
and evaluated on pre-recorded signals because of much faster
test runs and more convenient debugging facilities. Later
on, they were ported to VHDL and synthesized using the
Xilinx ISE tool suite. The functional equivalence between
the two implementations was tested by VHDL test benches
and Python scripts which provided an automated way to
exercise the detection cores on the same set of pre-recorded
signals and to compare the results.
6. SENSOR FUSION
The sensor fusion algorithm receives detection messages
from the sensor network and estimates the bullet trajectory,
the shooter position, the caliber of the projectile and the
type of the weapon. The algorithm consists of well separated
computational tasks outlined below:
1. Compute muzzle blast and shockwave directions of
arrivals for each individual sensor (see 6.1).
2. Compute range estimates. This algorithm can
analytically fuse a pair of shockwave and muzzle blast AoA
estimates. (see 6.2).
3. Compute a single trajectory from all shockwave
measurements (see 6.3).
4. If a trajectory is available, compute the range (see 6.4); otherwise compute the shooter position first and then the trajectory based on it (see 6.4).
5. If a trajectory is available, compute the caliber (see 6.5).
6. If the caliber is available, compute the weapon type (see 6.6).
We describe each step in the following sections in detail.
6.1 Direction of arrival
The first step of the sensor fusion is to calculate the
muzzle blast and shockwave AoA-s for each sensorboard. Each
sensorboard has four microphones that measure the ToA-s.
Since the microphone spacing is orders of magnitude smaller
than the distance to the sound source, we can approximate
the approaching sound wave front with a plane (far field
assumption).
Let us formalize the problem for 3 microphones first. Let
P1, P2 and P3 be the position of the microphones ordered by
time of arrival t1 < t2 < t3. First we apply a simple
geometry validation step. The measured time difference between
two microphones cannot be larger than the sound
propagation time between the two microphones:
|ti − tj| <= |Pi − Pj |/c + ε
Where c is the speed of sound and ε is the maximum
measurement error. If this condition does not hold, the
corresponding detections are discarded. Let v(x, y, z) be the
normal vector of the unknown direction of arrival. We also
use r1(x1, y1, z1), the vector from P1 to P2 and r2(x2, y2, z2),
the vector from P1 to P3. Let's consider the projection of the direction of the motion of the wave front (v) onto r1, divided by the speed of sound (c). This gives us how long it takes the wave front to propagate from P1 to P2:
vr1 = c(t2 − t1)
The same relationship holds for r2 and v:
vr2 = c(t3 − t1)
We also know that v is a normal vector:
vv = 1
Moving from vectors to coordinates using the dot product
definition leads to a quadratic system:
x·x1 + y·y1 + z·z1 = c(t2 − t1)
x·x2 + y·y2 + z·z2 = c(t3 − t1)
x^2 + y^2 + z^2 = 1
We omit the solution steps here, as they are
straightforward, but long. There are two solutions (if the source is on
the P1P2P3 plane the two solutions coincide). We use the
fourth microphone's measurement, if there is one, to
eliminate one of them. Otherwise, both solutions are considered
for further processing.
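The following Java sketch solves this system in the way the geometry suggests: the component of v in the plane of r1 and r2 is obtained from the two linear constraints, and the out-of-plane component from the unit-length condition, giving the two candidate solutions. It is an illustration of the math above (with an assumed nominal speed of sound), not the deployed fusion code.

```java
/** Far-field AoA computation: given microphone offsets r1 = P2 - P1, r2 = P3 - P1
 *  and arrival times t1 < t2 < t3, solve v.r1 = c(t2-t1), v.r2 = c(t3-t1), |v| = 1.
 *  Returns the two candidate unit vectors; they coincide when the source lies in
 *  the microphone plane. Illustrative sketch only. */
public class DirectionOfArrival {
    static final double C = 343.0;   // nominal speed of sound (m/s)

    /** Each returned row is a candidate direction-of-arrival unit vector (x, y, z). */
    public static double[][] solve(double[] r1, double[] r2,
                                   double t1, double t2, double t3) {
        double d1 = C * (t2 - t1), d2 = C * (t3 - t1);
        // In-plane part v_p = a*r1 + b*r2 from the two dot-product constraints (Cramer's rule).
        double g11 = dot(r1, r1), g12 = dot(r1, r2), g22 = dot(r2, r2);
        double det = g11 * g22 - g12 * g12;
        double a = (d1 * g22 - d2 * g12) / det;
        double b = (d2 * g11 - d1 * g12) / det;
        double[] vp = add(scale(r1, a), scale(r2, b));
        // Out-of-plane part: v = v_p +/- s*n, with n normal to the r1-r2 plane and |v| = 1.
        double[] n = normalize(cross(r1, r2));
        double s = Math.sqrt(Math.max(0.0, 1.0 - dot(vp, vp)));
        return new double[][] { add(vp, scale(n, s)), add(vp, scale(n, -s)) };
    }

    static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    static double[] add(double[] a, double[] b) { return new double[]{a[0]+b[0], a[1]+b[1], a[2]+b[2]}; }
    static double[] scale(double[] a, double k) { return new double[]{k*a[0], k*a[1], k*a[2]}; }
    static double[] cross(double[] a, double[] b) {
        return new double[]{a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
    }
    static double[] normalize(double[] a) { double l = Math.sqrt(dot(a, a)); return scale(a, 1.0/l); }
}
```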
6.2 Muzzle-shock fusion
Figure 7: Section plane of a shot (at P) and two
sensors (at P1 and at P2). One sensor detects the
muzzle blast's, the other the shockwave's time and direction of arrival.
Consider the situation in Figure 7. A shot was fired from
P at time t. Both P and t are unknown. We have one muzzle blast and one shockwave detection by two different sensors with AoA and, hence, ToA information available. The
muzzle blast detection is at position P1 with time t1 and AoA
u. The shockwave detection is at P2 with time t2 and AoA
v. u and v are normal vectors. It is shown below that these
measurements are sufficient to compute the position of the
shooter (P).
Let P2′ be the point on the extended shockwave cone surface where PP2′ is perpendicular to the surface. Note that PP2′ is parallel with v. Since P2′ is on the cone surface which hits P2, a sensor at P2′ would detect the same shockwave time of arrival (t2). The cone surface travels at the speed of sound (c), so we can express P using P2′:
P = P2′ + cv(t2 − t).
P can also be expressed from P1:
P = P1 + cu(t1 − t)
yielding
P1 + cu(t1 − t) = P2′ + cv(t2 − t).
P2P2′ is perpendicular to v:
(P2′ − P2)v = 0
yielding
(P1 + cu(t1 − t) − cv(t2 − t) − P2)v = 0
containing only one unknown t. One obtains:
t = ((P1 − P2)v/c + uv·t1 − t2) / (uv − 1).
From here we can calculate the shooter position P.
Let's consider the special single sensor case where P1 = P2 (one sensor detects both shockwave and muzzle blast AoA). In this case:
t = (uv·t1 − t2) / (uv − 1).
Since u and v are not used separately, only their dot product uv, the absolute orientation of the sensor can be arbitrary; we still get t, which gives us the range.
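A minimal Java sketch of this computation is given below. It assumes the constant-speed (conical) shockwave model and a nominal speed of sound; the deceleration issue discussed next is not corrected for, and it is an illustration of the formula rather than the deployed code.

```java
/** Muzzle blast / shockwave fusion: given a muzzle blast detection (P1, t1, u)
 *  and a shockwave detection (P2, t2, v), compute the shot time t and the
 *  shooter position P = P1 + c*u*(t1 - t). Constant-speed cone model assumed. */
public class MuzzleShockFusion {
    static final double C = 343.0;  // nominal speed of sound (m/s)

    public static double[] shooterPosition(double[] p1, double t1, double[] u,
                                           double[] p2, double t2, double[] v) {
        double uv = dot(u, v);
        double[] p1MinusP2 = {p1[0]-p2[0], p1[1]-p2[1], p1[2]-p2[2]};
        // t = ((P1 - P2).v / c + (u.v)*t1 - t2) / (u.v - 1)
        double t = (dot(p1MinusP2, v) / C + uv * t1 - t2) / (uv - 1.0);
        double k = C * (t1 - t);                    // distance travelled by the muzzle blast
        return new double[]{p1[0] + k*u[0], p1[1] + k*u[1], p1[2] + k*u[2]};
    }

    static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
}
```

In the single sensor case (P1 = P2) the first term vanishes and only t, and hence the range, is obtained.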
Here we assumed that the shockwave is a cone which is
only true for constant projectile speeds. In reality, the angle
of the cone slowly grows; the surface resembles one half of
an American football. The decelerating bullet results in a
smaller time difference between the shockwave and the
muzzle blast detections because the shockwave generation slows
down with the bullet. A smaller time difference results in a
smaller range, so the above formula underestimates the true
range. However, it can still be used with a proper
deceleration correction function. We leave this for future work.
6.3 Trajectory estimation
Danicki showed that the bullet trajectory and speed can
be computed analytically from two independent shockwave
measurements where both ToA and AoA are measured [2].
The method gets more sensitive to measurement errors as
the two shockwave directions get closer to each other. In
the special case when both directions are the same, the
trajectory cannot be computed. In a real world application,
the sensors are typically deployed on a plane approximately.
In this case, all sensors located on one side of the
trajectory measure almost the same shockwave AoA. To avoid
this error sensitivity problem, we consider shockwave
measurement pairs only if the direction of arrival difference is
larger than a certain threshold.
We have multiple sensors and one sensor can report two
different directions (when only three microphones detect the
shockwave). Hence, we typically have several trajectory
candidates, i.e. one for each AoA pair over the threshold. We
applied an outlier filtering and averaging method to fuse
together the shockwave direction and time information and
come up with a single trajectory. Assume that we have
N individual shockwave AoA measurements. Let's take all possible unordered pairs where the direction difference is above the mentioned threshold and compute the trajectory for each. This gives us at most N(N−1)/2 trajectories. A
trajectory is represented by one point pi and the normal vector
vi (where i is the trajectory index). We define the distance
of two trajectories as the dot product of their normal
vectors:
D(i, j) = vi·vj
For each trajectory a neighbor set is defined:
N(i) := {j|D(i, j) < R}
where R is a radius parameter. The largest neighbor set is
considered to be the core set C, all other trajectories are
outliers. The core set can be found in O(N^2) time. The
trajectories in the core set are then averaged to get the final
trajectory.
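A Java sketch of this core-set selection is shown below. The averaging step is simplified (component-wise mean of the points, normalized mean of the direction vectors), and the "distance" follows the dot-product definition given above; it is an illustration, not the deployed fusion code.

```java
import java.util.ArrayList;
import java.util.List;

/** Core-set outlier filtering: among the candidate trajectories, find the largest
 *  neighbor set under D(i,j) = vi.vj and radius R, then average its members. */
public class TrajectoryFusion {
    /** A candidate trajectory: a point on the line and a unit direction vector. */
    public static class Trajectory {
        final double[] point, dir;
        Trajectory(double[] point, double[] dir) { this.point = point; this.dir = dir; }
    }

    public static Trajectory fuse(List<Trajectory> candidates, double R) {
        if (candidates.isEmpty()) return null;
        List<Trajectory> core = new ArrayList<>();
        for (Trajectory ti : candidates) {                 // O(N^2) core-set search
            List<Trajectory> neighbors = new ArrayList<>();
            for (Trajectory tj : candidates) {
                if (distance(ti, tj) < R) neighbors.add(tj);
            }
            if (neighbors.size() > core.size()) core = neighbors;
        }
        if (core.isEmpty()) return null;                   // no consistent subset found
        double[] p = new double[3], d = new double[3];
        for (Trajectory t : core) {
            for (int k = 0; k < 3; k++) { p[k] += t.point[k] / core.size(); d[k] += t.dir[k]; }
        }
        double len = Math.sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
        return new Trajectory(p, new double[]{d[0]/len, d[1]/len, d[2]/len});
    }

    private static double distance(Trajectory a, Trajectory b) {
        return a.dir[0]*b.dir[0] + a.dir[1]*b.dir[1] + a.dir[2]*b.dir[2];  // D(i,j) = vi.vj
    }
}
```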
It can happen that we cannot form any sensor pairs
because of the direction difference threshold. It means all
sensors are on the same side of the trajectory. In this case,
we first compute the shooter position (described in the next
section) that fixes p making v the only unknown. To find
v in this case, we use a simple high resolution grid search
and minimize an error function based on the shockwave
directions.
We have made experiments to utilize the measured
shockwave length in the trajectory estimation. There are some
promising results, but it needs further research.
6.4 Shooter position estimation
The shooter position estimation algorithm aggregates the
following heterogeneous information generated by earlier
computational steps:
1. trajectory,
2. muzzle blast ToA at a sensor,
3. muzzle blast AoA at a sensor, which is effectively a
bearing estimate to the shooter, and
4. range estimate at a sensor (when both shockwave and
muzzle blast AoA are available).
Some sensors report only ToA, some also have bearing estimate(s) and some have range estimate(s) as well, depending on the number of successful muzzle blast and shockwave detections by the sensor. For an example, refer to Figure 8. Note that a sensor may have two different bearing and range estimates. Three detections give two possible AoA-s for the muzzle blast (i.e. bearing) and/or the shockwave. Furthermore, the combination of two different muzzle blast and shockwave AoA-s may result in two different ranges.
Figure 8: Example of heterogeneous input data for the shooter position estimation algorithm. All sensors have ToA measurements (t1, t2, t3, t4, t5), one sensor has a single bearing estimate (v2), one sensor has two possible bearings (v3, v3′) and one sensor has two bearing and two range estimates (v1, v1′, r1, r1′).
In a multipath environment, these detections will not only
contain Gaussian noise, but also possibly large errors due to echoes. It has been shown in our earlier work that a similar
problem can be solved efficiently with an interval arithmetic
based bisection search algorithm [8]. The basic idea is to
define a discrete consistency function over the area of
interest and subdivide the space into 3D boxes. For any given
3D box, this function gives the number of measurements
supporting the hypothesis that the shooter was within that
box. The search starts with a box large enough to contain
the whole area of interest, then zooms in by dividing and
evaluating boxes. The box with the maximum consistency
is divided until the desired precision is reached.
Backtracking is possible to avoid getting stuck in a local maximum.
This approach has been shown to be fast enough for
online processing. Note, however, that when the trajectory
has already been calculated in previous steps, the search
needs to be done only on the trajectory making it orders of
magnitude faster.
Next let us describe how the consistency function is
calculated in detail. Consider a three-dimensional box B whose consistency value we would like to compute. First we
consider only the ToA information. If one sensor has
multiple ToA detections, we use the average of those times, so
one sensor supplies at most one ToA estimate. For each
ToA, we can calculate the corresponding time of the shot,
since the origin is assumed to be in box B. Since it is a box
and not a single point, this gives us an interval for the shot
time. The maximum number of overlapping time intervals
gives us the value of the consistency function for B. For a
detailed description of the consistency function and search
algorithm, refer to [8].
Here we extend the approach in the following way. We modify the consistency function based on the bearing and range data from individual sensors. A bearing estimate supports B if the line segment starting from the sensor with the measured direction intersects box B. A range estimate supports B if the sphere centered at the sensor with a radius equal to the range intersects B. Instead of simply checking whether the position specified by the corresponding bearing-range pairs falls within B, this eliminates the sensor's possible orientation error. The value of the consistency function is incremented by one for each bearing and range estimate that is consistent with B.
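The sketch below illustrates the counting idea in Java for a single candidate box B. The geometric tests (deriving the shot-time interval from the box, segment-box and sphere-box intersection) are assumed to be done elsewhere, and the whole sketch is a simplification of the interval-arithmetic search of [8], not its implementation.

```java
/** Simplified consistency count for a candidate box B: ToA measurements contribute
 *  via overlapping shot-time intervals; each bearing or range estimate consistent
 *  with B adds one. */
public class Consistency {
    /** intervals[i] = {earliest, latest} possible shot time for sensor i, given B. */
    public static int maxOverlappingIntervals(double[][] intervals) {
        int best = 0;
        for (double[] candidate : intervals) {        // a maximum occurs at some interval start
            double start = candidate[0];
            int count = 0;
            for (double[] iv : intervals) {
                if (iv[0] <= start && start <= iv[1]) count++;
            }
            best = Math.max(best, count);
        }
        return best;
    }

    public static int consistency(double[][] toaIntervals,
                                  int consistentBearings, int consistentRanges) {
        return maxOverlappingIntervals(toaIntervals) + consistentBearings + consistentRanges;
    }
}
```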
6.5 Caliber estimation
The shockwave signal characteristics have been studied
before by Whitham [20]. He showed that the shockwave period
T is related to the projectile diameter d, the length l, the
perpendicular miss distance b from the bullet trajectory to
the sensor, the Mach number M and the speed of sound c.
T = 1.82·M·b^(1/4)·d / (c·(M^2 − 1)^(3/8)·l^(1/4)) ≈ (1.82·d/c)·(Mb/l)^(1/4)
Figure 9: Shockwave length and miss distance
relationship. Each data point represents one
sensorboard after an aggregation of the individual
measurements of the four acoustic channels. Three
different caliber projectiles have been tested (196
shots, 10 sensors).
To illustrate the relationship between miss distance and
shockwave length, here we use all 196 shots with three
different caliber projectiles fired during the evaluation. (During
the evaluation we used data obtained previously using a few
practice shots per weapon.) 10 sensors (4 microphones per sensor) measured the shockwave length. For each sensor,
we considered the shockwave length estimation valid if at
least three out of four microphones agreed on a value with
at most 5 microsecond variance. This filtering leads to a
86% report rate per sensor and gets rid of large
measurement errors. The experimental data is shown in Figure 9.
Whitham's formula suggests that the shockwave length for a
given caliber can be approximated with a power function of
the miss distance (with a 1/4 exponent). Best fit functions
on our data are:
.50 cal: T = 237.75·b^0.2059
7.62 mm: T = 178.11·b^0.1996
5.56 mm: T = 144.39·b^0.1757
To evaluate a shot, we take the caliber whose
approximation function results in the smallest RMS error of the
filtered sensor readings. This method has less than 1%
caliber estimation error when an accurate trajectory estimate
is available. In other words, caliber estimation only works
if enough shockwave detections are made by the system to
compute a trajectory.
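A Java sketch of this decision rule follows, using the best-fit coefficients quoted above; the surrounding code structure (class and parameter names) is illustrative.

```java
/** Caliber decision: predict the shockwave length from the miss distance with the
 *  fitted power functions and pick the caliber class with the smallest RMS error
 *  over the filtered sensor readings. */
public class CaliberEstimator {
    static final String[] NAMES = {".50 cal", "7.62 mm", "5.56 mm"};
    static final double[] A = {237.75, 178.11, 144.39};   // T = A * b^P (T in us, b in m)
    static final double[] P = {0.2059, 0.1996, 0.1757};

    /** lengths[i]: measured shockwave length (us) at sensor i;
     *  missDist[i]: that sensor's distance from the estimated trajectory (m). */
    public static String estimate(double[] lengths, double[] missDist) {
        int best = 0;
        double bestRms = Double.MAX_VALUE;
        for (int c = 0; c < NAMES.length; c++) {
            double sumSq = 0;
            for (int i = 0; i < lengths.length; i++) {
                double predicted = A[c] * Math.pow(missDist[i], P[c]);
                sumSq += (lengths[i] - predicted) * (lengths[i] - predicted);
            }
            double rms = Math.sqrt(sumSq / lengths.length);
            if (rms < bestRms) { bestRms = rms; best = c; }
        }
        return NAMES[best];
    }
}
```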
6.6 Weapon estimation
We analyzed all measured signal characteristics to find
weapon specific information. Unfortunately, we concluded
that the observed muzzle blast signature is not characteristic
enough of the weapon for classification purposes. The
reflections of the high energy muzzle blast from the environment
have much higher impact on the muzzle blast signal shape
than the weapon itself. Shooting the same weapon from
different places caused larger differences on the recorded signal
than shooting different weapons from the same place.
Figure 10: AK47 and M240 bullet deceleration
measurements. Both weapons have the same caliber.
Data is approximated using simple linear regression.
Figure 11: M16, M249 and M4 bullet deceleration
measurements. All weapons have the same caliber.
Data is approximated using simple linear regression.
However, the measured speed of the projectile and its
caliber showed good correlation with the weapon type. This
is because for a given weapon type and ammunition pair,
the muzzle velocity is nearly constant. In Figures 10 and
11 we can see the relationship between the range and the
measured bullet speed for different calibers and weapons.
In the supersonic speed range, the bullet deceleration can
be approximated with a linear function. In case of the
7.62 mm caliber, the two tested weapons (AK47, M240) can
be clearly separated (Figure 10). Unfortunately, this is not
necessarily true for the 5.56 mm caliber. The M16 with its
higher muzzle speed can still be well classified, but the M4
and M249 weapons seem practically indistinguishable
(Figure 11). However, this may be partially due to the limited
number of practice shots we were able to take before the
actual testing began. More training data may reveal better
separation between the two weapons since their published
muzzle velocities do differ somewhat.
The system carries out weapon classification in the
following manner. Once the trajectory is known, the speed can be
calculated for each sensor based on the shockwave geometry.
To evaluate a shot, we choose the weapon type whose
deceleration function results in the smallest RMS error of the
estimated range-speed pairs for the estimated caliber class.
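A Java sketch of this rule is given below. The candidate weapons are modeled by linear range-speed functions whose coefficients would come from the regression on training shots; no coefficient values are given in the text, so the code structure is illustrative only.

```java
/** Weapon classification for the estimated caliber class: compare the measured
 *  range-speed pairs against each candidate weapon's linear deceleration model
 *  speed(range) = v0 - k*range and pick the weapon with the smallest RMS error. */
public class WeaponClassifier {
    public static class Weapon {
        final String name;
        final double v0, k;   // muzzle speed (m/s) and deceleration slope (m/s per m)
        public Weapon(String name, double v0, double k) { this.name = name; this.v0 = v0; this.k = k; }
    }

    /** ranges[i], speeds[i]: bullet speed estimated at sensor i from the shockwave geometry. */
    public static String classify(Weapon[] candidatesForCaliber, double[] ranges, double[] speeds) {
        String best = null;
        double bestRms = Double.MAX_VALUE;
        for (Weapon w : candidatesForCaliber) {
            double sumSq = 0;
            for (int i = 0; i < ranges.length; i++) {
                double predicted = w.v0 - w.k * ranges[i];
                sumSq += (speeds[i] - predicted) * (speeds[i] - predicted);
            }
            double rms = Math.sqrt(sumSq / ranges.length);
            if (rms < bestRms) { bestRms = rms; best = w.name; }
        }
        return best;
    }
}
```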
7. RESULTS
An independent evaluation of the system was carried out
by a team from NIST at the US Army Aberdeen Test Center
in April 2006 [19]. The experiment was setup on a shooting
range with mock-up wooden buildings and walls for
supporting elevated shooter positions and generating multipath
effects. Figure 12 shows the user interface with an aerial
photograph of the site. 10 sensor nodes were deployed on
surveyed points in an approximately 30×30 m area. There
were five fixed targets behind the sensor network. Several
firing positions were located at each of the firing lines at
50, 100, 200 and 300 meters. These positions were known
to the evaluators, but not to the operators of the system.
Six different weapons were utilized: AK47 and M240
firing 7.62 mm projectiles, M16, M4 and M249 with 5.56mm
ammunition and the .50 caliber M107.
Note that the sensors remained static during the test. The
primary reason for this is that nobody is allowed downrange
during live fire tests. Utilizing some kind of remote
control platform would have been too involved for the limited
time the range was available for the test. The experiment,
therefore, did not test the mobility aspect of the system.
During the one day test, there were 196 shots fired. The
results are summarized in Table 1. The system detected all
shots successfully. Since a ballistic shockwave is a unique
acoustic phenomenon, it makes the detection very robust.
There were no false positives for shockwaves, but there were
a handful of false muzzle blast detections due to parallel
tests of artillery at a nearby range.
Shooter Range (m) | Localization Rate | Caliber Accuracy | Trajectory Azimuth Error (deg) | Trajectory Distance Error (m) | Distance Error (m) | No. of Shots
50 | 93% | 100% | 0.86 | 0.91 | 2.2 | 54
100 | 100% | 100% | 0.66 | 1.34 | 8.7 | 54
200 | 96% | 100% | 0.74 | 2.71 | 32.8 | 54
300 | 97% | 97% | 1.49 | 6.29 | 70.6 | 34
All | 96% | 99.5% | 0.88 | 2.47 | 23.0 | 196
Table 1: Summary of results fusing all available
sensor observations. All shots were successfully
detected, so the detection rate is omitted. Localization
rate means the percentage of shots that the sensor
fusion was able to estimate the trajectory of. The
caliber accuracy rate is relative to the shots localized
and not all the shots because caliber estimation
requires the trajectory. The trajectory error is broken
down to azimuth in degrees and the actual distance
of the shooter from the trajectory. The distance
error shows the distance between the real shooter
position and the estimated shooter position. As such,
it includes the error caused by both the trajectory
and that of the range estimation. Note that the
traditional bearing and range measures are not good
ones for a distributed system such as ours because
of the lack of a single reference point.
Figure 12: The user interface of the system
showing the experimental setup. The 10 sensor nodes
are labeled by their ID and marked by dark circles.
The targets are black squares marked T-1 through
T-5. The long white arrows point to the shooter
position estimated by each sensor. Where it is
missing, the corresponding sensor did not have enough
detections to measure the AoA of either the
muzzle blast, the shockwave or both. The thick black
line and large circle indicate the estimated
trajectory and the shooter position as estimated by fusing
all available detections from the network. This shot
from the 100-meter line at target T-3 was localized
almost perfectly by the sensor network. The caliber
and weapon were also identified correctly. 6 out of
10 nodes were able to estimate the location alone.
Their bearing accuracy is within a degree, while the
range is off by less than 10% in the worst case.
The localization rate characterizes the system's ability to
successfully estimate the trajectory of shots. Since caliber
estimation and weapon classification relies on the trajectory,
non-localized shots are not classified either. There were 7
shots out of 196 that were not localized. The reason for
missed shots is the trajectory ambiguity problem that occurs
when the projectile passes on one side of all the sensors. In
this case, two significantly different trajectories can generate
the same set of observations (see [8] and also Section 6.3).
Instead of estimating which one is more likely or displaying
both possibilities, we decided not to provide a trajectory at
all. It is better not to give an answer other than a shot
alarm than misleading the soldier.
Localization accuracy is broken down to trajectory
accuracy and range estimation precision. The angle of the
estimated trajectory was better than 1 degree except for the
300 m range. Since the range should not affect trajectory
estimation as long as the projectile passes over the network,
we suspect that the slightly worse angle precision for 300 m
is due to the hurried shots we witnessed the soldiers taking
near the end of the day. This is also indicated by another
datapoint: the estimated trajectory distance from the
actual targets has an average error of 1.3 m for 300 m shots,
0.75 m for 200 m shots and 0.6 m for all but 300 m shots.
As the distance between the targets and the sensor network
was fixed, this number should not show a 2× improvement
just because the shooter is closer.
Since the angle of the trajectory itself does not characterize the overall error (there can be a translation as well), Table 1 also gives the distance of the shooter from the estimated trajectory. These indicate an error which is about
1-2% of the range. To put this into perspective, a
trajectory estimate for a 100 m shot will very likely go through or
very near the window the shooter is located at. Again, we
believe that the disproportionally larger errors at 300 m are
due to human errors in aiming. As the ground truth was
obtained by knowing the precise location of the shooter and
the target, any inaccuracy in the actual trajectory directly
adds to the perceived error of the system.
We call the estimation of the shooter's position on the calculated trajectory range estimation for lack of a better term. The range estimates are better than 5%
accurate from 50 m and 10% for 100 m. However, this goes
to 20% or worse for longer distances. We did not have a
facility to test the system before the evaluation for ranges
beyond 100 m. During the evaluation, we ran into the
problem of mistaking shockwave echoes for muzzle blasts. These
echoes reached the sensors before the real muzzle blast for
long range shots only: since the projectile travels 2-3× faster than the speed of sound, the time between the shockwave
(and its possible echo from nearby objects) and the muzzle
blast increases with increasing ranges. This resulted in
underestimating the range, since the system measured shorter
times than the real ones. Since the evaluation, we have fine-tuned
the muzzle blast detection algorithm to avoid this problem.
Distance M16 AK47 M240 M107 M4 M249 M4-M249
50m 100% 100% 100% 100% 11% 25% 94%
100m 100% 100% 100% 100% 22% 33% 100%
200m 100% 100% 100% 100% 50% 22% 100%
300m 67% 100% 83% 100% 33% 0% 57%
All 96% 100% 97% 100% 23% 23% 93%
Table 2: Weapon classification results. The
percentages are relative to the number of shots localized and
not all shots, as the classification algorithm needs to
know the trajectory and the range. Note that the
difference is small; there were 189 shots localized
out of the total 196.
The caliber and weapon estimation accuracy rates are
based on the 189 shots that were successfully localized. Note
that there was a single shot that was falsely classified by the
caliber estimator. The 73% overall weapon classification
accuracy does not seem impressive. But if we break it down
to the six different weapons tested, the picture changes
dramatically as shown in Table 2. For four of the weapons
(AK47, M16, M240 and M107), the classification rate is
almost 100%. There were only two shots out of approximately
140 that were missed. The M4 and M249 proved to be too
similar and they were mistaken for each other most of the
time. One possible explanation is that we had only a limited
number of test shots taken with these weapons right before
the evaluation and used the wrong deceleration
approximation function. Either this or a similar mistake was made
since if we simply used the opposite of the system's answer where one of these weapons was indicated, the accuracy
would have improved 3x. If we consider these two weapons
a single weapon class, then the classification accuracy for it
becomes 93%.
Note that the AK47 and M240 have the same caliber
(7.62 mm), just as the M16, M4 and M249 do (5.56 mm).
That is, the system is able to differentiate between weapons
of the same caliber. We are not aware of any system that
classifies weapons this accurately.
7.1 Single sensor performance
As was shown previously, a single sensor alone is able
to localize the shooter if it can determine both the muzzle
blast and the shockwave AoA, that is, it needs to measure
the ToA of both on at least three acoustic channels. While
shockwave detection is independent of the range (unless the projectile becomes subsonic), the likelihood of muzzle blast
detection beyond 150 meters is not enough for consistently
getting at least three per sensor node for AoA estimation.
Hence, we only evaluate the single sensor performance for
the 104 shots that were taken from 50 and 100 m. Note that
we use the same test data as in the previous section, but we
evaluate individually for each sensor.
Table 3 summarizes the results broken down by the ten
sensors utilized. Since this is now not a distributed system,
the results are given relative to the position of the given
sensor, that is, a bearing and range estimate is provided. Note
that many of the common error sources of the networked
system do not play a role here. Time synchronization is
not applicable. The sensor's absolute location is irrelevant (just as the relative location of multiple sensors). The sensor's orientation is still important though. There are several
disadvantages of the single sensor case compared to the
networked system: there is no redundancy to compensate for
other errors and to perform outlier rejection, the
localization rate is markedly lower, and a single sensor alone is not
able to estimate the caliber or classify the weapon.
Sensor id 1 2 3 5 7 8 9 10 11 12
Loc. rate 44% 37% 53% 52% 19% 63% 51% 31% 23% 44%
Bearing (deg) 0.80 1.25 0.60 0.85 1.02 0.92 0.73 0.71 1.28 1.44
Range (m) 3.2 6.1 4.4 4.7 4.6 4.6 4.1 5.2 4.8 8.2
Table 3: Single sensor accuracy for 108 shots fired
from 50 and 100 meters. Localization rate refers to
the percentage of shots the given sensor alone was
able to localize. The bearing and range values are
average errors. They characterize the accuracy of
localization from the given sensor's perspective.
The data indicates that the performance of the sensors
varied significantly, especially considering the localization
rate. One factor has to be the location of the given
sensor including how far it was from the firing lines and how
obstructed its view was. Also, the sensors were hand-built
prototypes utilizing nowhere near production quality
packaging/mounting. In light of these factors, the overall
average bearing error of 0.9 degrees and range error of 5 m
with a microphone spacing of less than 10 cm are excellent.
We believe that professional manufacturing and better
microphones could easily achieve better performance than the
best sensor in our experiment (>60% localization rate and
3 m range error).
Interestingly, the largest error in range was a huge 90 m, clearly due to some erroneous detection, yet the largest bearing error was less than 12 degrees, which is still a good
indication for the soldier where to look.
The overall localization rate over all single sensors was
42%, while for 50 m shots only, this jumped to 61%. Note
that the firing range was prepared to simulate an urban
area to some extent: there were a few single- and two-storey
wooden structures built both in and around the sensor
deployment area and the firing lines. Hence, not all sensors had
line-of-sight to all shooting positions. We estimate that 10%
of the sensors had an obstructed view of the shooter on
average. Hence, we can claim that a given sensor had about 50%
chance of localizing a shot within 130 m. (Since the sensor
deployment area was 30 m deep, 100 m shots correspond to
actual distances between 100 and 130 m.) Again, we
emphasize that localization needs at least three muzzle blast and
three shockwave detections out of a possible four for each per
sensor. The detection rate for single sensors, corresponding to at least one shockwave detection per sensor, was practically 100%.
Figure 13: Histogram showing what fraction of the
104 shots taken from 50 and 100 meters were
localized by at most how many individual sensors alone.
13% of the shots were missed by every single
sensor, i.e., none of them had both muzzle blast and
shockwave AoA detections. Note that almost all
of these shots were still accurately localized by the
networked system, i.e. the sensor fusion using all
available observations in the sensor network.
It would be misleading to interpret these results as the
system missing half the shots. As soldiers never work alone
and the sensor node is relatively cheap to afford having
every soldier equipped with one, we also need to look at the
overall detection rates for every shot. Figure 13 shows the
histogram of the percentage of shots vs. the number of
individual sensors that localized it. 13% of shots were not
localized by any sensor alone, but 87% were localized by at
least one sensor out of ten.
7.2 Error sources
In this section, we analyze the most significant sources of
error that affect the performance of the networked shooter
localization and weapon classification system. In order to
correlate the distributed observations of the acoustic events,
the nodes need to have a common time and space reference.
Hence, errors in the time synchronization, node localization
and node orientation all degrade the overall accuracy of the
system.
Our time synchronization approach yields errors
significantly less than 100 microseconds. As the sound travels
about 3 cm in that time, time synchronization errors have a
negligible effect on the system.
On the other hand, node location and orientation can have
a direct effect on the overall system performance. Notice
that to analyze this, we do not have to resort to simulation; instead, we can utilize the real test data gathered at Aberdeen. But instead of using the real sensor locations, which were known very accurately, and the measured and calibrated, almost perfect node orientations, we can add error terms to them and run the sensor fusion. This exactly replicates how
the system would have performed during the test using the
imprecisely known locations and orientations.
Another aspect of the system performance that can be
evaluated this way is the effect of the number of available
sensors. Instead of using all ten sensors in the data fusion,
we can pick any subset of the nodes to see how the accuracy
degrades as we decrease the number of nodes.
The following experiment was carried out. The number
of sensors was varied from 2 to 10 in increments of 2. Each
run picked the sensors randomly using a uniform
distribution. At each run each node was randomly moved to a
new location within a circle around its true position with
a radius determined by a zero-mean Gaussian distribution.
Finally, the node orientations were perturbed using a
zero-mean Gaussian distribution. Each combination of parameters was generated 100 times and utilized for all 196 shots.
The results are summarized in Figure 14. There is one 3D
barchart for each of the experiment sets with the given fixed
number of sensors. The x-axis shows the node location error,
that is, the standard deviation of the corresponding
Gaussian distribution that was varied between 0 and 6 meters.
The y-axis shows the standard deviation of the node
orientation error that was varied between 0 and 6 degrees. The
z-axis is the resulting trajectory azimuth error. Note that
the elevation angles showed somewhat larger errors than the
azimuth. Since all the sensors were in approximately a
horizontal plane and only a few shooter positions were out of the
same plane and only by 2 m or so, the test was not sufficient
to evaluate this aspect of the system.
There are many interesting observations one can make by
analyzing these charts. Node location errors in this range
have a small effect on accuracy. Node orientation errors, on
the other hand, noticeably degrade the performance. Still, the largest errors in this experiment, 3.5 degrees for 6 sensors and 5 degrees for 2 sensors, are very good.
Note that as the location and orientation errors increase
and the number of sensors decrease, the most significantly
affected performance metric is the localization rate. See
Table 4 for a summary. Successful localization goes down
from almost 100% to 50% when we go from 10 sensors to
2 even without additional errors. This is primarily caused
by geometry: for a successful localization, the bullet needs
to pass over the sensor network, that is, at least one sensor
should be on the side of the trajectory other than the rest
of the nodes. (This is a simplification for illustrative
purposes. If all the sensors and the trajectory are not coplanar,
localization may be successful even if the projectile passes
on one side of the network. See Section 6.3.) As the number of sensors was decreased in the experiment by randomly selecting a subset, the probability of trajectories satisfying this rule decreased. This also means that even if there are
[Figure 14 charts: four 3D bar charts of azimuth error (degrees) versus node position error (m, 0-6) and node orientation error (degrees, 0-6), one each for 2, 4, 6 and 8 sensors.]
Figure 14: The effect of node localization and orientation errors on azimuth accuracy with 2, 4, 6 and 8 nodes. Note that the chart for 10 nodes is almost identical to the 8-node case; hence, it is omitted.
many sensors (i.e., soldiers), but all of them are right next to each other, the localization rate will suffer. However, when the sensor fusion does provide a result, it is still accurate even with few available sensors and relatively large individual errors. A few consistent observations lead to good accuracy, as the inconsistent ones are discarded by the algorithm. This is also supported by the observation that for the cases with a higher number of sensors (8 or 10), the localization rate is hardly affected even by large errors.
Errors/Sensors 2 4 6 8 10
0 m, 0 deg 54% 87% 94% 95% 96%
2 m, 2 deg 53% 80% 91% 96% 96%
6 m, 0 deg 43% 79% 88% 94% 94%
0 m, 6 deg 44% 78% 90% 93% 94%
6 m, 6 deg 41% 73% 85% 89% 92%
Table 4: Localization rate as a function of the
number of sensors used, the sensor node location and
orientation errors.
One of the most significant observations from Figure 14 and Table 4 is that there is hardly any difference in the data for 6, 8 and 10 sensors. This means that there is little advantage in adding more nodes beyond 6 sensors as far as accuracy is concerned.
The speed of sound depends on the ambient temperature. The current prototype treats it as a constant that is typically set before a test. It would be straightforward to employ a temperature sensor to update the value of the speed of sound periodically during operation. Note also that wind may adversely affect the accuracy of the system. The sensor fusion, however, could incorporate wind speed into its calculations. This would be more complicated than temperature compensation, but it could be done.
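As a rough illustration of such compensation, the well-known approximation c ≈ 331.3 + 0.606·T m/s (T in degrees Celsius) could be re-evaluated whenever a temperature reading arrives. The sketch below is ours, and read_temperature_c is a hypothetical sensor routine.

def speed_of_sound_mps(temp_celsius):
    """Approximate speed of sound in dry air (m/s) for temperature in Celsius."""
    return 331.3 + 0.606 * temp_celsius

def update_speed_of_sound(read_temperature_c):
    """Poll a temperature sensor and return the refreshed speed of sound."""
    return speed_of_sound_mps(read_temperature_c())

# Example: at 0 C the model gives about 331.3 m/s, at 20 C about 343.4 m/s,
# a roughly 3.5% difference that maps directly into range estimation error.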
Other practical issues also need to be examined before a real-world deployment. Silencers reduce the muzzle blast energy and hence the effective range at which the system can detect it. However, silencers do not affect the shockwave, and the system would still detect the trajectory and caliber accurately. The range and weapon type could not be estimated without muzzle blast detections. Subsonic weapons do not produce a shockwave. However, this is not of great significance, since they have shorter range, lower accuracy and much less lethality; hence, their use is not widespread and they pose less danger in any case.
Another issue is the type of ammunition used. Irregular armies may use substandard, even hand-manufactured bullets. This affects the muzzle velocity of the weapon. For weapon classification to work accurately, the system would need to be calibrated with the typical ammunition used by the given adversary.
8. RELATED WORK
Acoustic detection and recognition have been under research since the early fifties. The area is closely related to the topic of supersonic flow mechanics [20]. Fansler analyzed the complex near-field pressure waves that occur within a foot of the muzzle blast. Fansler's work gives a good idea of the ideal muzzle blast pressure wave without contamination from echoes or propagation effects [4]. Experiments at greater distances from the muzzle were conducted by Stoughton [18]: ballistic shockwaves were measured using calibrated pressure transducers at known locations, with measured bullet speeds and miss distances of up to 355 meters for 5.56 mm and 7.62 mm projectiles. Results indicate that ground interaction becomes a problem for miss distances of 30 meters or larger.
Another area of research is the signal processing of gunfire
acoustics. The focus is on the robust detection and length
estimation of small caliber acoustic shockwaves and
muzzle blasts. Possible techniques for classifying signals as either shockwaves or muzzle blasts include the short-time Fourier Transform (STFT), the Smoothed Pseudo Wigner-Ville distribution (SPWVD), and the discrete wavelet transform
(DWT). Joint time-frequency (JTF) spectrograms are used
to analyze the typical separation of the shockwave and
muzzle blast transients in both time and frequency. Mays
concludes that the DWT is the best method for classifying
signals as either shockwaves or muzzle blasts because it works
well and is less expensive to compute than the SPWVD [10].
The edges of the shockwave are typically well defined and
the shockwave length is directly related to the bullet
characteristics. A paper by Sadler [14] compares two shockwave
edge detection methods: a simple gradient-based detector,
and a multi-scale wavelet detector. It also demonstrates how
the length of the shockwave, as determined by the edge
detectors, can be used along with Whitham's equations [20] to
estimate the caliber of a projectile. Note that the available
computational performance on the sensor nodes, the limited
wireless bandwidth and real-time requirements render these
approaches infeasible on our platform.
A related topic is the research and development of
experimental and prototype shooter location systems. Researchers
at BBN have developed the Bullet Ears system [3] which has
the capability to be installed in a fixed position or worn by
soldiers. The fixed system has tetrahedron shaped
microphone arrays with 1.5 meter spacing. The overall system
consists of two to three of these arrays spaced 20 to 100
meters from each other. The soldier-worn system has 12
microphones as well as a GPS antenna and orientation
sensors mounted on a helmet. There is a low speed RF
connection from the helmet to the processing body. An extensive
test has been conducted to measure the performance of both
type of systems. The fixed systems performance was one
order of magnitude better in the angle calculations while their
range performance where matched. The angle accuracy of
the fixed system was dominantly less than one degree while
it was around five degrees for the helmet mounted one. The
range accuracy was around 5 percent for both of the
systems. The problem with this and similar centralized
systems is the need of the one or handful of microphone arrays
to be in line-of-sight of the shooter. A sensor networked
based solution has the advantage of widely distributed
sensing for better coverage, multipath effect compensation and
multiple simultaneous shot resolution [8]. This is especially
important for operation in acoustically reverberant urban
areas. Note that BBN"s current vehicle-mounted system
called BOOMERANG, a modified version of Bullet Ears,
is currently used in Iraq [1].
The company ShotSpotter specializes in law enforcement systems that report the location of gunfire to police within seconds. The goal of the system is significantly different from that of military systems. ShotSpotter reports 25 m typical accuracy, which is more than enough for police to respond. They also manufacture experimental soldier-wearable and UAV-mounted systems for military use [16], but no specifications or evaluation results are publicly available.
9. CONCLUSIONS
The main contribution of this work is twofold. First, the
performance of the overall distributed networked system is
excellent. Most noteworthy are the trajectory accuracy of
one degree, the correct caliber estimation rate of well over
90% and the close to 100% weapon classification rate for 4 of
the 6 weapons tested. The system proved to be very robust
when increasing the node location and orientation errors and
decreasing the number of available sensors all the way down
to a couple. The key factor behind this is the sensor fusion algorithm's ability to reject erroneous measurements. It is also worth mentioning that the results presented here correspond to the first and only test of the system beyond 100 m and with six different weapons. We believe that with the lessons learned in the test, a subsequent field experiment could have shown significantly improved results, especially in range estimation beyond 100 m and in weapon classification for the remaining two weapons, which were mistaken for each other most of the time during the test.
Second, the performance of the system when used in
standalone mode, that is, when single sensors alone provided
localization, was also very good. While the overall
localization rate of 42% per sensor for shots up to 130 m could be
improved, the bearing accuracy of less than a degree and
the average 5% range error are remarkable using the
handmade prototypes of the low-cost nodes. Note that 87% of
the shots were successfully localized by at least one of the
ten sensors utilized in standalone mode.
We believe that the technology is mature enough that
a next revision of the system could be a commercial one.
However, important aspects of the system would still need to be worked on. We have not addressed power management yet. A current node runs on 4 AA batteries for about 12 hours of continuous operation. A deployable version of the sensor node would need to be asleep during normal operation and only wake up when an interesting event occurs. An analog trigger circuit could solve this problem; however, the system would miss the first shot. Instead, the acoustic
channels would need to be sampled and stored in a circular
buffer. The rest of the board could be turned off. When
a trigger wakes up the board, the acoustic data would be
immediately available. Experiments with a previous
generation sensor board indicated that this could provide a 10x
increase in battery life. Other outstanding issues include
weatherproof packaging and ruggedization, as well as
integration with current military infrastructure.
10. REFERENCES
[1] BBN technologies website. http://www.bbn.com.
[2] E. Danicki. Acoustic sniper localization. Archives of
Acoustics, 30(2):233-245, 2005.
[3] G. L. Duckworth et al. Fixed and wearable acoustic
counter-sniper systems for law enforcement. In E. M.
Carapezza and D. B. Law, editors, Proc. SPIE Vol.
3577, p. 210-230, pages 210-230, Jan. 1999.
[4] K. Fansler. Description of muzzle blast by modified
scaling models. Shock and Vibration, 5(1):1-12, 1998.
[5] D. Gay, P. Levis, R. von Behren, M. Welsh,
E. Brewer, and D. Culler. The nesC language: a
holistic approach to networked embedded systems.
Proceedings of Programming Language Design and
Implementation (PLDI), June 2003.
[6] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, and
K. Pister. System architecture directions for networked
sensors. in Proc. of ASPLOS 2000, Nov. 2000.
[7] B. Kusý, G. Balogh, P. Völgyesi, J. Sallai, A. Nádas, A. Lédeczi, M. Maróti, and L. Meertens. Node-density independent localization. Information Processing in Sensor Networks (IPSN 06) SPOTS Track, Apr. 2006.
[8] A. Lédeczi, A. Nádas, P. Völgyesi, G. Balogh, B. Kusý, J. Sallai, G. Pap, S. Dóra, K. Molnár, M. Maróti, and G. Simon. Countersniper system for urban warfare. ACM Transactions on Sensor Networks, 1(1):153-177, Nov. 2005.
[9] M. Maróti. Directed flood-routing framework for
wireless sensor networks. In Proceedings of the 5th
ACM/IFIP/USENIX International Conference on
Middleware, pages 99-114, New York, NY, USA, 2004.
Springer-Verlag New York, Inc.
[10] B. Mays. Shockwave and muzzle blast classification
via joint time frequency and wavelet analysis.
Technical report, Army Research Lab Adelphi MD
20783-1197, Sept. 2001.
[11] TinyOS Hardware Platforms.
http://tinyos.net/scoop/special/hardware.
[12] Crossbow MICAz (MPR2400) Radio Module.
http://www.xbow.com/Products/productsdetails.
aspx?sid=101.
[13] PicoBlaze User Resources.
http://www.xilinx.com/ipcenter/processor_
central/picoblaze/picoblaze_user_resources.htm.
[14] B. M. Sadler, T. Pham, and L. C. Sadler. Optimal
and wavelet-based shock wave detection and
estimation. Acoustical Society of America Journal,
104:955-963, Aug. 1998.
[15] J. Sallai, B. Kusý, A. Lédeczi, and P. Dutta. On the
scalability of routing-integrated time synchronization.
3rd European Workshop on Wireless Sensor Networks
(EWSN 2006), Feb. 2006.
[16] ShotSpotter website. http:
//www.shotspotter.com/products/military.html.
[17] G. Simon, M. Maróti, A. Lédeczi, G. Balogh, B. Kusý, A. Nádas, G. Pap, J. Sallai, and K. Frampton. Sensor network-based countersniper system. In SenSys '04:
Proceedings of the 2nd international conference on
Embedded networked sensor systems, pages 1-12, New
York, NY, USA, 2004. ACM Press.
[18] R. Stoughton. Measurements of small-caliber ballistic
shock waves in air. Acoustical Society of America
Journal, 102:781-787, Aug. 1997.
[19] B. A. Weiss, C. Schlenoff, M. Shneier, and A. Virts.
Technology evaluations and performance metrics for
soldier-worn sensors for assist. In Performance Metrics
for Intelligent Systems Workshop, Aug. 2006.
[20] G. Whitham. Flow pattern of a supersonic projectile.
Communications on pure and applied mathematics,
5(3):301, 1952.
126 | 1degree trajectory precision;weapon type;weapon classification;range;sensorboard;trajectory;internode communication;datum fusion;acoustic source localization;sensor network;caliber estimation accuracy;helmetmounted microphone array;caliber;caliber estimation;wireless sensor network-based mobile countersniper system;self orientation |
train_C-66 | Heuristics-Based Scheduling of Composite Web Service Workloads | Web services can be aggregated to create composite workflows that provide streamlined functionality for human users or other systems. Although industry standards and recent research have sought to define best practices and to improve end-to-end workflow composition, one area that has not fully been explored is the scheduling of a workflow"s web service requests to actual service provisioning in a multi-tiered, multi-organisation environment. This issue is relevant to modern business scenarios where business processes within a workflow must complete within QoS-defined limits. Because these business processes are web service consumers, service requests must be mapped and scheduled across multiple web service providers, each with its own negotiated service level agreement. In this paper we provide heuristics for scheduling service requests from multiple business process workflows to web service providers such that a business value metric across all workflows is maximised. We show that a genetic search algorithm is appropriate to perform this scheduling, and through experimentation we show that our algorithm scales well up to a thousand workflows and produces better mappings than traditional approaches. | 1. INTRODUCTION
Web services can be composed into workflows to provide
streamlined end-to-end functionality for human users or other systems.
Although previous research efforts have looked at ways to
intelligently automate the composition of web services into workflows
(e.g. [1, 9]), an important remaining problem is the assignment of
web service requests to the underlying web service providers in a
multi-tiered runtime scenario within constraints. In this paper we
address this scheduling problem and examine means to manage a
large number of business process workflows in a scalable manner.
The problem of scheduling web service requests to providers is
relevant to modern business domains that depend on multi-tiered
service provisioning. Consider the example shown in Figure 1
that illustrates our problem space. Workflows comprise multiple
related business processes that are web service consumers; here we
assume that the workflows represent requested service from
customers or automated systems and that the workflow has already
been composed with an existing choreography toolkit. These
workflows are then submitted to a portal (not shown) that acts as a
scheduling agent between the web service consumers and the web
service providers.
In this example, a workflow could represent the actions needed to
instantiate a vacation itinerary, where one business process requests
booking an airline ticket, another business process requests a hotel
room, and so forth. Each of these requests target a particular service
type (e.g. airline reservations, hotel reservations, car reservations,
etc.), and for each service type, there are multiple instances of
service providers that publish a web service interface. An important
challenge is that the workflows must meet some quality-of-service
(QoS) metric, such as end-to-end completion time of all its business
processes, and that meeting or failing this goal results in the
assignment of a quantitative business value metric for the workflow;
intuitively, it is desired that all workflows meet their respective QoS
goals. We further leverage the notion that QoS service agreements
are generally agreed-upon between the web service providers and
the scheduling agent such that the providers advertise some level
of guaranteed QoS to the scheduler based upon runtime conditions
such as turnaround time and maximum available concurrency. The
resulting problem is then to schedule and assign the business
processes" requests for service types to one of the service providers
for that type. The scheduling must be done such that the aggregate
business value across all the workflows is maximised.
In Section 3 we state the scenario as a combinatorial problem and
utilise a genetic search algorithm [5] to find the best assignment
of web service requests to providers. This approach converges
towards an assignment that maximises the overall business value for
all the workflows.
In Section 4 we show through experimentation that this search
heuristic finds better assignments than other algorithms (greedy,
round-robin, and proportional). Further, this approach allows us to
scale the number of simultaneous workflows (up to one thousand
workflows in our experiments) and yet still find effective schedules.
2. RELATED WORK
In the context of service assignment and scheduling, [11] maps
web service calls to potential servers using linear programming, but
their work is concerned with mapping only single workflows; our
principal focus is on scalably scheduling multiple workflows (up
[Figure 1 diagram: workflows composed of business processes invoke service types, and each request is mapped to one of several service providers for that type (e.g. hotel reservations served by SuperHotels.com, HostileHostels.com or IncredibleInns.com; airline reservations by SkyHighAirlines.com or SuperCrazyFlights.com; car rentals by CarRentalService.com), with each provider advertising a QoS service agreement.]
Figure 1: An example scenario demonstrating the interaction between business processes in workflows and web service providers.
Each business process accesses a service type and is then mapped to a service provider for that type.
to one thousand as we show later) using different business
metrics and a search heuristic. [10] presents a dynamic
provisioning approach that uses both predictive and reactive techniques for
multi-tiered Internet application delivery. However, the
provisioning techniques do not consider the challenges faced when there are
alternative query execution plans and replicated data sources. [8]
presents a feedback-based scheduling mechanism for multi-tiered
systems with back-end databases, but unlike our work, it assumes
a tighter coupling between the various components of the system.
Our work also builds upon prior scheduling research. The classic
job-shop scheduling problem, shown to be NP-complete [4] [3], is
similar to ours in that tasks within a job must be scheduled onto
machinery (c.f. our scenario is that business processes within a
workflow must be scheduled onto web service providers). The salient
differences are that the machines can process only one job at a time
(we assume servers can multi-task but with degraded performance
and a maximum concurrency level), tasks within a job cannot
simultaneously run on different machines (we assume business
processes can be assigned to any available server), and the principal
metric of performance is the makespan, which is the time for the
last task among all the jobs to complete (and as we show later,
optimising on the makespan is insufficient for scheduling the business
processes, necessitating different metrics).
3. DESIGN
In this section we describe our model and discuss how we can
find scheduling assignments using a genetic search algorithm.
3.1 Model
We base our model on the simplified scenario shown in Figure
1. Specifically, we assume that users or automated systems request
the execution of a workflow. The workflows comprise business
processes, each of which makes one web service invocation to a
service type. Further, business processes have an ordering in the
workflow. The arrangement and execution of the business processes and
the data flow between them are all managed by a composition or
choreography tool (e.g. [1, 9]). Although composition languages
can use sophisticated flow-control mechanisms such as conditional
branches, for simplicity we assume the processes execute
sequentially in a given order.
This scenario can be naturally extended to more complex
relationships that can be expressed in BPEL [7], which defines how
business processes interact, messages are exchanged, activities are
ordered, and exceptions are handled. Due to space constraints,
we focus on the problem space presented here and will extend our
model to more advanced deployment scenarios in the future.
Each workflow has a QoS requirement to complete within a
specified number of time units (e.g. on the order of seconds, as
detailed in the Experiments section). Upon completion (or failure),
the workflow is assigned a business value. We extended this
approach further and considered different types of workflow
completion in order to model differentiated QoS levels that can be applied
by businesses (for example, to provide tiered customer service).
We say that a workflow is successful if it completes within its QoS
requirement, acceptable if it completes within a constant factor κ
of its QoS bound (in our experiments we chose κ=3), or failing
if it finishes beyond κ times its QoS bound. For each category,
a business value score is assigned to the workflow, with the
successful category assigned the highest positive score, followed by
acceptable and then failing. The business value point
distribution is non-uniform across workflows, further modelling cases
where some workflows are of higher priority than others.
Each service type is implemented by a number of different
service providers. We assume that the providers make service level
agreements (SLAs) to guarantee a level of performance defined by
the completion time for completing a web service invocation.
Although SLAs can be complex, in this paper we assume for
simplicity that the guarantees can take the form of a linear performance
degradation under load. This guarantee is defined by several
parameters: α is the expected completion time (for example, on the order of seconds) if the assigned workload of web service requests is less than or equal to β, the maximum concurrency; if the workload ω is higher than β, the expected completion time is α + γ(ω − β), where γ is a fractional coefficient. In our experiments we vary α, β, and γ with different distributions.
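The guarantee can be captured in a few lines; the sketch below only illustrates the piecewise-linear model described above (the parameter names follow the text, but the function itself is not taken from the simulator).

def expected_completion_time(alpha, beta, gamma, workload):
    """Piecewise-linear SLA model: alpha up to beta concurrent requests,
    then linear degradation with coefficient gamma beyond that point."""
    if workload <= beta:
        return alpha
    return alpha + gamma * (workload - beta)

# Example: alpha=4s, beta=6, gamma=0.5 gives 4s at workload 6 and 6s at workload 10.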
Ideally, all workflows would be able to finish within their QoS
limits and thus maximise the aggregate business value across all
workflows. However, because we model service providers with
degrading performance under load, not all workflows will achieve
their QoS limit: it may easily be the case that business processes
are assigned to providers who are overloaded and cannot complete
within the respective workflow"s QoS limit. The key research
problem, then, is to assign the business processes to the web service
providers with the goal of optimising on the aggregate business
value of all workflows.
Given that the scope of the optimisation is the entire set of
workflows, it may be that the best scheduling assignments may result in
some workflows having to fail in order for more workflows to
succeed. This intuitive observation suggests that traditional scheduling
approaches such as round-robin or proportional assignments will
not fare well, which is what we observe and discuss in Section 4.
On the other hand, an exhaustive search of all the possible assignments will find the best schedule, but the computational complexity is prohibitively high. Suppose there are W workflows with an average of B business processes per workflow. Further, in the worst case each business process requests one service type, for which there are P providers. There are thus W · P^B combinations to explore to find the optimal assignments of business processes to providers. Even for small configurations (e.g. W=10, B=5, P=10, giving 10 · 10^5 = 10^6 combinations), the computational time for exhaustive search is significant, and in our work we look to scale these parameters. In the next subsection, we discuss how a genetic search algorithm can be used to converge toward the optimum scheduling assignments.
3.2 Genetic algorithm
Given an exponential search space of business process
assignments to web service providers, the problem is to find the optimal
assignment that produces the overall highest aggregate business
value across all workflows. To explore the solution space, we use
a genetic algorithm (GA) search heuristic that simulates Darwinian
natural selection by having members of a population compete to
survive in order to pass their genetic chromosomes onto the next
generation; after successive generations, there is a tendency for the
chromosomes to converge toward the best combination [5] [6].
Although other search heuristics exist that can solve
optimization problems (e.g. simulated annealing or steepest-ascent
hillclimbing), the business process scheduling problem fits well with a
GA because potential solutions can be represented in a matrix form
and allows us to use prior research in effective GA chromosome
recombination to form new members of the population (e.g. [2]).
Service type:   0  1  2  3  4
Workflow 0:     1  2  0  2  1
Workflow 1:     0  1  0  1  0
Workflow 2:     1  2  0  0  1
Figure 2: An example chromosome representing a scheduling
assignment of (workflow,service type) → service provider. Each
row represents a workflow, and each column represents a
service type. For example, here there are 3 workflows (0 to 2) and
5 service types (0 to 4). In workflow 0, any request for service
type 3 goes to provider 2. Note that the service provider
identifier is within a range limited to its service type (i.e. its column),
so the 2 listed for service type 3 is a different server from
server 2 in other columns.
Chromosome representation of a solution. In Figure 2 we
show an example chromosome that encodes one scheduling
assignment. The representation is a 2-dimensional matrix that maps
{workflow, service type} to a service provider. For a business
process in workflow i and utilising service type j, the (i, j)th
entry in
the table is the identifier for the service provider to which the
business process is assigned. Note that the service provider identifier is
within a range limited to its service type.
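A chromosome can be held as a plain two-dimensional integer matrix. The following sketch is illustrative only (the actual simulator was written in C++); it builds a random chromosome given, for each service type, the number of available providers.

import random

def random_chromosome(num_workflows, providers_per_type):
    """Create a random (workflow x service type) -> provider assignment.

    providers_per_type[j] is the number of providers for service type j;
    entry (i, j) is a provider index in range(providers_per_type[j]).
    """
    return [[random.randrange(providers_per_type[j])
             for j in range(len(providers_per_type))]
            for _ in range(num_workflows)]

# Example matching Figure 2: 3 workflows, 5 service types.
chromosome = random_chromosome(3, [2, 3, 1, 3, 2])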
GA execution. A GA proceeds as follows. Initially a random
set of chromosomes is created for the population. The
chromosomes are evaluated (hashed) to some metric, and the best ones
are chosen to be parents. In our problem, the evaluation produces
the net business value across all workflows after executing all
business processes once they are assigned to their respective service
providers according to the mapping in the chromosome. The
parents recombine to produce children, simulating sexual crossover,
and occasionally a mutation may arise which produces new
characteristics that were not available in either parent. The principal idea
is that we would like the children to be different from the parents
(in order to explore more of the solution space) yet not too
different (in order to contain the portions of the chromosome that result
in good scheduling assignments). Note that finding the global
optimum is not guaranteed because the recombination and mutation
are stochastic.
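The generational loop might look like the sketch below, where evaluate, crossover and mutate stand for the operators described in this section; the population sizes mirror the defaults in Table 1, while the mutation rate is an assumed illustrative value.

import random

def run_ga(evaluate, crossover, mutate, initial_population,
           num_parents=20, num_children=80, generations=1000,
           mutation_rate=0.05):
    """Generic generational GA: keep the best chromosomes as parents,
    breed children via crossover, occasionally mutate, and repeat."""
    population = list(initial_population)
    for _ in range(generations):
        # Rank by net business value (higher is better) and keep the parents.
        population.sort(key=evaluate, reverse=True)
        parents = population[:num_parents]
        children = []
        while len(children) < num_children:
            mother, father = random.sample(parents, 2)
            child = crossover(mother, father)
            if random.random() < mutation_rate:
                child = mutate(child)
            children.append(child)
        population = parents + children
    return max(population, key=evaluate)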
GA recombination and mutation. As mentioned, the
chromosomes are 2-dimensional matrices that represent scheduling
assignments. To simulate sexual recombination of two chromosomes to
produce a new child chromosome, we applied a one-point crossover
scheme twice (once along each dimension). The crossover is best
explained by analogy to Cartesian space as follows. A random
point is chosen in the matrix to be coordinate (0, 0). Matrix
elements from quadrants II and IV from the first parent and elements
from quadrants I and III from the second parent are used to create
the new child. This approach follows GA best practices by keeping
contiguous chromosome segments together as they are transmitted
from parent to child.
The uni-chromosome mutation scheme randomly changes one
of the service provider assignments to another provider within the
available range. Other recombination and mutation schemes are an
area of research in the GA community, and we look to explore new
operators in future work.
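One way to realise the quadrant-based crossover and the uni-chromosome mutation is sketched below. The random pivot plays the role of the (0, 0) coordinate and quadrant membership reduces to comparing row and column indices against it; this interpretation and the helper names are ours, for illustration only.

import copy
import random

def quadrant_crossover(parent_a, parent_b):
    """Two-dimensional one-point crossover: pick a pivot cell and take two
    opposite quadrants from each parent."""
    rows, cols = len(parent_a), len(parent_a[0])
    pr, pc = random.randrange(rows), random.randrange(cols)
    child = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # Cells whose row and column fall on the same side of the pivot
            # come from parent A; the other two quadrants come from parent B.
            if (i < pr) == (j < pc):
                child[i][j] = parent_a[i][j]
            else:
                child[i][j] = parent_b[i][j]
    return child

def mutate(chromosome, providers_per_type):
    """Uni-chromosome mutation: reassign one random cell to another provider
    that is legal for that service type."""
    mutant = copy.deepcopy(chromosome)
    i = random.randrange(len(mutant))
    j = random.randrange(len(mutant[0]))
    mutant[i][j] = random.randrange(providers_per_type[j])
    return mutant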
GA evaluation function. An important GA component is the
evaluation function. Given a particular chromosome representing
one scheduling mapping, the function deterministically calculates
the net business value across all workloads. The business
processes in each workload are assigned to service providers, and each
provider"s completion time is calculated based on the service
agreement guarantee using the parameters mentioned in Section 3.1,
namely the unloaded completion time α, the maximum concurrency β, and a coefficient γ that controls the linear performance
degradation under heavy load. Note that the evaluation function
can be easily replaced if desired; for example, other evaluation
functions can model different service provider guarantees or
parallel workflows.
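Putting the pieces together, an evaluation function in the spirit of the text might proceed as follows: tally each provider's assigned workload, derive its completion time from the SLA model, classify each workflow against its QoS bound, and sum the business values. The data layout and the sequential-completion assumption are ours, chosen only to make the sketch concrete.

from collections import defaultdict

def evaluate(chromosome, workflows, providers, kappa=3.0):
    """Net business value of one scheduling assignment.

    workflows[i] = {'processes': [service_type_index, ...],
                    'qos_limit': float,
                    'values': (success, acceptable, failed)}
    providers[(service_type, provider_id)] = (alpha, beta, gamma)
    """
    # Count how many business processes land on each provider.
    load = defaultdict(int)
    for i, wf in enumerate(workflows):
        for stype in wf['processes']:
            load[(stype, chromosome[i][stype])] += 1

    def completion(stype, pid):
        alpha, beta, gamma = providers[(stype, pid)]
        w = load[(stype, pid)]
        return alpha if w <= beta else alpha + gamma * (w - beta)

    total = 0.0
    for i, wf in enumerate(workflows):
        # Processes execute sequentially, so workflow time is the sum.
        elapsed = sum(completion(stype, chromosome[i][stype])
                      for stype in wf['processes'])
        success, acceptable, failed = wf['values']
        if elapsed <= wf['qos_limit']:
            total += success
        elif elapsed <= kappa * wf['qos_limit']:
            total += acceptable
        else:
            total += failed  # failed scores are negative in the experiments
    return total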
4. EXPERIMENTS AND RESULTS
In this section we show the benefit of using our GA-based
scheduler. Because we wanted to scale the scenarios up to a large number
of workflows (up to 1000 in our experiments), we implemented a
simulation program that allowed us to vary parameters and to
measure the results with different metrics. The simulator was written
in standard C++ and was run on a Linux (Fedora Core) desktop
computer running at 2.8 GHz with 1GB of RAM.
We compared our algorithm against the following alternative candidates (sketched in code after the list):
• A well-known round-robin algorithm that assigns each
business process in circular fashion to the service providers for a
particular service type. This approach provides the simplest
scheme for load-balancing.
• A random-proportional algorithm that proportionally assigns
business processes to the service providers; that is, for a
given service type, the service providers are ranked by their
guaranteed completion time, and business processes are
assigned proportionally to the providers based on their
completion time. (We also tried a proportionality scheme based
on both the completion times and maximum concurrency but
attained the same results, so only the former scheme"s results
are shown here.)
• A strawman greedy algorithm that always assigns business
processes to the service provider that has the fastest
guaranteed completion time. This algorithm represents a naive
approach based on greedy, local observations of each workflow
without taking into consideration all workflows.
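For reference, the baselines can be sketched as follows; the provider data layout, the ranking by advertised completion time and the tie-breaking details are our assumptions.

import itertools
import random

def round_robin_scheduler(requests, providers_by_type):
    """Assign each business process request to providers in circular order.

    requests is a list of service-type identifiers, one per business process;
    providers_by_type[stype] is the list of providers for that service type.
    """
    counters = {stype: itertools.cycle(range(len(plist)))
                for stype, plist in providers_by_type.items()}
    return [next(counters[stype]) for stype in requests]

def random_proportional_scheduler(requests, providers_by_type):
    """Assign requests with probability proportional to advertised speed
    (a smaller guaranteed completion time alpha gets a larger share)."""
    assignments = []
    for stype in requests:
        alphas = [p['alpha'] for p in providers_by_type[stype]]
        weights = [1.0 / a for a in alphas]
        assignments.append(random.choices(range(len(alphas)),
                                          weights=weights, k=1)[0])
    return assignments

def greedy_scheduler(requests, providers_by_type):
    """Always pick the provider with the fastest advertised completion time."""
    return [min(range(len(providers_by_type[stype])),
                key=lambda k, s=stype: providers_by_type[s][k]['alpha'])
            for stype in requests]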
In the experiments that follow, all results were averaged across
20 trials, and to help normalise the effects of randomisation used
during the GA, each trial started by reading in pre-initialised data
from disk. In Table 1 we list our experimental parameters.
In Figure 3 we show the results of running our GA against the
three candidate alternatives. The x-axis shows the number for
workflows scaled up to 1000, and the y-axis shows the aggregate
business value for all workflows. As can be seen, the GA consistently
produces the highest business value even as the number of
workflows grows; at 1000 workflows, the GA produces a 115%
improvement over the next-best alternative. (Note that although we
are optimising against the business value metric we defined earlier,
genetic algorithms are able to converge towards the optimal value
of any metric, as long as the evaluation function can consistently
measure a chromosome"s value with that metric.)
As expected, the greedy algorithm performs very poorly because
it does the worst job at balancing load: all business processes for
a given service type are assigned to only one server (the one
advertised to have the fastest completion time), and as more
business processes arrive, the provider"s performance degrades linearly.
The round-robin scheme is initially outperformed by the random-proportional scheme up to around 120 workflows (as shown in the magnified graph of Figure 4), but as the number of workflows increases, the round-robin scheme consistently wins over random-proportional. The reason is that although the random-proportional
scheme assigns business processes to providers proportionally
according to the advertised completion times (which is a measure of
the power of the service provider), even the best providers will
eventually reach a real-world maximum concurrency for the large
[Figure 3 plot: aggregate business value across all workflows (y-axis, -2000 to 7000) versus total number of workflows (x-axis, 0 to 1000) for the genetic algorithm, round robin, random proportional and greedy schedulers.]
Figure 3: Net business value scores of different scheduling algorithms.
[Figure 4 plot: the same business value curves magnified for 0 to 200 workflows.]
Figure 4: Magnification of the left-most region in Figure 3.
number of workflows that we are considering. For a very large
number of workflows, the round-robin scheme is able to better
balance the load across all service providers.
To better understand the behaviour resulting from the scheduling
assignments, we show the workflow completion results in Figures
5, 6, and 7 for 100, 500, and 900 workflows, respectively. These
figures show the percentage of workflows that are successful (can
complete with their QoS limit), acceptable (can complete within
κ=3 times their QoS limit), and failed (cannot complete within κ=3
times their QoS limit). The GA consistently produces the highest
percentage of successful workflows (resulting in higher business
values for the aggregate set of workflows). Further, the round-robin
scheme produces better results than the random-proportional for a
large number of workflows but does not perform as well as the GA.
In Figure 8 we graph the makespan resulting from the same
experiments above. Makespan is a traditional metric from the job
scheduling community measuring elapsed time for the last job to
complete. While useful, it does not capture the high-level business
value metric that we are optimising against. Indeed, the makespan
is oblivious to the fact that we provide multiple levels of
completion (successful, acceptable, and failed) and assign business value
scores accordingly. For completeness, we note that the GA
provides the fastest makespan, but it is matched by the round robin
algorithm. The GA produces better business values (as shown in
Figure 3) because it is able to search the solution space to find
better mappings that produce more successful workflows (as shown in
Figures 5 to 7).
We also looked at the effect of the scheduling algorithms on
balancing the load. Figure 9 shows the percentage of service
providers that were accessed while the workflows ran. As expected,
the greedy algorithm always hits one service provider; on the other
hand, the round-robin algorithm is the fastest to spread the business
Experimental parameter Comment
Workflows 5 to 1000
Business processes per workflow uniform random: 1 - 10
Service types 10
Service providers per service type uniform random: 1 - 10
Workflow QoS goal uniform random: 10-30 seconds
Service provider completion time (α) uniform random: 1 - 12 seconds
Service provider maximum concurrency (β) uniform random: 1 - 12
Service provider degradation coefficient (γ) uniform random: 0.1 - 0.9
Business value for successful workflows uniform random: 10 - 50 points
Business value for acceptable workflows uniform random: 0 - 10 points
Business value for failed workflows uniform random: -10 - 0 points
GA: number of parents 20
GA: number of children 80
GA: number of generations 1000
Table 1: Experimental parameters
[Figures 5-7 bar charts: percentage of all workflows that were successful (completed within QoS), acceptable (completed but not within QoS), or failed, under the round robin, random proportional, greedy, and genetic algorithm schedulers.]
Figure 5: Workflow behaviour for 100 workflows.
Figure 6: Workflow behaviour for 500 workflows.
Figure 7: Workflow behaviour for 900 workflows.
[Figure 8 plot: makespan in seconds (y-axis, 0 to 300) versus number of workflows (x-axis, 0 to 1000) for the four scheduling algorithms.]
Figure 8: Maximum completion time for all workflows. This value
is the makespan metric used in traditional scheduling research.
Although useful, the makespan does not take into consideration the
business value scoring in our problem domain.
processes. Figure 10 shows the percentage of accessed service providers
(that is, the percentage of service providers represented in Figure
9) that had more assigned business processes than their advertised
maximum concurrency. For example, in the greedy algorithm only
one service provider is utilised, and this one provider quickly
becomes saturated. On the other hand, the random-proportional
algorithm uses many service providers, but because business processes
are proportionally assigned with more assignments going to the
better providers, there is a tendency for a smaller percentage of
providers to become saturated.
For completeness, we show the performance of the genetic
algorithm itself in Figure 11. The algorithm scales linearly with an
increasing number of workflows. We note that the round-robin,
random-proportional, and greedy algorithms all finished within 1
second even for the largest workflow configuration. However, we
feel that the benefit of finding much higher business value scores
justifies the running time of the GA; further we would expect that
the running time will improve with both software tuning as well as
with a computer faster than our off-the-shelf PC.
5. CONCLUSION
Business processes within workflows can be orchestrated to
access web services. In this paper we looked at multi-tiered service
provisioning where web service requests to service types can be
mapped to different service providers. The resulting problem is
that in order to support a very large number of workflows, the
assignment of business process to web service provider must be
intelligent. We used a business value metric to measure the
[Figure 9 plot: percentage of all service providers utilised (y-axis, 0 to 1) versus number of workflows (x-axis, 0 to 1000) for the four scheduling algorithms.]
Figure 9: The percentage of service providers utilized during
workload executions. The Greedy algorithm always hits the one service
provider, while the Round Robin algorithm spreads requests evenly
across the providers.
[Figure 10 plot: percentage of all service providers saturated (y-axis, 0 to 1) versus number of workflows (x-axis, 0 to 1000) for the four scheduling algorithms.]
Figure 10: The percentage of service providers that are saturated
among those providers who were utilized (that is, percentage of the
service providers represented in Figure 9). A saturated service provider
is one whose workload is greater than its advertised maximum
concurrency.
[Figure 11 plot: running time of the genetic algorithm in seconds (y-axis, 0 to 25) versus total number of workflows (x-axis, 0 to 1000).]
Figure 11: Running time of the genetic algorithm.
behaviour of workflows meeting or failing QoS values, and we
optimised our scheduling to maximise the aggregate business value
across all workflows. Since the solution space of scheduler
mappings is exponential, we used a genetic search algorithm to search
the space and converge toward the best schedule. With a default
configuration for all parameters and using our business value
scoring, the GA produced up to 115% business value improvement over
the next best algorithm. Finally, because a genetic algorithm will
converge towards the optimal value using any metric (even other
than the business value metric we used), we believe our approach
has strong potential for continuing work.
In future work, we look to acquire real-world traces of web
service instances in order to get better estimates of service agreement
guarantees, although we expect that such guarantees between the
providers and their consumers are not generally available to the
public. We will also look at other QoS metrics such as CPU and
I/O usage. For example, we can analyse transfer costs with
varying bandwidth, latency, data size, and data distribution. Further,
we hope to improve our genetic algorithm and compare it to more
scheduler alternatives. Finally, since our work is complementary
to existing work in web services choreography (because we rely on
pre-configured workflows), we look to integrate our approach with
available web service workflow systems expressed in BPEL.
6. REFERENCES
[1] A. Ankolekar, et al. DAML-S: Semantic Markup For Web
Services, In Proc. of the Int"l Semantic Web Working
Symposium, 2001.
[2] L. Davis. Job Shop Scheduling with Genetic Algorithms,
In Proc. of the Int"l Conference on Genetic Algorithms, 1985.
[3] H.-L. Fang, P. Ross, and D. Corne. A Promising Genetic
Algorithm Approach to Job-Shop Scheduling, Rescheduling,
and Open-Shop Scheduling Problems , In Proc. on the 5th
Int"l Conference on Genetic Algorithms, 1993.
[4] M. Gary and D. Johnson. Computers and Intractability: A
Guide to the Theory of NP-Completeness, Freeman, 1979.
[5] J. Holland. Adaptation in Natural and Artificial Systems:
An Introductory Analysis with Applications to Biology,
Control, and Artificial Intelligence, MIT Press, 1992.
[6] D. Goldberg. Genetic Algorithms in Search, Optimization
and Machine Learning, Kluwer Academic Publishers, 1989.
[7] Business Processes in a Web Services World,
www-128.ibm.com/developerworks/
webservices/library/ws-bpelwp/.
[8] G. Soundararajan, K. Manassiev, J. Chen, A. Goel, and C.
Amza. Back-end Databases in Shared Dynamic Content
Server Clusters, In Proc. of the IEEE Int"l Conference on
Autonomic Computing, 2005.
[9] B. Srivastava and J. Koehler. Web Service Composition
Current Solutions and Open Problems, ICAP, 2003.
[10] B. Urgaonkar, P. Shenoy, A. Chandra, and P. Goyal.
Dynamic Provisioning of Multi-Tier Internet Applications,
In Proc. of the IEEE Int"l Conference on Autonomic
Computing, 2005.
[11] L. Zeng, B. Benatallah, M. Dumas, J. Kalagnanam, and Q.
Sheng. Quality Driven Web Services Composition, In
Proc. of the WWW Conference, 2003.
35 | qo;service request;scheduling service;qos-defined limit;multi-tiered system;end-to-end workflow composition;workflows;multi-organisation environment;business process workflow;web service;streamlined functionality;business value metric;scheduling agent;heuristic;schedule |
train_C-67 | A Holistic Approach to High-Performance Computing: Xgrid Experience | The Ringling School of Art and Design is a fully accredited fouryear college of visual arts and design. With a student to computer ratio of better than 2-to-1, the Ringling School has achieved national recognition for its large-scale integration of technology into collegiate visual art and design education. We have found that Mac OS X is the best operating system to train future artists and designers. Moreover, we can now buy Macs to run high-end graphics, nonlinear video editing, animation, multimedia, web production, and digital video applications rather than expensive UNIX workstations. As visual artists cross from paint on canvas to creating in the digital realm, the demand for a highperformance computing environment grows. In our public computer laboratories, students use the computers most often during the workday; at night and on weekends the computers see only light use. In order to harness the lost processing time for tasks such as video rendering, we are testing Xgrid, a suite of Mac OS X applications recently developed by Apple for parallel and distributed high-performance computing. As with any new technology deployment, IT managers need to consider a number of factors as they assess, plan, and implement Xgrid. Therefore, we would like to share valuable information we learned from our implementation of an Xgrid environment with our colleagues. In our report, we will address issues such as assessing the needs for grid computing, potential applications, management tools, security, authentication, integration into existing infrastructure, application support, user training, and user support. Furthermore, we will discuss the issues that arose and the lessons learned during and after the implementation process. | 1. INTRODUCTION
Grid computing does not have a single, universally accepted definition. The technology behind the grid computing model is not new. Its roots lie in early distributed computing models that date back to the early 1980s, in which scientists harnessed the computing power of idle workstations to let compute-intensive applications run on multiple workstations, dramatically shortening processing times. Although numerous distributed computing models were available for discipline-specific scientific applications, only recently have the tools become available to use general-purpose applications on a grid. Consequently, the grid computing model is gaining popularity and has become a showpiece of "utility computing". Since in the IT industry various computing models are used interchangeably with grid computing, we first sort out the similarities and differences between these computing models so that grid computing can be placed in perspective.
1.1 Clustering
A cluster is a group of machines in a fixed configuration united to operate and be managed as a single entity to increase robustness and performance. The cluster appears as a single high-speed system or a single highly available system. In this model, resources cannot enter and leave the group as necessary. There are at least two types of clusters: parallel clusters and high-availability clusters. Clustered machines are generally in spatial proximity, such as in the same server room, and dedicated solely to their task.
In a high-availability cluster, each machine provides the same
service. If one machine fails, another seamlessly takes over its
workload. For example, each computer could be a web server for
a web site. Should one web server "die," another provides the
service, so that the web site rarely, if ever, goes down.
A parallel cluster is a type of supercomputer. Problems are split
into many parts, and individual cluster members are given part of
the problem to solve. An example of a parallel cluster is
composed of Apple Power Mac G5 computers at Virginia Tech
University [1].
1.2 Distributed Computing
Distributed computing spatially expands network services so that
the components providing the services are separated. The major
objective of this computing model is to consolidate processing
power over a network. A simple example is spreading services
such as file and print serving, web serving, and data storage across
multiple machines rather than a single machine handling all the
tasks. Distributed computing can also be more fine-grained, where
even a single application is broken into parts and each part located
on different machines: a word processor on one server, a spell
checker on a second server, etc.
1.3 Utility Computing
Literally, utility computing resembles common utilities such as
telephone or electric service. A service provider makes computing
resources and infrastructure management available to a customer
as needed, and charges for usage rather than a flat rate. The
important thing to note is that resources are only used as needed,
and not dedicated to a single customer.
1.4 Grid Computing
Grid computing contains aspects of clusters, distributed computing, and utility computing. In the most basic sense, a grid turns a group of heterogeneous systems into a centrally managed but flexible computing environment that can work on tasks too time-intensive for the individual systems. The grid members are
not necessarily in proximity, but must merely be accessible over a
network; the grid can access computers on a LAN, WAN, or
anywhere in the world via the Internet. In addition, the computers
comprising the grid need not be dedicated to the grid; rather, they
can function as normal workstations, and then advertise their
availability to the grid when not in use.
The last characteristic is the most fundamental to the grid
described in this paper. A well-known example of such an ad
hoc grid is the SETI@home project [2] of the University of
California at Berkeley, which allows any person in the world with
a computer and an Internet connection to donate unused processor
time for analyzing radio telescope data.
1.5 Comparing the Grid and Cluster
A computer grid expands the capabilities of the cluster by loosening its spatial bounds, so that any computer accessible through the network gains the potential to augment the grid. A fundamental grid feature is that it scales well. The processing power of any machine added to the grid is immediately available for solving problems. In addition, the machines on the grid can be general-purpose workstations, which keeps down the cost of expanding the grid.
2. ASSESSING THE NEED FOR GRID
COMPUTING
Effective use of a grid requires a computation that can be divided
into independent (i.e., parallel) tasks. The results of each task
cannot depend on the results of any other task, and so the
members of the grid can solve the tasks in parallel. Once the tasks
have been completed, the results can be assembled into the
solution. Examples of parallelizable computations are the
Mandelbrot set of fractals, the Monte Carlo calculations used in
disciplines such as Solid State Physics, and the individual frames
of a rendered animation. This paper is concerned with the last
example.
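As a minimal illustration of why frame rendering parallelises so naturally, the sketch below splits an animation into independent frame-range tasks that could each be handed to a different grid agent; the task description format is our own invention, not Xgrid's.

def partition_frames(first_frame, last_frame, frames_per_task):
    """Split an animation into independent frame-range tasks.

    Each task depends only on the scene file and its own frame range,
    so tasks can run on any agent in any order.
    """
    tasks = []
    start = first_frame
    while start <= last_frame:
        end = min(start + frames_per_task - 1, last_frame)
        tasks.append({'start': start, 'end': end})
        start = end + 1
    return tasks

# Example: a 300-frame animation split into 30 ten-frame tasks.
tasks = partition_frames(1, 300, 10)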
2.1 Applications Appropriate for Grid
Computing
The applications used in grid computing must either be
specifically designed for grid use, or scriptable in such a way that
they can receive data from the grid, process the data, and then
return results. In other words, the best candidates for grid
computing are applications that run the same or very similar
computations on a large number of pieces of data without any dependencies on previously calculated results. Applications heavily dependent on data handling rather than processing power are generally more suitable to run in a traditional environment than on a grid platform. Of course, the applications must also run
on the computing platform that hosts the grid. Our interest is in
using the Alias Maya application [3] with Apple"s Xgrid [4] on
Mac OS X.
Commercial applications usually have strict license requirements.
This is an important concern if we install a commercial
application such as Maya on all members of our grid. By its
nature, the size of the grid may change as the number of idle
computers changes. How many licenses will be required? Our
resolution of this issue will be discussed in a later section.
2.2 Integration into the Existing
Infrastructure
The grid requires a controller that recognizes when grid members
are available, and parses out job to available members. The
controller must be able to see members on the network. This does
not require that members be on the same subnet as the controller,
but if they are not, any intervening firewalls and routers must be
configured to allow grid traffic.
3. XGRID
Xgrid is Apple"s grid implementation. It was inspired by Zilla, a
desktop clustering application developed by NeXT and acquired
by Apple. In this report we describe the Xgrid Technology
Preview 2, a free download that requires Mac OS X 10.2.8 or later
and a minimum 128 MB RAM [5].
Xgrid leverages Apple's traditional ease of use and configuration.
If the grid members are on the same subnet, by default Xgrid
automatically discovers available resources through Rendezvous
[6]. Tasks are submitted to the grid through a GUI interface or by
the command line. A System Preference Pane controls when each
computer is available to the grid.
It may be best to view Xgrid as a facilitator. The Xgrid
architecture handles software and data distribution, job execution,
and result aggregation. However, Xgrid does not perform the
actual calculations.
3.1 Xgrid Components
Xgrid has three major components: the client, controller, and the
agent. Each component is included in the default installation, and
any computer can easily be configured to assume any role. In
fact, for testing purposes, a computer can simultaneously assume
all roles in local mode. The more typical production use is
called cluster mode.
The client submits jobs to the controller through the Xgrid GUI or
command line. The client defines how the job will be broken into
tasks for the grid. If any files or executables must be sent as part
of a job, they must reside on the client or at a location accessible
to the client. When a job is complete, the client can retrieve the
results from the controller. A client can only connect to a single
controller at a time.
The controller runs the GridServer process. It queues tasks
received from clients, distributes those tasks to the agents, and
handles failover if an agent cannot complete a task. In Xgrid
Technology Preview 2, a controller can handle a maximum of
10,000 agent connections. Only one controller can exist per
logical grid.
The agents run the GridAgent process. When the GridAgent
process starts, it registers with a controller; an agent can only be
connected to one controller at a time. Agents receive tasks from
their controller, perform the specified computations, and then
send the results back to the controller. An agent can be configured
to always accept tasks, or to just accept them when the computer
is not otherwise busy.
3.2 Security and Authentication
By default, Xgrid requires two passwords. First, a client needs a
password to access a controller. Second, the controller needs a
password to access an agent. Either password requirement can be
disabled. Xgrid uses a two-way random mutual authentication protocol with MD5 hashes. At this time, data encryption is only used for passwords.
As mentioned earlier, an agent registers with a controller when the
GridAgent process starts. There is no native method for the
controller to reject agents, and so it must accept any agent that
registers. This means that any agent could submit a job that
consumes excessive processor and disk space on the agents. Of
course, since Mac OS X is a BSD-based operating system, the
controller could employ Unix methods of restricting network
connections from agents.
The Xgrid daemons run as the user nobody, which means the
daemons can read, write, or execute any file according to world
permissions. Thus, Xgrid jobs can execute many commands and
write to /tmp and /Volumes. In general, this is not a major security risk, but it does require a level of trust between all members of the grid.
3.3 Using Xgrid
3.3.1 Installation
Basic Xgrid installation and configuration is described both in
Apple documentation [5] and online at the University of Utah web
site [8]. The installation is straightforward and offers no options
for customization. This means that every computer on which
Xgrid is installed has the potential to be a client, controller, or
agent.
3.3.2 Agent and Controller Configuration
The agents and controllers can be configured through the Xgrid
Preference Pane in the System Preferences or XML files in
/Library/Preferences. Here the GridServer and GridAgent
processes are started, passwords set, and the controller discovery
method used by agents is selected. By default, agents use
Rendezvous to find a controller, although the agents can also be
configured to look for a specific host.
The Xgrid Preference Pane also sets whether the Agents will
always accept jobs, or only accept jobs when idle. In Xgrid terms,
idle either means that the Xgrid screen saver has activated, or the
mouse and keyboard have not been used for more than 15
minutes. Even if the agent is configured to always accept tasks, if
the computer is being used these tasks will run in the background
at a low priority.
However, if an agent only accepts jobs when idle, any unfinished
task being performed when the computer ceases being idle is
immediately stopped and any intermediate results are lost. Then the
controller assigns the task to another available member of the
grid.
Advertising the controller via Rendezvous can be disabled by
editing /Library/Preferences/com.apple.xgrid.controller.plist. This,
however, will not prevent an agent from connecting to the
controller by hostname.
3.3.3 Sending Jobs from an Xgrid Client
The client sends jobs to the controller either through the Xgrid
GUI or the command line. The Xgrid GUI submits jobs via small
applications called plug-ins. Sample plug-ins are provided by
Apple, but they are only useful for simple testing or as examples of
how to create a custom plug-in. If we are to employ Xgrid for
useful work, we will require a custom plug-in.
James Reynolds details the creation of custom plug-ins on the
University of Utah Mac OS web site [8]. Xgrid stores plug-ins in
/Library/Xgrid/Plug-ins or ~/Library/Xgrid/Plug-ins, depending
on whether the plug-in was installed with Xgrid or created by a
user.
The core plug-in parameter is the command, which includes the
executable the agents will run. Another important parameter is the
working directory. This directory contains necessary files that
are not installed on the agents or available to them over a network.
The working directory will always be copied to each agent, so it is
best to keep this directory small. If the files are installed on the
agents or available over a network, the working directory
parameter is not needed.
The command line allows the options available with the GUI
plug-in, but it can be slightly more cumbersome. However, the
command line probably will be the method of choice for serious
work. The command arguments must be included in a script
unless they are very basic. This can be a shell, Perl, or Python
script, as long as the agent can interpret it.
3.3.4 Running the Xgrid Job
When the Xgrid job is started, the command tells the controller
how to break the job into tasks for the agents. Then the command
is tarred and gzipped and sent to each agent; if there is a working
directory, this is also tarred and gzipped and sent to the agents.
The agents extract these files into /tmp and run the task. Recall
that since the GridAgent process runs as the user nobody,
everything associated with the command must be available to
nobody.
Executables called by the command should be installed on the
agents unless they are very simple. If the executable depends on
libraries or other files, it may not function properly if transferred,
even if the dependent files are referenced in the working directory.
When the task is complete, the results are available to the client.
In principle, the results are sent to the client, but whether this
actually happens depends on the command. If the results are not
sent to the client, they will be in /tmp on each agent. When
available, a better solution is to direct the results to a network
volume accessible to the client.
3.4 Limitations and Idiosyncrasies
Since Xgrid is only in its second preview release, there are some
rough edges and limitations. Apple acknowledges some
limitations [7]. For example, the controller cannot determine
whether an agent is trustworthy and the controller always copies
the command and working directory to the agent without checking
to see if these exist on the agent.
Other limitations are likely just a by-product of an unfinished
work. Neither the client nor controller can specify which agents
will receive the tasks, which is particularly important if the agents
contain a variety of processor types and speeds and the user wants
to optimize the calculations. At this time, the best solution to this
problem may be to divide the computers into multiple logical
grids. There is also no standard way to monitor the progress of a
running job on each agent. The Xgrid GUI and command line
indicate which agents are working on tasks, but give no
indication of progress.
Finally, at this time only Mac OS X clients can submit jobs to the
grid. The framework exists to allow third parties to write plug-ins
for other Unix flavors, but Apple has not created them.
4. XGRID IMPLEMENTATION
Our goal is an Xgrid render farm for Alias Maya. The Ringling
School has about 400 Apple Power Mac G4's and G5's in 13
computer labs. The computers range from 733 MHz single-processor
G4's and 500 MHz and 1 GHz dual-processor G4's to
1.8 GHz dual-processor G5's. All of these computers are lightly
used in the evening and on weekends and represent an enormous
processing resource for our student rendering projects.
4.1 Software Installation
During our Xgrid testing, we loaded software on each computer
multiple times, including the operating systems. We saved time by
facilitating our installations with the remote administration
daemon (radmind) software developed at the University of
Michigan [9], [10].
Everything we installed for testing was first created as a radmind
base load or overload. Thus, Mac OS X, Mac OS X Developer
Tools, Xgrid, POV-Ray [11], and Alias Maya were stored on a
radmind server and then installed on our test computers when
needed.
4.2 Initial Testing
We used six 1.8 GHz dual-processor Apple Power Mac G5's for
our Xgrid tests. Each computer ran Mac OS X 10.3.3 and
contained 1 GB RAM. As shown in Figure 1, one computer
served as both client and controller, while the other five acted as
agents.
Before attempting Maya rendering with Xgrid, we performed
basic calculations to cement our understanding of Xgrid. Apple's
Xgrid documentation is sparse, so finding helpful web sites
facilitated our learning.
We first ran the Mandelbrot set plug-in provided by Apple, which
allowed us to test the basic functionality of our grid. Then we
performed benchmark rendering with the open source application
POV-Ray, as described by Daniel Côté [12] and
James Reynolds [8]. Our results showed that one dual-processor
G5 rendering the benchmark POV-Ray image took 104 minutes.
Breaking the image into three equal parts and using Xgrid to send
the parts to three agents required 47 minutes. However, two
agents finished their rendering in 30 minutes, while the third
agent used 47 minutes; the entire render was only as fast as the
slowest agent.
These results gave us two important pieces of information. First,
the much longer rendering time for one of the tasks indicated that
we should be careful how we split jobs into tasks for the agents.
All portions of the rendering will not take equal amounts of time,
even if the pixel size is the same. Second, since POV-Ray cannot
take advantage of both processors in a G5, neither can an Xgrid
task running POV-Ray. Alias Maya does not have this limitation.
4.3 Rendering with Alias Maya 6
We first installed Alias Maya 6 for Mac OS X on the
client/controller and each agent. Maya 6 requires licenses for use
as a workstation application. However, if it is just used for
rendering from the command line or a script, no license is needed.
We thus created a minimal installation of Maya as a radmind
overload. The application was installed in a hidden directory
inside /Applications. This was done so that normal users of the
workstations would not find and attempt to run Maya, which
would fail because these installations are not licensed for such
use.
In addition, Maya requires the existence of a directory ending in
the path /maya. The directory must be readable and writable by
the Maya user. For a user running Maya on a Mac OS X
workstation, the path would usually be ~/Documents/maya.
Unless otherwise specified, this directory will be the default
location for Maya data and output files.
Figure 1. Xgrid test grid: a client/controller, five agents, and a shared network volume holding job data and results.
If the directory does not
exist, Maya will try to create it, even if the user specifies that the
data and output files exist in other locations.
However, Xgrid runs as the user nobody, which does not have a
home directory. Maya is unable to create the needed directory,
and looks instead for /Alias/maya. This directory also does not
exist, and the user nobody has insufficient rights to create it. Our
solution was to manually create /Alias/maya and give the user
nobody read and write permissions.
We also created a network volume for storage of both the
rendering data and the resulting rendered frames. This avoided
sending the Maya files and associated textures to each agent as
part of a working directory. Such a solution worked well for us
because our computers are geographically close on a LAN; if
greater distance had separated the agents from the
client/controller, specifying a working directory may have been a
better solution.
Finally, we created a custom GUI plug-in for Xgrid. The plug-in
command calls a Perl script with three arguments. Two arguments
specify the beginning and end frames of the render and the third
argument the number of frames in each job (which we call the
cluster size). The script then calculates the total number of jobs
and parses them out to the agents. For example, if we begin at
frame 201 and end at frame 225, with 5 frames for each job, the
plug-in will create 5 jobs and send them out to the agents.
Once the jobs are sent to the agents, the script executes the
/usr/sbin/Render command on each agent with the parameters
appropriate for the particular job. The results are sent to the
network volume.
With the setup described, we were able to render with Alias Maya
6 on our test grid. Rendering speed was not important at this time;
our first goal was to implement the grid, and in that we succeeded.
4.3.1 Pseudo Code for Perl Script in Custom Xgrid
Plug-in
In this section we summarize in simplified pseudo code format the
Perl script used in our Xgrid plug-in.
agent_jobs{
• Read beginning frame, end frame, and cluster size of
render.
• Check whether the render can be divided into an integer
number of jobs based on the cluster size.
• If there are not an integer number of jobs, reduce the cluster
size of the last job and set its last frame to the end frame of
the render.
• Determine the start frame and end frame for each job.
• Execute the Render command.
}
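To make the partitioning step concrete, the same logic is sketched below as runnable code. It is written in Python purely for illustration (our actual plug-in invokes a Perl script), and the helper names and the specific Render flags shown are illustrative assumptions rather than part of Xgrid or of our production script.

import subprocess

# Sketch only: split a frame range into jobs of `cluster_size` frames.
# The last job may be shorter if the range does not divide evenly,
# mirroring the pseudo code above.
def agent_jobs(begin_frame, end_frame, cluster_size):
    jobs = []
    start = begin_frame
    while start <= end_frame:
        stop = min(start + cluster_size - 1, end_frame)
        jobs.append((start, stop))
        start = stop + 1
    return jobs

def render_job(scene_file, output_dir, start, stop):
    # Flag names follow common Maya command-line Render conventions and are
    # shown for illustration only; the agent executes /usr/sbin/Render.
    cmd = ["/usr/sbin/Render", "-s", str(start), "-e", str(stop),
           "-rd", output_dir, scene_file]
    return subprocess.run(cmd, check=False)

if __name__ == "__main__":
    # Example from the text: frames 201-225 with 5 frames per job -> 5 jobs.
    print(agent_jobs(201, 225, 5))

Running the example reproduces the case discussed above: frames 201 through 225 with a cluster size of 5 yield five jobs.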
4.4 Lessons Learned
Rendering with Maya from the Xgrid GUI was not trivial. The
lack of Xgrid documentation and the requirements of Maya
combined into a confusing picture, where it was difficult to decide
the true cause of the problems we encountered. Trial and error
was required to determine the best way to set up our grid.
The first hurdle was creating the directory /Alias/maya with read
and write permissions for the user nobody. The second hurdle was
learning that we got the best performance by storing the rendering
data on a network volume.
The last major hurdle was retrieving our results from the agents.
Unlike the POV-Ray rendering tests, our initial Maya results were
never returned to the client; instead, Maya stored the results in
/tmp on each agent. Specifying in the plug-in where to send the
results would not change this behavior. We decided this was
likely a Maya issue rather than an Xgrid issue, and the solution
was to send the results to the network volume via the Perl script.
5. FUTURE PLANS
Maya on Xgrid is not yet ready to be used by the students of
Ringling School. In order to do this, we must address at least the
following concerns.
• Continue our rendering tests through the command line
rather than the GUI plug-in. This will be essential for the
following step.
• Develop an appropriate interface for users to send jobs to the
Xgrid controller. This will probably be an extension to the
web interface of our existing render farm, where the student
specifies parameters that are placed in a script that issues the
Render command.
• Perform timed Maya rendering tests with Xgrid. Part of this
should compare the rendering times for Power Mac G4's and
G5's.
6. CONCLUSION
Grid computing continues to advance. Recently, the IT industry
has witnessed the emergence of numerous types of contemporary
grid applications in addition to the traditional grid framework for
compute intensive applications. For instance, peer-to-peer
applications such as Kazaa are based on storage grids that do not
share processing power but instead use an elegant protocol to swap
files between systems. Although on our campuses we discourage
students from using peer-to-peer applications for music
sharing, the same protocol can be applied to applications such as
decision support and data mining. The National Virtual
Collaboratory grid project [13] will link earthquake researchers
across the U.S. with computing resources, allowing them to share
extremely large data sets and research equipment, and to work together
as virtual teams over the Internet.
There is an assortment of new grid players in the IT world
expanding the grid computing model and advancing the grid
technology to the next level. SAP [14] is piloting a project to
grid-enable SAP ERP applications, Dell [15] has partnered with
Platform Computing to consolidate computing resources and
provide grid-enabled systems for compute intensive applications,
Oracle has integrated support for grid computing in their 10g
release [16], United Devices [17] offers a hosting service for
grid-on-demand, and Sun Microsystems continues its research and
development of Sun's N1 Grid Engine [18], which combines grid
and clustering platforms.
Simply put, grid computing is up and coming. The potential
benefits of grid computing in higher education are colossal,
while the implementation costs are low. Today, it would
be difficult to identify an application with as high a return on
investment as grid computing in information technology divisions
in higher education institutions. It is a mistake to overlook this
technology with such a high payback.
7. ACKNOWLEDGMENTS
The authors would like to thank Scott Hanselman of the IT team
at the Ringling School of Art and Design for providing valuable
input in the planning of our Xgrid testing. We would also like to
thank the posters of the Xgrid Users Mailing List [19] for providing
insight into many areas of Xgrid.
8. REFERENCES
[1] Apple Academic Research,
http://www.apple.com/education/science/profiles/vatech/.
[2] SETI@home: Search for Extraterrestrial Intelligence at
home. http://setiathome.ssl.berkeley.edu/.
[3] Alias, http://www.alias.com/.
[4] Apple Computer, Xgrid, http://www.apple.com/acg/xgrid/.
[5] Xgrid Guide, http://www.apple.com/acg/xgrid/, 2004.
[6] Apple Mac OS X Features,
http://www.apple.com/macosx/features/rendezvous/.
[7] Xgrid Manual Page, 2004.
[8] James Reynolds, Xgrid Presentation, University of Utah,
http://www.macos.utah.edu:16080/xgrid/, 2004.
[9] Research Systems Unix Group, Radmind, University of
Michigan, http://rsug.itd.umich.edu/software/radmind.
[10] Using the Radmind Command Line Tools to Maintain
Multiple Mac OS X Machines,
http://rsug.itd.umich.edu/software/radmind/files/radmindtutorial-0.8.1.pdf.
[11] POV-Ray, http://www.povray.org/.
[12] Daniel Côté, Xgrid example: Parallel graphics rendering in
POVray, http://unu.novajo.ca/simple/, 2004.
[13] NEESgrid, http://www.neesgrid.org/.
[14] SAP, http://www.sap.com/.
[15] Platform Computing, http://platform.com/.
[16] Grid, http://www.oracle.com/technologies/grid/.
[17] United Devices, Inc., http://ud.com/.
[18] N1 Grid Engine 6,
http://www.sun.com/software/gridware/index.html/.
[19] Xgrid Users Mailing List,
http://www.lists.apple.com/mailman/listinfo/xgridusers/.
124 | render;multimedia;xgrid environment;xgrid;nonlinear video editing;macintosh os x;grid computing;large-scale integration of technology;web production;animation;digital video application;rendezvous;design;visual art;highperformance computing;design education;high-end graphic;mac os x;operating system;cluster |
train_C-68 | An Evaluation of Availability Latency in Carrier-based Vehicular ad-hoc Networks | On-demand delivery of audio and video clips in peer-to-peer vehicular ad-hoc networks is an emerging area of research. Our target environment uses data carriers, termed zebroids, where a mobile device carries a data item on behalf of a server to a client thereby minimizing its availability latency. In this study, we quantify the variation in availability latency with zebroids as a function of a rich set of parameters such as car density, storage per device, repository size, and replacement policies employed by zebroids. Using analysis and extensive simulations, we gain novel insights into the design of carrier-based systems. Significant improvements in latency can be obtained with zebroids at the cost of a minimal overhead. These improvements occur even in scenarios with lower accuracy in the predictions of the car routes. Two particularly surprising findings are: (1) a naive random replacement policy employed by the zebroids shows competitive performance, and (2) latency improvements obtained with a simplified instantiation of zebroids are found to be robust to changes in the popularity distribution of the data items. | 1. INTRODUCTION
Technological advances in areas of storage and wireless
communications have now made it feasible to envision on-demand delivery
of data items, e.g., video and audio clips, in vehicular
peer-to-peer networks. In prior work, Ghandeharizadeh et al. [10]
introduce the concept of vehicles equipped with a
Car-to-Car-Peer-to-Peer device, termed AutoMata, for in-vehicle entertainment. The
notable features of an AutoMata include a mass storage device
offering hundreds of gigabytes (GB) of storage, a fast processor, and
several types of networking cards. Even with today's 500 GB disk
drives, a repository of diverse entertainment content may exceed
the storage capacity of a single AutoMata. Such repositories
constitute the focus of this study. To exchange data, we assume each
AutoMata is configured with two types of networking cards: 1) a
low-bandwidth networking card with a long radio-range in the
order of miles that enables an AutoMata device to communicate with
a nearby cellular or WiMax station, 2) a high-bandwidth
networking card with a limited radio-range in the order of hundreds of feet.
The high bandwidth connection supports data rates in the
order of tens to hundreds of Megabits per second and represents the
ad-hoc peer-to-peer network between the vehicles. This is
labelled as the data plane and is employed to exchange data items
between devices. The low-bandwidth connection serves as the
control plane, enabling AutoMata devices to exchange meta-data with
one or more centralized servers. This connection offers bandwidths
in the order of tens to hundreds of Kilobits per second. The
centralized servers, termed dispatchers, compute schedules of data
delivery along the data plane using the provided meta-data. These
schedules are transmitted to the participating vehicles using the
control plane. The technical feasibility of such a two-tier
architecture is presented in [7], with preliminary results demonstrating that the
bandwidth of the control plane is sufficient for exchange of control
information needed for realizing such an application.
In a typical scenario, an AutoMata device presents a passenger
with a list of data items (without loss of generality, a data item
might be either traditional media such as text or continuous media
such as an audio or video clip), showing both the name of each data
item and its availability latency. The latter, denoted as δ, is defined as
the earliest time at which the client encounters a copy of its
requested data item. A data item is available immediately when it
resides in the local storage of the AutoMata device serving the
request. Due to storage constraints, an AutoMata may not store the
entire repository. In this case, availability latency is the time from
when the user issues a request until when the AutoMata encounters
another car containing the referenced data item. (The terms car and
AutoMata are used interchangeably in this study.)
The availability latency for an item is a function of the current
location of the client, its destination and travel path, the mobility
model of the AutoMata equipped vehicles, the number of replicas
constructed for the different data items, and the placement of data
item replicas across the vehicles. A method to improve the
availability latency is to employ data carriers which transport a replica
of the requested data item from a server car containing it to a client
that requested it. These data carriers are termed ‘zebroids".
Selection of zebroids is facilitated by the two-tiered architecture.
The control plane enables centralized information gathering at a
dispatcher present at a base-station. (There may be dispatchers
deployed at a subset of the base-stations for fault-tolerance and
robustness; dispatchers between base-stations may communicate via
the wired infrastructure.) Some examples of control information
are currently active requests, travel paths of the clients and
their destinations, and paths of the other cars. For each client
request, the dispatcher may choose a set of z carriers that collaborate
to transfer a data item from a server to a client (z-relay zebroids).
Here, z is the number of zebroids such that 0 ≤ z < N, where
N is the total number of cars. When z = 0 there are no carriers,
requiring a server to deliver the data item directly to the client.
Otherwise, the chosen relay team of z zebroids hand over the data item
transitively to one another to arrive at the client, thereby reducing
availability latency (see Section 3.1 for details). To increase
robustness, the dispatcher may employ multiple relay teams of z-carriers
for every request. This may be useful in scenarios where the
dispatcher has lower prediction accuracy in the information about the
routes of the cars. Finally, storage constraints may require a zebroid
to evict existing data items from its local storage to accommodate
the client requested item.
In this study, we quantify the following main factors that affect
availability latency in the presence of zebroids: (i) data item
repository size, (ii) car density, (iii) storage capacity per car, (iv) client
trip duration, (v) replacement scheme employed by the zebroids,
and (vi) accuracy of the car route predictions. For a significant
subset of these factors, we address some key questions pertaining to
use of zebroids both via analysis and extensive simulations.
Our main findings are as follows. A naive random replacement
policy employed by the zebroids shows competitive performance
in terms of availability latency. With such a policy, substantial
improvements in latency can be obtained with zebroids at a minimal
replacement overhead. In more practical scenarios, where the
dispatcher has inaccurate information about the car routes, zebroids
continue to provide latency improvements. A surprising result is
that changes in popularity of the data items do not impact the
latency gains obtained with a simple instantiation of z-relay zebroids
called one-instantaneous zebroids (see Section 3.1). This study
suggests a number of interesting directions to be pursued to gain
a better understanding of the design of carrier-based systems that
improve availability latency.
Related Work: Replication in mobile ad-hoc networks has been
a widely studied topic [11, 12, 15]. However, none of these
studies employ zebroids as data carriers to reduce the latency of the
client's requests. Several novel and important studies such as
ZebraNet [13], DakNet [14], Data Mules [16], Message Ferries [20],
and Seek and Focus [17] have analyzed factors impacting
intermittently connected networks consisting of data carriers similar in
spirit to zebroids. Factors considered by each study are dictated by
their assumed environment and target application. A novel
characteristic of our study is the impact on availability latency for a
given database repository of items. A detailed description of
related works can be obtained in [9].
The rest of this paper is organized as follows. Section 2
provides an overview of the terminology along with the factors that
impact availability latency in the presence of zebroids. Section 3
describes how the zebroids may be employed. Section 4 provides
details of the analysis methodology employed to capture the
performance with zebroids. Section 5 describes the details of the
simulation environment used for evaluation. Section 6 enlists the key
questions examined in this study and answers them via analysis
and simulations. Finally, Section 7 presents brief conclusions and
future research directions.
2. OVERVIEW AND TERMINOLOGY
Table 1 summarizes the notation of the parameters used in the
paper. Below we introduce some terminology used in the paper.
Assume a network of N AutoMata-equipped cars, each with
storage capacity of α bytes. The total storage capacity of the
system is ST =N ·α. There are T data items in the database, each with
Database Parameters
T Number of data items.
Si Size of data item i
fi Frequency of access to data item i.
Replication Parameters
Ri Normalized frequency of access to data item i
ri Number of replicas for data item i
n Characterizes a particular replication scheme.
δi Average availability latency of data item i
δagg Aggregate availability latency, $\delta_{agg} = \sum_{j=1}^{T} \delta_j \cdot f_j$
AutoMata System Parameters
G Number of cells in the map (2D-torus).
N Number of AutoMata devices in the system.
α Storage capacity per AutoMata.
γ Trip duration of the client AutoMata.
ST Total storage capacity of the AutoMata system, ST = N · α.
Table 1: Terms and their definitions
size $S_i$. The frequency of access to data item i is denoted as $f_i$,
with $\sum_{j=1}^{T} f_j = 1$. Let the trip duration of the client AutoMata
under consideration be γ.
We now define the normalized frequency of access to the data
item i, denoted by Ri, as:
$$R_i = \frac{(f_i)^n}{\sum_{j=1}^{T}(f_j)^n}; \quad 0 \le n \le \infty \qquad (1)$$
The exponent n characterizes a particular replication technique.
A square-root replication scheme is realized when n = 0.5 [5].
This serves as the base-line for comparison with the case when
zebroids are deployed. Ri is normalized to a value between 0 and
1. The number of replicas for data item i, denoted as $r_i$, is
$r_i = \min(N, \max(1, \frac{R_i \cdot N \cdot \alpha}{S_i}))$. This captures the case when at least
one copy of every data item must be present in the ad-hoc network
at all times. In cases where a data item may be lost from the ad-hoc
network, this equation becomes $r_i = \min(N, \max(0, \frac{R_i \cdot N \cdot \alpha}{S_i}))$.
In this case, a request for the lost data item may need to be satisfied
by fetching the item from a remote server.
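A minimal sketch of this replica-allocation computation is given below; rounding $r_i$ down to an integer is an assumption made for illustration, since the text does not prescribe a rounding rule, and the function name is ours.

# Sketch of Equation 1 and the replica-count rule: given access frequencies
# f, exponent n, N cars with alpha storage slots each, and item sizes S,
# compute normalized frequencies R_i and replica counts r_i.
def replica_counts(f, n, N, alpha, S, keep_one_copy=True):
    weights = [fi ** n for fi in f]
    total = sum(weights)
    R = [w / total for w in weights]
    floor = 1 if keep_one_copy else 0
    return [int(min(N, max(floor, R_i * N * alpha / S_i)))
            for R_i, S_i in zip(R, S)]

# Example: 4 items, skewed frequencies, square-root replication (n = 0.5).
f = [0.5, 0.25, 0.15, 0.10]
print(replica_counts(f, n=0.5, N=20, alpha=2, S=[1, 1, 1, 1]))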
The availability latency for a data item i, denoted as δi, is defined
as the earliest time at which a client AutoMata will find the first
replica of the item accessible to it. If this condition is not satisfied,
then we set δi to γ. This indicates that data item i was not available
to the client during its journey. Note that since there is at least one
replica in the system for every data item i, by setting γ to a large
value we ensure that the client's request for any data item i will be
satisfied. However, in most practical circumstances γ may not be
so large as to find every data item.
We are interested in the availability latency observed across all
data items. Hence, we augment the average availability latency
for every data item i with its fi to obtain the following weighted
availability latency ($\delta_{agg}$) metric: $\delta_{agg} = \sum_{i=1}^{T} \delta_i \cdot f_i$.
Next, we present our solution approach describing how zebroids
are selected.
3. SOLUTION APPROACH
3.1 Zebroids
When a client references a data item missing from its local
storage, the dispatcher identifies all cars with a copy of the data item
as servers. Next, the dispatcher obtains the future routes of all cars
for a finite time duration equivalent to the maximum time the client
is willing to wait for its request to be serviced. Using this
information, the dispatcher schedules the quickest delivery path from any
of the servers to the client using any other cars as intermediate
carriers. Hence, it determines the optimal set of forwarding decisions
that will enable the data item to be delivered to the client in the
minimum amount of time. Note that the latency along the quickest
delivery path that employs a relay team of z zebroids is similar to
that obtained with epidemic routing [19] under the assumptions of
infinite storage and no interference.
A simple instantiation of z-relay zebroids occurs when z = 1
and the client's request triggers a transfer of a copy of the requested
data item from a server to a zebroid in its vicinity. Such a
zebroid is termed one-instantaneous zebroid. In some cases, the
dispatcher might have inaccurate information about the routes of
the cars. Hence, a zebroid scheduled on the basis of this inaccurate
information may not rendezvous with its target client. To minimize
the likelihood of such scenarios, the dispatcher may schedule
multiple zebroids. This may incur additional overhead due to redundant
resource utilization to obtain the same latency improvements.
The time required to transfer a data item from a server to a
zebroid depends on its size and the available link bandwidth. With
small data items, it is reasonable to assume that this transfer time is
small, especially in the presence of the high bandwidth data plane.
Large data items may be divided into smaller chunks enabling the
dispatcher to schedule one or more zebroids to deliver each chunk
to a client in a timely manner. This remains a future research
direction.
Initially, the number of replicas for each data item might be
computed using Equation 1. This scheme computes the number
of data item replicas as a function of their popularity. It is static
because the number of replicas in the system does not change and no
replacements are performed. Hence, this is referred to as the
'no-zebroids' environment. We quantify the performance of the various
replacement policies with reference to this base-line that does not
employ zebroids.
One may assume a cold start phase, where initially only one or
a few copies of every data item exist in the system. Many storage
slots of the cars may be unoccupied. When the cars encounter one
another they construct new replicas of some selected data items to
occupy the empty slots. The selection procedure may be to choose
the data items uniformly at random. New replicas are created as
long as a car has a certain threshold of its storage unoccupied.
Eventually, the majority of the storage capacity of a car will be
exhausted.
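A minimal sketch of this cold-start filling step is shown below; the fill-threshold parameter and the helper name are illustrative assumptions, while the uniform random selection follows the description above.

import random

# Sketch of the cold start phase: a car with spare capacity copies data
# items chosen uniformly at random from the repository (skipping items it
# already holds) until a target fraction of its alpha slots is occupied.
def cold_start_fill(local_items, repository_size, alpha, fill_threshold=1.0):
    target = int(alpha * fill_threshold)
    items = set(local_items)
    candidates = [i for i in range(repository_size) if i not in items]
    random.shuffle(candidates)
    while len(items) < target and candidates:
        items.add(candidates.pop())
    return items

# Example: a car with 4 slots holding only item 2 fills up from 25 items.
random.seed(0)
print(sorted(cold_start_fill({2}, repository_size=25, alpha=4)))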
3.2 Carrier-based Replacement policies
The replacement policies considered in this paper are reactive
since a replacement occurs only in response to a request issued for a
certain data item. When the local storage of a zebroid is completely
occupied, it needs to replace one of its existing items to carry the
client requested data item. For this purpose, the zebroid must
select an appropriate candidate for eviction. This decision process
is analogous to that encountered in operating system paging where
the goal is to maximize the cache hit ratio to prevent disk access
delay [18]. The carrier-based replacement policies employed in our
study are Least Recently Used (LRU), Least Frequently Used
(LFU) and Random (where an eviction candidate is chosen
uniformly at random). We have considered local and global variants
of the LRU/LFU policies which determine whether local or global
knowledge of contents of the cars known at the dispatcher is used
for the eviction decision at a zebroid (see [9] for more details).
The replacement policies incur the following overheads. First,
the complexity associated with the implementation of a policy.
Second, the bandwidth used to transfer a copy of a data item from a
server to the zebroid. Third, the average number of replacements
incurred by the zebroids. Note that in the no-zebroids case neither
overhead is incurred.
The metrics considered in this study are aggregate availability
latency, δagg, percentage improvement in δagg with zebroids as
compared to the no-zebroids case, and the average number of replacements
incurred per client request, which is an indicator of the overhead
incurred by zebroids.
Note that the dispatchers with the help of the control plane may
ensure that no data item is lost from the system. In other words,
at least one replica of every data item is maintained in the ad-hoc
network at all times. In such cases, even though a car may meet a
requesting client earlier than other servers, if its local storage
contains data items with only a single copy in the system, then such a
car is not chosen as a zebroid.
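As an illustration of this eviction step, the sketch below selects a victim under the random, lfu, or lru variants while never evicting an item whose only copy in the network resides on this zebroid; the bookkeeping structures passed in are assumptions made for illustration, not the dispatcher's actual data structures.

import random

# Sketch of the replacement decision at a zebroid with a full local store.
# Items with a single replica in the system are never evicted, so no data
# item is lost from the ad-hoc network.
def choose_victim(local_items, global_replica_count, policy="random",
                  access_count=None, last_access=None):
    candidates = [i for i in local_items if global_replica_count[i] > 1]
    if not candidates:
        return None  # this car cannot serve as a zebroid for the request
    if policy == "lfu":
        return min(candidates, key=lambda i: access_count[i])
    if policy == "lru":
        return min(candidates, key=lambda i: last_access[i])
    return random.choice(candidates)

# Example: item 7 is the only copy in the network, so it is never chosen.
print(choose_victim([3, 7, 12], {3: 4, 7: 1, 12: 2}))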
4. ANALYSIS METHODOLOGY
Here, we present the analytical evaluation methodology and some
approximations as closed-form equations that capture the
improvements in availability latency that can be obtained with both
one-instantaneous and z-relay zebroids. First, we present some
preliminaries of our analysis methodology.
• Let N be the number of cars in the network performing a 2D
random walk on a $\sqrt{G} \times \sqrt{G}$ torus. An additional car serves
as a client yielding a total of N + 1 cars. Such a mobility
model has been used widely in the literature [17, 16] chiefly
because it is amenable to analysis and provides a baseline
against which performance of other mobility models can be
compared. Moreover, this class of Markovian mobility
models has been used to model the movements of vehicles [3,
21].
• We assume that all cars start from the stationary distribution
and perform independent random walks. Although for sparse
density scenarios, the independence assumption does hold, it
is no longer valid when N approaches G.
• Let the size of data item repository of interest be T. Also,
data item i has ri replicas. This implies ri cars, identified as
servers, have a copy of this data item when the client requests
item i.
All analysis results presented in this section are obtained
assuming that the client is willing to wait as long as it takes for its request
to be satisfied (unbounded trip duration γ = ∞). With the random
walk mobility model on a 2D-torus, there is a guarantee that as
long as there is at least one replica of the requested data item in the
network, the client will eventually encounter this replica [2].
Extensions to the analysis that also consider finite trip durations can
be obtained in [9].
Consider a scenario where no zebroids are employed. In this
case, the expected availability latency for the data item is the
expected meeting time of the random walk undertaken by the client
with any of the random walks performed by the servers. Aldous et
al. [2] show that the meeting time of two random walks in such
a setting can be modelled as an exponential distribution with
mean $C = c \cdot G \cdot \log G$, where the constant $c \approx 0.17$ for $G \ge 25$.
The meeting time, or equivalently the availability latency $\delta_i$, for
the client requesting data item i is the time till it encounters any of
these $r_i$ replicas for the first time. This is also an exponential
distribution with the following expected value (note that this formulation
is valid only for sparse cases when $G \gg r_i$): $\delta_i = \frac{c \cdot G \cdot \log G}{r_i}$.
The aggregate availability latency without employing zebroids is
then this expression averaged over all data items, weighted by their
frequency of access:
$$\delta_{agg}(no\text{-}zeb) = \sum_{i=1}^{T} \frac{f_i \cdot c \cdot G \cdot \log G}{r_i} = \sum_{i=1}^{T} \frac{f_i \cdot C}{r_i} \qquad (2)$$
4.1 One-instantaneous zebroids
Recall that with one-instantaneous zebroids, for a given request,
a new replica is created on a car in the vicinity of the server,
provided this car meets the client earlier than any of the ri servers.
Moreover, this replica is spawned at the time step when the client
issues the request. Let $N_i^c$ be the expected total number of nodes
that are in the same cell as any of the $r_i$ servers. Then, we have
$$N_i^c = (N - r_i) \cdot \left(1 - \left(1 - \frac{1}{G}\right)^{r_i}\right) \qquad (3)$$
In the analytical model, we assume that $N_i^c$ new replicas are
created, so that the total number of replicas is increased to $r_i + N_i^c$.
The availability latency is reduced since the client is more likely to
meet a replica earlier. The aggregated expected availability latency
in the case of one-instantaneous zebroids is then given by,
$$\delta_{agg}(zeb) = \sum_{i=1}^{T} \frac{f_i \cdot c \cdot G \cdot \log G}{r_i + N_i^c} = \sum_{i=1}^{T} \frac{f_i \cdot C}{r_i + N_i^c} \qquad (4)$$
Note that in obtaining this expression, for ease of analysis, we
have assumed that the new replicas start from random locations
in the torus (not necessarily from the same cell as the original ri
servers). It thus treats all the $N_i^c$ carriers independently, just like
the ri original servers. As we shall show below by comparison
with simulations, this approximation provides an upper-bound on
the improvements that can be obtained because it results in a lower
expected latency at the client.
It should be noted that the procedure listed above will yield a
latency similar to that obtained by a dispatcher employing
one-instantaneous zebroids (see Section 3.1). Since the dispatcher is
aware of all future car movements, it would only transfer the
requested data item on a single zebroid, if it determines that the
zebroid will meet the client earlier than any other server. This selected
zebroid is included in the $N_i^c$ new replicas.
4.2 z-relay zebroids
To calculate the expected availability latency with z-relay
zebroids, we use a coloring problem analog similar to an approach
used by Spyropoulos et al. [17]. Details of the procedure to obtain
a closed-form expression are given in [9]. The aggregate
availability latency (δagg) with z-relay zebroids is given by,
$$\delta_{agg}(zeb) = \sum_{i=1}^{T}\left[f_i \cdot \frac{C}{N+1} \cdot \frac{1}{N+1-r_i} \cdot \left(N \cdot \log\frac{N}{r_i} - \log\left(N+1-r_i\right)\right)\right] \qquad (5)$$
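The closed-form expressions in Equations 2 through 5 are straightforward to evaluate numerically. A minimal sketch is given below; it assumes $c = 0.17$, natural logarithms, and a given replica vector, and the function names are ours. It is only an illustration of the stated formulas, not part of our analysis toolchain.

import math

C_CONST = 0.17  # c ~= 0.17 for G >= 25, so C = c * G * log(G)

def delta_no_zeb(f, r, G):
    # Equation 2: no zebroids.
    C = C_CONST * G * math.log(G)
    return sum(fi * C / ri for fi, ri in zip(f, r))

def delta_one_instantaneous(f, r, G, N):
    # Equations 3 and 4: one-instantaneous zebroids.
    C = C_CONST * G * math.log(G)
    total = 0.0
    for fi, ri in zip(f, r):
        Nc = (N - ri) * (1.0 - (1.0 - 1.0 / G) ** ri)
        total += fi * C / (ri + Nc)
    return total

def delta_z_relay(f, r, G, N):
    # Equation 5: z-relay zebroids.
    C = C_CONST * G * math.log(G)
    return sum(fi * (C / (N + 1)) * (1.0 / (N + 1 - ri)) *
               (N * math.log(N / ri) - math.log(N + 1 - ri))
               for fi, ri in zip(f, r))

# Example: 10 items requested uniformly, one replica each, 10 x 10 torus.
f = [0.1] * 10
r = [1] * 10
print(delta_no_zeb(f, r, G=100),
      delta_one_instantaneous(f, r, G=100, N=50),
      delta_z_relay(f, r, G=100, N=50))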
5. SIMULATION METHODOLOGY
The simulation environment considered in this study comprises
of vehicles such as cars that carry a fraction of the data item
repository. A prediction accuracy parameter inherently provides a certain
probabilistic guarantee on the confidence of the car route
predictions known at the dispatcher. A value of 100% implies that the
exact routes of all cars are known at all times. A 70% value for this
parameter indicates that the routes predicted for the cars will match
the actual ones with probability 0.7. Note that this probability is
spread across the car routes for the entire trip duration. We now
provide the preliminaries of the simulation study and then describe
the parameter settings used in our experiments.
• Similar to the analysis methodology, the map used is a 2D
torus. A Markov mobility model representing a unbiased 2D
random walk on the surface of the torus describes the
movement of the cars across this torus.
• Each grid/cell is a unique state of this Markov chain. In each
time slot, every car makes a transition from a cell to any of
its neighboring 8 cells. The transition is a function of the
current location of the car and a probability transition matrix
Q = [qij] where qij is the probability of transition from state
i to state j. Only AutoMata equipped cars within the same
cell may communicate with each other.
• The parameters γ, δ have been discretized and expressed in
terms of the number of time slots.
• An AutoMata device does not maintain more than one replica
of a data item. This is because additional replicas occupy
storage without providing benefits.
• Either one-instantaneous or z-relay zebroids may be employed
per client request for latency improvement.
• Unless otherwise mentioned, the prediction accuracy
parameter is assumed to be 100%. This is because this study
aims to quantify the effect of a large number of parameters
individually on availability latency.
Here, we set the size of every data item, Si, to be 1. α represents
the number of storage slots per AutoMata. Each storage slot stores
one data item. γ represents the duration of the client's journey in
terms of the number of time slots. Hence the possible values of
availability latency are between 0 and γ. δ is defined as the number
of time slots after which a client AutoMata device will encounter a
replica of the data item for the first time. If a replica for the data
item requested was encountered by the client in the first cell then
we set δ = 0. If δ > γ then we set δ = γ indicating that no copy
of the requested data item was encountered by the client during its
entire journey. In all our simulations, for illustration we consider a
5 × 5 2D-torus with γ set to 10. Our experiments indicate that the
trends in the results scale to maps of larger size.
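A minimal sketch of one client request under this mobility model is given below; it assumes uniform transition probabilities to the eight neighboring cells, whereas the simulator's transition matrix Q can encode arbitrary probabilities, and the function names are ours.

import random

# One-step random walk on a side x side torus: each car moves to one of its
# 8 neighboring cells. Availability latency is the first slot, up to the
# trip duration gamma, at which the client shares a cell with any server.
def step(pos, side):
    dx, dy = random.choice([(-1, -1), (-1, 0), (-1, 1), (0, -1),
                            (0, 1), (1, -1), (1, 0), (1, 1)])
    return ((pos[0] + dx) % side, (pos[1] + dy) % side)

def availability_latency(client, servers, side, gamma):
    for t in range(gamma + 1):
        if any(client == s for s in servers):
            return t
        client = step(client, side)
        servers = [step(s, side) for s in servers]
    return gamma  # the item was not encountered during the trip

# Example: 5 x 5 torus, gamma = 10, two servers holding the requested item.
random.seed(1)
print(availability_latency((0, 0), [(3, 3), (4, 1)], side=5, gamma=10))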
We simulated a skewed distribution of access to the T data items
that obeys Zipf's law with a mean of 0.27. This distribution is
shown to correspond to sale of movie theater tickets in the United
States [6]. We employ a replication scheme that allocates replicas
for a data item as a function of the square-root of the frequency of
access of that item. The square-root replication scheme is shown
to have competitive latency performance over a large parameter
space [8]. The data item replicas are distributed uniformly across
the AutoMata devices. This serves as the base-line no-zebroids
case. The square-root scheme also provides the initial replica
distribution when zebroids are employed. Note that the replacements
performed by the zebroids will cause changes to the data item replica
distribution. Requests generated as per the Zipf distribution are
issued one at a time. The client car that issues the request is chosen
in a round-robin manner. After a maximum period of γ, the latency
encountered by this request is recorded.
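The workload generation can be sketched as follows; the Zipf exponent used here is an illustrative assumption, since the text characterizes the distribution only by its mean of 0.27, and the helper names are ours.

import random

# Sketch of the simulated workload: item popularities follow a Zipf-like
# law and requests are issued one at a time by clients chosen round-robin.
def zipf_frequencies(T, theta=1.0):
    weights = [1.0 / (rank ** theta) for rank in range(1, T + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def next_request(frequencies):
    return random.choices(range(len(frequencies)), weights=frequencies)[0]

T, N = 25, 50
f = zipf_frequencies(T)
random.seed(0)
for request_id in range(3):
    client = request_id % N   # round-robin choice of the requesting car
    item = next_request(f)
    print(client, item)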
In all the simulation results, each point is an average of 200,000
requests. Moreover, the 95% confidence intervals determined for
the results are quite tight for the metrics of latency and
replacement overhead. Hence, we only present them for the metric that
captures the percentage improvement in latency with respect to the
no-zebroids case.
6. RESULTS
In this section, we describe our evaluation results where the
following key questions are addressed. With a wide choice of
replacement schemes available for a zebroid, what is their effect on
availability latency?
Figure 1: Availability latency ($\delta_{agg}$) versus the number of cars with one-instantaneous zebroids under the lru_global, lfu_global, lru_local, lfu_local, and random replacement policies, as a function of (N, α) values, when the total storage in the system is kept fixed, $S_T = 200$.
A more central question is: Do zebroids provide
significant improvements in availability latency? What is the
associated overhead incurred in employing these zebroids? What
happens to these improvements in scenarios where a dispatcher may
have imperfect information about the car routes? What inherent
trade-offs exist between car density and storage per car with
regards to their combined as well as individual effect on availability
latency in the presence of zebroids? We present both simple
analysis and detailed simulations to provide answers to these questions
as well as gain insights into design of carrier-based systems.
6.1 How does a replacement scheme employed
by a zebroid impact availability latency?
For illustration, we present 'scale-up' experiments where
one-instantaneous zebroids are employed (see Figure 1). By scale-up,
we mean that α and N are changed proportionally to keep the total
system storage, ST , constant. Here, T = 50 and ST = 200. We
choose the following values of (N,α) = {(20,10), (25,8), (50,4),
(100,2)}. The figure indicates that a random replacement scheme
shows competitive performance. This is because of several reasons.
Recall that the initial replica distribution is set as per the
square-root replication scheme. The random replacement scheme does not
alter this distribution since it makes replacements blind to the
popularity of a data item. However, the replacements cause dynamic
data re-organization so as to better serve the currently active
request. Moreover, the mobility pattern of the cars is random, hence,
the locations from which the requests are issued by clients are also
random and not known a priori at the dispatcher. These findings
are significant because a random replacement policy can be
implemented in a simple decentralized manner.
The lru-global and lfu-global schemes provide a latency
performance that is worse than random. This is because these
policies rapidly develop a preference for the more popular data items
thereby creating a larger number of replicas for them. During
eviction, the more popular data items are almost never selected as a
replacement candidate. Consequently, there are fewer replicas for
the less popular items. Hence, the initial distribution of the data
item replicas changes from square-root to that resembling linear
replication. The higher number of replicas for the popular data
items provide marginal additional benefits, while the lower number
of replicas for the other data items hurts the latency performance of
these global policies. The lfu-local and lru-local schemes have
similar performance to random since they do not have enough history
of local data item requests. We speculate that the performance of
these local policies will approach that of their global variants for a
large enough history of data item requests. On account of the
competitive performance shown by a random policy, for the remainder
of the paper, we present the performance of zebroids that employ a
random replacement policy.
6.2 Do zebroids provide significant
improvements in availability latency?
We find that in many scenarios employing zebroids provides
substantial improvements in availability latency.
6.2.1 Analysis
We first consider the case of one-instantaneous zebroids.
Figure 2.a shows the variation in δagg as a function of N for T = 10
and α = 1 with a 10 × 10 torus using Equation 4. Both the x and y
axes are drawn to a log-scale. Figure 2.b shows the % improvement
in δagg obtained with one-instantaneous zebroids. In this case, only
the x-axis is drawn to a log-scale. For illustration, we assume that
the T data items are requested uniformly.
Initially, when the network is sparse the analytical approximation
for improvements in latency with zebroids, obtained from
Equations 2 and 4, closely matches the simulation results. However, as
N increases, the sparseness assumption for which the analysis is
valid, namely N << G, is no longer true. Hence, the two curves
rapidly diverge. The point at which the two curves move away from
each other corresponds to a value of δagg ≤ 1. Moreover, as
mentioned earlier, the analysis provides an upper bound on the latency
improvements, as it treats the newly created replicas given by Nc
i
independently. However, these Nc
i replicas start from the same cell
as one of the server replicas ri. Finally, the analysis captures a
oneshot scenario where given an initial data item replica distribution,
the availability latency is computed. The new replicas created do
not affect future requests from the client.
On account of space limitations, here, we summarize the
observations in the case when z-relay zebroids are employed. The
interested reader can obtain further details in [9]. Similar observations,
like the one-instantaneous zebroid case, apply since the simulation
and analysis curves again start diverging when the analysis
assumptions are no longer valid. However, the key observation here is that
the latency improvement with z-relay zebroids is significantly
better than the one-instantaneous zebroids case, especially for lower
storage scenarios. This is because in sparse scenarios, the
transitive hand-offs between the zebroids create a higher number of
replicas for the requested data item, yielding lower availability latency.
Moreover, it is also seen that the simulation validation curve for the
improvements in δagg with z-relay zebroids approaches that of the
one-instantaneous zebroid case for higher storage (higher N
values). This is because one-instantaneous zebroids are a special case
of z-relay zebroids.
6.2.2 Simulation
We conduct simulations to examine the entire storage spectrum
obtained by changing car density N or storage per car α to also
capture scenarios where the sparseness assumptions for which the
analysis is valid do not hold. We separate the effect of N and α
by capturing the variation of N while keeping α constant (case
1) and vice-versa (case 2) both with z-relay and one-instantaneous
zebroids. Here, we set the repository size as T = 25. Figure 3
captures case 1 mentioned above. Similar trends are observed with
case 2, a complete description of those results are available in [9].
With Figure 3.b, keeping α constant, initially increasing car
density has higher latency benefits because increasing N introduces
more zebroids in the system. As N is further increased, ω reduces
because the total storage in the system goes up. Consequently, the
number of replicas per data item goes up thereby increasing the
number of servers. Hence, the replacement policy cannot find a
zebroid as often to transport the requested data item to the client
earlier than any of the servers. On the other hand, the increased
number of servers benefits the no-zebroids case in bringing δagg
down. The net effect results in reduction in ω for larger values of
N.
Figure 2: Latency performance with one-instantaneous zebroids via simulations along with the analytical approximation for a 10 × 10 torus with T = 10. Panel 2.a plots the aggregate availability latency $\delta_{agg}$ versus the number of cars (log-log scale); panel 2.b plots the % improvement in $\delta_{agg}$ with respect to no-zebroids (ω), comparing the analytical upper-bound with simulation.
The trends mentioned above are similar to that obtained from the
analysis. However, somewhat counter-intuitively with relatively
higher system storage, z-relay zebroids provide slightly lower
improvements in latency as compared to one-instantaneous zebroids.
We speculate that this is due to the different data item replica
distributions enforced by them. Note that replacements performed by
the zebroids cause fluctuations in these replica distributions which
may effect future client requests. We are currently exploring
suitable choices of parameters that can capture these changing replica
distributions.
6.3 What is the overhead incurred with
improvements in latency with zebroids?
We find that the improvements in latency with zebroids are
obtained at a minimal replacement overhead (< 1 per client request).
6.3.1 Analysis
With one-instantaneous zebroids, for each client request a
maximum of one zebroid is employed for latency improvement. Hence,
the replacement overhead per client request can amount to a
maximum of one.
Figure 3: Latency performance with both one-instantaneous and z-relay zebroids as a function of the car density when α = 2 and T = 25. Panel 3.a plots the aggregate availability latency $\delta_{agg}$ (including the no-zebroids case); panel 3.b plots the % improvement in $\delta_{agg}$ with respect to no-zebroids (ω).
Recall that to calculate the latency with one-instantaneous
zebroids, $N_i^c$ new replicas are created in the same cell as the servers.
Now a replacement is only incurred if one of these $N_i^c$ newly
created replicas meets the client earlier than any of the $r_i$ servers.
Let $X_{r_i}$ and $X_{N_i^c}$ respectively be random variables that capture
the minimum time till any of the $r_i$ and $N_i^c$ replicas meet the client.
Since $X_{r_i}$ and $X_{N_i^c}$ are assumed to be independent, by the property
of exponentially distributed random variables we have,
$$\text{Overhead/request} = 1 \cdot P(X_{N_i^c} < X_{r_i}) + 0 \cdot P(X_{r_i} \le X_{N_i^c}) \qquad (6)$$
$$\text{Overhead/request} = \frac{N_i^c/C}{r_i/C + N_i^c/C} = \frac{N_i^c}{r_i + N_i^c} \qquad (7)$$
Recall that the number of replicas for data item i, $r_i$, is a function
of the total storage in the system, i.e., $r_i = k \cdot N \cdot \alpha$ where k satisfies
the constraint $1 \le r_i \le N$. Using this along with Equation 3, we
get
$$\text{Overhead/request} = 1 - \frac{G}{G + N \cdot (1 - k \cdot \alpha)} \qquad (8)$$
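To make this step explicit, a sketch of the algebra under the sparse-grid approximation $G \gg r_i$ (so that $1-(1-\frac{1}{G})^{r_i} \approx \frac{r_i}{G}$) is as follows: Equation 3 gives $N_i^c \approx \frac{(N-r_i)\,r_i}{G}$, and substituting into Equation 7 yields
$$\frac{N_i^c}{r_i + N_i^c} \approx \frac{(N-r_i)/G}{1 + (N-r_i)/G} = 1 - \frac{G}{G + N - r_i} = 1 - \frac{G}{G + N\,(1 - k\,\alpha)},$$
where the last equality uses $r_i = k \cdot N \cdot \alpha$.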
Now if we keep the total system storage N · α constant since
G and T are also constant, increasing N increases the replacement
overhead.
Figure 4: Average number of replacements per request with one-instantaneous zebroids as a function of (N, α) values, namely (20,10), (25,8), (50,4), and (100,2), when the total storage in the system is kept fixed, $S_T = 200$.
However, if $N \cdot \alpha$ is constant then increasing N causes α
to go down. This implies that a higher replacement overhead is
incurred for higher N and lower α values. Moreover, when ri = N,
this means that every car has a replica of data item i. Hence, no
zebroids are employed when this item is requested, yielding an
overhead/request for this item as zero. Next, we present
simulation results that validate our analysis hypothesis for the overhead
associated with deployment of one-instantaneous zebroids.
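As a quick numerical illustration of Equations 7 and 8, the sketch below evaluates both expressions for a single data item with $r_i = k \cdot N \cdot \alpha$ replicas, using the (N, α) pairs of Figure 4; the values of G and k are illustrative choices and do not correspond to the simulation settings.

# Sketch comparing the exact per-request overhead (Equations 3 and 7) with
# the approximation of Equation 8 for one data item.
def overhead_exact(N, r, G):
    Nc = (N - r) * (1.0 - (1.0 - 1.0 / G) ** r)
    return Nc / (r + Nc)

def overhead_approx(N, alpha, k, G):
    return 1.0 - G / (G + N * (1.0 - k * alpha))

G, k = 25, 0.02
for N, alpha in [(20, 10), (25, 8), (50, 4), (100, 2)]:
    r = max(1, int(k * N * alpha))
    print(N, alpha, round(overhead_exact(N, r, G), 3),
          round(overhead_approx(N, alpha, k, G), 3))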
6.3.2 Simulation
Figure 4 shows the replacement overhead with one-instantaneous
zebroids when (N,α) are varied while keeping the total system
storage constant. The trends shown by the simulation are in agreement
with those predicted by the analysis above. However, the total
system storage can be changed either by varying car density (N) or
storage per car (α). On account of similar trends, here we present
the case when α is kept constant and N is varied (Figure 5). We
refer the reader to [9] for the case when α is varied and N is held
constant.
We present an intuitive argument for the behavior of the
per-request replacement overhead curves. When the storage is extremely
scarce so that only one replica per data item exists in the AutoMata
network, the number of replacements performed by the zebroids is
zero since any replacement will cause a data item to be lost from
the system. The dispatcher ensures that no data item is lost from
the system. At the other end of the spectrum, if storage becomes
so abundant that α = T then the entire data item repository can
be replicated on every car. The number of replacements is again
zero since each request can be satisfied locally. A similar scenario
occurs if N is increased to such a large value that another car with
the requested data item is always available in the vicinity of the
client. However, there is a storage spectrum in the middle where
replacements by the scheduled zebroids result in improvements in
δagg (see Figure 3).
Moreover, we observe that for sparse storage scenarios, the higher
improvements with z-relay zebroids are obtained at the cost of a
higher replacement overhead when compared to the one-instantaneous
zebroids case. This is because in the former case, each of these z
zebroids selected along the lowest latency path to the client needs
to perform a replacement. However, the replacement overhead is
still less than 1 indicating that on an average less than one
replacement per client request is needed even when z-relay zebroids are
employed.
Figure 5: Average number of replacements per request with one-instantaneous and z-relay zebroids for the cases when N is varied keeping α = 2.
Figure 6: Aggregate availability latency $\delta_{agg}$ for different car densities (N = 50 and N = 200) with no-zebroids, one-instantaneous, and z-relay zebroids, as a function of the prediction accuracy metric, with α = 2 and T = 25.
6.4 What happens to the availability latency
with zebroids in scenarios with
inaccuracies in the car route predictions?
We find that zebroids continue to provide improvements in
availability latency even with lower accuracy in the car route
predictions. We use a single parameter p to quantify the accuracy of the
car route predictions.
6.4.1 Analysis
Since p represents the probability that a car route predicted by the dispatcher matches the actual one, the latency with zebroids can be approximated by
$\delta^{err}_{agg} = p \cdot \delta_{agg}(zeb) + (1-p) \cdot \delta_{agg}(no\text{-}zeb)$   (9)
$\delta^{err}_{agg} = p \cdot \delta_{agg}(zeb) + (1-p) \cdot \frac{C}{r_i}$   (10)
Expressions for $\delta_{agg}(zeb)$ can be obtained from Equation 4 (one-instantaneous zebroids) or Equation 5 (z-relay zebroids).
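To make Equation 10 concrete, the following Python sketch mixes the zebroid latency and the no-zebroid latency according to the prediction accuracy p. It is only an illustration of the formula; all numeric values are made-up examples, not figures from the simulations.
```python
# Sketch of Equation 10: expected availability latency for item i when the
# dispatcher's route predictions are only correct with probability p.
# The constants below are illustrative assumptions.

def latency_with_prediction_error(p, delta_zeb, C, r_i):
    """Mix the zebroid latency (prediction correct, probability p) with the
    no-zebroid latency C / r_i (prediction wrong, probability 1 - p)."""
    return p * delta_zeb + (1.0 - p) * (C / r_i)

if __name__ == "__main__":
    C = 25.0          # stands in for c * G * log G
    r_i = 4           # assumed number of replicas of item i
    delta_zeb = 2.0   # assumed latency when zebroids rendezvous as scheduled
    for p in (1.0, 0.8, 0.5, 0.2):
        print(p, latency_with_prediction_error(p, delta_zeb, C, r_i))
```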
6.4.2 Simulation
Figure 6 shows the variation in δagg as a function of this route
prediction accuracy metric. We observe a smooth reduction in the
improvement in δagg as the prediction accuracy metric reduces. For
zebroids that are scheduled but fail to rendezvous with the client
due to the prediction error, we tag any such replacements made by
the zebroids as failed. It is seen that failed replacements gradually
increase as the prediction accuracy reduces.
6.5 Under what conditions are the
improvements in availability latency with zebroids
maximized?
Surprisingly, we find that the improvements in latency obtained
with one-instantaneous zebroids are independent of the input
distribution of the popularity of the data items.
6.5.1 Analysis
The fractional difference (labelled ω) in the latency between the
no-zebroids and one-instantaneous zebroids is obtained from
equations 2, 3, and 4 as
$$\omega = \frac{\sum_{i=1}^{T}\frac{f_i \cdot C}{r_i} \;-\; \sum_{i=1}^{T}\frac{f_i \cdot C}{r_i + (N-r_i)\cdot\left(1-\left(1-\frac{1}{G}\right)^{r_i}\right)}}{\sum_{i=1}^{T}\frac{f_i \cdot C}{r_i}} \qquad (11)$$
Here C = c·G·log G. This captures the fractional improvement
in the availability latency obtained by employing one-instantaneous
zebroids. Let α = 1, making the total storage in the system ST = N. Assuming the initial replica distribution is as per the square-root replication scheme, we have $r_i = \frac{\sqrt{f_i}\cdot N}{\sum_{j=1}^{T}\sqrt{f_j}}$. Hence, we get $f_i = \frac{K^2 \cdot r_i^2}{N^2}$, where $K = \sum_{j=1}^{T}\sqrt{f_j}$. Using this, along with the approximation $(1-x)^n \approx 1 - n \cdot x$ for small $x$, we simplify the above equation to get
$$\omega = 1 - \frac{\sum_{i=1}^{T}\frac{r_i}{1+\frac{N-r_i}{G}}}{\sum_{i=1}^{T} r_i}$$
In order to determine when the gains with one-instantaneous
zebroids are maximized, we can frame an optimization problem as
follows: Maximize $\omega$, subject to $\sum_{i=1}^{T} r_i = S_T$.
THEOREM 1. With a square-root replication scheme,
improvements obtained with one-instantaneous zebroids are independent
of the input popularity distribution of the data items. (See [9] for
proof)
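As a rough numerical illustration of this claim (this is not the authors' simulator), the Python sketch below evaluates Equation 11 under square-root replication for a uniform and a Zipfian popularity distribution. N, T, G, and the Zipf exponent are assumed values, replica counts are kept real-valued for simplicity, and C cancels out of ω; for parameters like these the two ω values come out nearly identical, consistent with Theorem 1.
```python
# Illustrative check of Theorem 1 via Equation 11 (assumed parameters,
# real-valued replica counts, C set to 1 since it cancels in omega).
import math

def omega(freqs, N, G, S_T):
    norm = sum(math.sqrt(f) for f in freqs)
    r = [math.sqrt(f) * S_T / norm for f in freqs]      # square-root replication
    no_zeb = sum(f / ri for f, ri in zip(freqs, r))     # sum of f_i*C/r_i
    one_inst = sum(f / (ri + (N - ri) * (1.0 - (1.0 - 1.0 / G) ** ri))
                   for f, ri in zip(freqs, r))
    return (no_zeb - one_inst) / no_zeb

N, T, G = 100, 25, 1000          # cars, titles, cells in the mobility model (assumed)
S_T = N                          # alpha = 1, so total storage equals N
uniform = [1.0 / T] * T
raw = [1.0 / i ** 0.73 for i in range(1, T + 1)]        # assumed Zipf exponent
zipf = [z / sum(raw) for z in raw]
print("omega, uniform popularity:", omega(uniform, N, G, S_T))
print("omega, Zipfian popularity:", omega(zipf, N, G, S_T))
```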
6.5.2 Simulation
We perform simulations with two different frequency
distributions of data items: Uniform and Zipfian (with mean = 0.27).
Similar latency improvements with one-instantaneous zebroids are
obtained in both cases. This result has important implications. In
cases with biased popularity toward certain data items, the
aggregate improvements in latency across all data item requests still
remain the same. Even in scenarios where the frequency of access
to the data items changes dynamically, zebroids will continue to
provide similar latency improvements.
7. CONCLUSIONS AND
FUTURE RESEARCH DIRECTIONS
In this study, we examined the improvements in latency that can
be obtained in the presence of carriers that deliver a data item from
a server to a client. We quantified the variation in availability
latency as a function of a rich set of parameters such as car density,
storage per car, title database size, and replacement policies
employed by zebroids.
Below we summarize some key future research directions we
intend to pursue. To better reflect reality we would like to validate the
observations obtained from this study with some real world
simulation traces of vehicular movements (for example using
CORSIM [1]). This will also serve as a validation for the utility of the
Markov mobility model used in this study. We are currently
analyzing the performance of zebroids on a real world data set
comprising an ad-hoc network of buses moving around a small
neighborhood in Amherst [4]. Zebroids may also be used for delivery
of data items that carry delay sensitive information with a certain
expiry. Extensions to zebroids that satisfy such application
requirements present an interesting future research direction.
8. ACKNOWLEDGMENTS
This research was supported in part by an Annenberg fellowship and NSF
grants numbered CNS-0435505 (NeTS NOSS), CNS-0347621 (CAREER),
and IIS-0307908.
9. REFERENCES
[1] Federal Highway Administration. Corridor simulation. Version 5.1,
http://www.ops.fhwa.dot.gov/trafficanalysistools/corsim.htm.
[2] D. Aldous and J. Fill. Reversible markov chains and random walks
on graphs. Under preparation.
[3] A. Bar-Noy, I. Kessler, and M. Sidi. Mobile Users: To Update or Not
to Update. In IEEE Infocom, 1994.
[4] J. Burgess, B. Gallagher, D. Jensen, and B. Levine. MaxProp:
Routing for Vehicle-Based Disruption-Tolerant Networking. In IEEE
Infocom, April 2006.
[5] E. Cohen and S. Shenker. Replication Strategies in Unstructured
Peer-to-Peer Networks. In SIGCOMM, 2002.
[6] A. Dan, D. Dias, R. Mukherjee, D. Sitaram, and R. Tewari. Buffering
and Caching in Large-Scale Video Servers. In COMPCON, 1995.
[7] S. Ghandeharizadeh, S. Kapadia, and B. Krishnamachari. PAVAN: A
Policy Framework for Content Availability in Vehicular ad-hoc
Networks. In VANET, New York, NY, USA, 2004. ACM Press.
[8] S. Ghandeharizadeh, S. Kapadia, and B. Krishnamachari.
Comparison of Replication Strategies for Content Availability in
C2P2 networks. In MDM, May 2005.
[9] S. Ghandeharizadeh, S. Kapadia, and B. Krishnamachari. An
Evaluation of Availability Latency in Carrier-based Vehicular ad-hoc
Networks. Technical report, Department of Computer Science,
University of Southern California,CENG-2006-1, 2006.
[10] S. Ghandeharizadeh and B. Krishnamachari. C2p2: A peer-to-peer
network for on-demand automobile information services. In Globe.
IEEE, 2004.
[11] T. Hara. Effective Replica Allocation in ad-hoc Networks for
Improving Data Accessibility. In IEEE Infocom, 2001.
[12] H. Hayashi, T. Hara, and S. Nishio. A Replica Allocation Method
Adapting to Topology Changes in ad-hoc Networks. In DEXA, 2005.
[13] P. Juang, H. Oki, Y. Wang, M. Martonosi, L. Peh, and D. Rubenstein.
Energy-efficient computing for wildlife tracking: design tradeoffs
and early experiences with ZebraNet. SIGARCH Comput. Archit.
News, 2002.
[14] A. Pentland, R. Fletcher, and A. Hasson. DakNet: Rethinking
Connectivity in Developing Nations. Computer, 37(1):78-83, 2004.
[15] F. Sailhan and V. Issarny. Cooperative Caching in ad-hoc Networks.
In MDM, 2003.
[16] R. Shah, S. Roy, S. Jain, and W. Brunette. Data mules: Modeling and
analysis of a three-tier architecture for sparse sensor networks.
Elsevier ad-hoc Networks Journal, 1, September 2003.
[17] T. Spyropoulos, K. Psounis, and C. Raghavendra. Single-Copy
Routing in Intermittently Connected Mobile Networks. In SECON,
April 2004.
[18] A. Tanenbaum. Modern Operating Systems, 2nd Edition, Chapter 4,
Section 4.4. Prentice Hall, 2001.
[19] A. Vahdat and D. Becker. Epidemic routing for partially-connected
ad-hoc networks. Technical report, Department of Computer Science,
Duke University, 2000.
[20] W. Zhao, M. Ammar, and E. Zegura. A message ferrying approach
for data delivery in sparse mobile ad-hoc networks. In MobiHoc,
pages 187-198, New York, NY, USA, 2004. ACM Press.
[21] M. Zonoozi and P. Dassanayake. User Mobility Modeling and
Characterization of Mobility Pattern. IEEE Journal on Selected
Areas in Communications, 15:1239-1252, September 1997.
82 | naive random replacement policy;zebroid;vehicular network;termed zebroid;audio and video clip;mobility;car density;storage per device;datum carrier;latency;simplified instantiation of zebroid;automaton;availability latency;repository size;markov model;zebroid simplified instantiation;mobile device;peer-to-peer vehicular ad-hoc network;replacement policy |
train_C-69 | pTHINC: A Thin-Client Architecture for Mobile Wireless Web | Although web applications are gaining popularity on mobile wireless PDAs, web browsers on these systems can be quite slow and often lack adequate functionality to access many web sites. We have developed pTHINC, a PDA thinclient solution that leverages more powerful servers to run full-function web browsers and other application logic, then sends simple screen updates to the PDA for display. pTHINC uses server-side screen scaling to provide high-fidelity display and seamless mobility across a broad range of different clients and screen sizes, including both portrait and landscape viewing modes. pTHINC also leverages existing PDA control buttons to improve system usability and maximize available screen resolution for application display. We have implemented pTHINC on Windows Mobile and evaluated its performance on mobile wireless devices. Our results compared to local PDA web browsers and other thin-client approaches demonstrate that pTHINC provides superior web browsing performance and is the only PDA thin client that effectively supports crucial browser helper applications such as video playback. | 1. INTRODUCTION
The increasing ubiquity of wireless networks and
decreasing cost of hardware is fueling a proliferation of mobile
wireless handheld devices, both as standalone wireless Personal
Digital Assistants (PDA) and popular integrated PDA/cell
phone devices. These devices are enabling new forms of
mobile computing and communication. Service providers are
leveraging these devices to deliver pervasive web access, and
mobile web users already often use these devices to access
web-enabled information such as news, email, and localized
travel guides and maps. It is likely that within a few years,
most of the devices accessing the web will be mobile.
Users typically access web content by running a web browser
and associated helper applications locally on the PDA.
Although native web browsers exist for PDAs, they deliver
subpar performance and have a much smaller feature set
and more limited functionality than their desktop
computing counterparts [10]. As a result, PDA web browsers are
often not able to display web content from web sites that
leverage more advanced web technologies to deliver a richer web
experience. This fundamental problem arises for two
reasons. First, because PDAs have a completely different
hardware/software environment from traditional desktop
computers, web applications need to be rewritten and customized
for PDAs if at all possible, duplicating development costs.
Because the desktop application market is larger and more
mature, most development effort generally ends up being
spent on desktop applications, resulting in greater
functionality and performance than their PDA counterparts.
Second, PDAs have a more resource constrained environment
than traditional desktop computers to provide a smaller
form factor and longer battery life. Desktop web browsers
are large, complex applications that are unable to run on a
PDA. Instead, developers are forced to significantly strip
down these web browsers to provide a usable PDA web
browser, thereby crippling PDA browser functionality.
Thin-client computing provides an alternative approach
for enabling pervasive web access from handheld devices.
A thin-client computing system consists of a server and a
client that communicate over a network using a remote
display protocol. The protocol enables graphical displays to be
virtualized and served across a network to a client device,
while application logic is executed on the server. Using the
remote display protocol, the client transmits user input to
the server, and the server returns screen updates of the
applications from the server to the client. Using a thin-client
model for mobile handheld devices, PDAs can become
simple stateless clients that leverage the remote server
capabilities to execute web browsers and other helper applications.
The thin-client model provides several important
benefits for mobile wireless web. First, standard desktop web
applications can be used to deliver web content to PDAs
without rewriting or adapting applications to execute on
a PDA, reducing development costs and leveraging existing
software investments. Second, complex web applications can
be executed on powerful servers instead of running stripped
down versions on more resource constrained PDAs,
providing greater functionality and better performance [10]. Third,
web applications can take advantage of servers with faster
networks and better connectivity, further boosting
application performance. Fourth, PDAs can be even simpler
devices since they do not need to perform complex application
logic, potentially reducing energy consumption and
extending battery life. Finally, PDA thin clients can be essentially
stateless appliances that do not need to be backed up or
restored, require almost no maintenance or upgrades, and do
not store any sensitive data that can be lost or stolen. This
model provides a viable avenue for medical organizations to
comply with HIPAA regulations [6] while embracing mobile
handhelds in their day to day operations.
Despite these potential advantages, thin clients have been
unable to provide the full range of these benefits in delivering
web applications to mobile handheld devices. Existing thin
clients were not designed for PDAs and do not account for
important usability issues in the context of small form factor
devices, resulting in difficulty in navigating displayed web
content. Furthermore, existing thin clients are ineffective at
providing seamless mobility across the heterogeneous mix
of device display sizes and resolutions. While existing thin
clients can already provide faster performance than native
PDA web browsers in delivering HTML web content, they
do not effectively support more display-intensive web helper
applications such as multimedia video, which is increasingly
an integral part of available web content.
To harness the full potential of thin-client computing in
providing mobile wireless web on PDAs, we have developed
pTHINC (PDA THin-client InterNet Computing). pTHINC
builds on our previous work on THINC [1] to provide a
thin-client architecture for mobile handheld devices. pTHINC
virtualizes and resizes the display on the server to efficiently
deliver high-fidelity screen updates to a broad range of
different clients, screen sizes, and screen orientations, including
both portrait and landscape viewing modes. This enables
pTHINC to provide the same persistent web session across
different client devices. For example, pTHINC can provide
the same web browsing session appropriately scaled for
display on a desktop computer and a PDA so that the same
cookies, bookmarks, and other meta-data are continuously
available on both machines simultaneously. pTHINC"s
virtual display approach leverages semantic information
available in display commands, and client-side video hardware to
provide more efficient remote display mechanisms that are
crucial for supporting more display-intensive web
applications. Given limited display resolution on PDAs, pTHINC
maximizes the use of screen real estate for remote display
by moving control functionality from the screen to readily
available PDA control buttons, improving system usability.
We have implemented pTHINC on Windows Mobile and
demonstrated that it works transparently with existing
applications, window systems, and operating systems, and does
not require modifying, recompiling, or relinking existing
software. We have quantitatively evaluated pTHINC against
local PDA web browsers and other thin-client approaches on
Pocket PC devices. Our experimental results demonstrate
that pTHINC provides superior web browsing performance
and is the only PDA thin client that effectively supports
crucial browser helper applications such as video playback.
This paper presents the design and implementation of
pTHINC. Section 2 describes the overall usage model and
usability characteristics of pTHINC. Section 3 presents the
design and system architecture of pTHINC. Section 4 presents
experimental results measuring the performance of pTHINC
on web applications and comparing it against native PDA
browsers and other popular PDA thin-client systems.
Section 5 discusses related work. Finally, we present some
concluding remarks.
2. PTHINC USAGE MODEL
pTHINC is a thin-client system that consists of a simple
client viewer application that runs on the PDA and a server
that runs on a commodity PC. The server leverages more
powerful PCs to run web browsers and other application
logic. The client takes user input from the PDA stylus and
virtual keyboard and sends them to the server to pass to
the applications. Screen updates are then sent back from
the server to the client for display to the user.
When the pTHINC PDA client is started, the user is
presented with a simple graphical interface where information
such as server address and port, user authentication
information, and session settings can be provided. pTHINC first
attempts to connect to the server and perform the
necessary handshaking. Once this process has been completed,
pTHINC presents the user with the most recent display of
his session. If the session does not exist, a new session is
created. Existing sessions can be seamlessly continued without
changes in the session setting or server configuration.
Unlike other thin-client systems, pTHINC provides a user
with a persistent web session model in which a user can
launch a session running a web browser and associated
applications at the server, then disconnect from that session
and reconnect to it again anytime. When a user reconnects
to the session, all of the applications continue running where
the user left off, so that the user can continue working as
though he or she never disconnected. The ability to
disconnect and reconnect to a session at anytime is an important
benefit for mobile wireless PDA users which may have
intermittent network connectivity.
pTHINC's persistent web session model enables a user to
reconnect to a web session from devices other than the one
on which the web session was originally initiated. This
provides users with seamless mobility across different devices.
If a user loses his PDA, he can easily use another PDA to
access his web session. Furthermore, pTHINC allows users
to use non-PDA devices to access web sessions as well. A
user can access the same persistent web session on a
desktop PC as on a PDA, enabling a user to use the same web
session from any computer.
pTHINC's persistent web session model addresses a key
problem encountered by mobile web users, the lack of a
common web environment across computers. Web browsers
often store important information such as bookmarks, cookies,
and history, which enable them to function in a much more
useful manner. The problem that occurs when a user moves
between computers is that this data, which is specific to a
web browser installation, cannot move with the user.
Furthermore, web browsers often need helper applications to
process different media content, and those applications may
not be consistently available across all computers. pTHINC
addresses this problem by enabling a user to remotely use
the exact same web browser environment and helper
applications from any computer. As a result, pTHINC can
provide a common, consistent web browsing environment for
mobile users across different devices without requiring them
to attempt to repeatedly synchronize different web browsing
environments across multiple machines.
To enable a user to access the same web session on
different devices, pTHINC must provide mechanisms to
support different display sizes and resolutions. Toward this end,
pTHINC provides a zoom feature that enables a user to
zoom in and out of a display and allows the display of a web session to be resized to fit the screen of the device being used.
Figure 1: pTHINC shortcut keys
For example, if the server is running a web session at
1024×768 but the client is a PDA with a display resolution
of 640×480, pTHINC will resize the desktop display to fit
the full display in the smaller screen of the PDA. pTHINC
provides the PDA user with the option to increase the size
of the display by zooming in to different parts of the display.
Users are often familiar with the general layout of commonly
visited websites, and are able to leverage this resizing
feature to better navigate through web pages. For example,
a user can zoom out of the display to view the entire page
content and navigate hyperlinks, then zoom in to a region
of interest for a better view.
To enable a user to access the same web session on
different devices, pTHINC must also provide mechanisms to
support different display orientations. In a desktop
environment, users are typically accustomed to having displays
presented in landscape mode where the screen width is larger
than its height. However, in a PDA environment, the choice
is not always obvious. Some users may prefer having the
display in portrait mode, as it is easier to hold the device
in their hands, while others may prefer landscape mode in
order to minimize the amount of side-scrolling necessary
to view a web page. To accommodate PDA user
preferences, pTHINC provides an orientation feature that enables
it to seamlessly rotate the display between landscape and
portrait mode. The landscape mode is particularly useful for
pTHINC users who frequently access their web sessions on
both desktop and PDA devices, providing those users with
the same familiar landscape setting across different devices.
Because screen space is a relatively scarce resource on
PDAs, pTHINC runs in fullscreen mode to maximize the
screen area available to display the web session. To be able
to use all of the screen on the PDA and still allow the user
to control and interact with it, pTHINC reuses the typical
shortcut buttons found on PDAs to perform all the
control functions available to the user. The buttons used by
pTHINC do not require any OS environment changes; they
are simply intercepted by the pTHINC client application
when they are pressed. Figure 1 shows how pTHINC
utilizes the shortcut buttons to provide easy navigation and
improve the overall user experience. These buttons are not
device specific, and the layout shown is common to widely used PocketPC devices. pTHINC provides six shortcuts to support its usage model (a sketch of the button-to-action dispatch follows the list):
• Rotate Screen: The record button on the left edge is
used to rotate the screen between portrait and
landscape mode each time the button is pressed.
• Zoom Out: The leftmost button on the bottom front
is used to zoom out the display of the web session
providing a bird's eye view of the web session.
• Zoom In: The second leftmost button on the bottom
front is used to zoom in the display of the web session
to more clearly view content of interest.
• Directional Scroll: The middle button on the bottom
front is used to scroll around the display using a single
control button in a way that is already familiar to PDA
users. This feature is particularly useful when the user
has zoomed in to a region of the display such that only
part of the display is visible on the screen.
• Show/Hide Keyboard: The second rightmost button on
the bottom front is used to bring up a virtual keyboard
drawn on the screen for devices which have no physical
keyboard. The virtual keyboard uses standard PDA
OS mechanisms, providing portability across different
PDA environments.
• Close Session: The rightmost button on the bottom
front is used to disconnect from the pTHINC session.
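The sketch below illustrates one way the client could dispatch these hardware buttons to the six operations; the key codes and handler names are hypothetical and are not taken from the pTHINC implementation.
```python
# Hypothetical client-side dispatch from PDA hardware buttons to the six
# pTHINC shortcuts described above. Key codes and method names are assumed.
ACTIONS = {
    "KEY_RECORD": "rotate_screen",       # toggle portrait/landscape
    "KEY_APP1":   "zoom_out",            # bird's-eye view of the session
    "KEY_APP2":   "zoom_in",             # magnify the region of interest
    "KEY_NAV":    "directional_scroll",  # pan when zoomed in
    "KEY_APP3":   "toggle_keyboard",     # show/hide the virtual keyboard
    "KEY_APP4":   "close_session",       # disconnect from the server
}

def on_button(client, key_code):
    """Intercept a button press and invoke the matching client operation."""
    action = ACTIONS.get(key_code)
    if action is not None:
        getattr(client, action)()        # e.g. client.zoom_in()
```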
pTHINC uses the PDA touch screen, stylus, and standard
user interface mechanisms to provide a user interface point-and-click metaphor similar to that provided by the mouse
in a traditional desktop computing environment. pTHINC
does not use a cursor since PDA environments do not
provide one. Instead, a user can use the stylus to tap on
different sections of the touch screen to indicate input focus. A
single tap on the touch screen generates a corresponding
single click mouse event. A double tap on the touch screen
generates a corresponding double click mouse event. pTHINC
provides two-button mouse emulation by using the stylus to
press down on the screen for one second to generate a right
mouse click. All of these actions are identical to the way
users already interact with PDA applications in the common
PocketPC environment. In web browsing, users can click on
hyperlinks and focus on input boxes by simply tapping on
the desired screen area of interest. Unlike local PDA web
browsers and other PDA applications, pTHINC leverages
more powerful desktop user interface metaphors to enable
users to manipulate multiple open application windows
instead of being limited to a single application window at any
given moment. This provides increased browsing flexibility
beyond what is currently available on PDA devices. Similar
to a desktop environment, browser windows and other
application windows can be moved around by pressing down
and dragging the stylus similar to a mouse.
3. PTHINC SYSTEM ARCHITECTURE
pTHINC builds on the THINC [1] remote display
architecture to provide a thin-client system for PDAs. pTHINC
virtualizes the display at the server by leveraging the video
device abstraction layer, which sits below the window server
and above the framebuffer. This is a well-defined, low-level,
device-dependent layer that exposes the video hardware to
the display system. pTHINC accomplishes this through a
simple virtual display driver that intercepts drawing
commands, packetizes, and sends them over the network.
145
While other thin-client approaches intercept display
commands at other layers of the display subsystem, pTHINC's
display virtualization approach provides some key benefits
in efficiently supporting PDA clients. For example,
intercepting display commands at a higher layer between
applications and the window system as is done by X [17] requires
replicating and running a great deal of functionality on the
PDA that is traditionally provided by the desktop window
system. Given both the size and complexity of traditional
window systems, attempting to replicate this functionality
in the restricted PDA environment would have proven to
be a daunting, and perhaps unfeasible task. Furthermore,
applications and the window system often require tight
synchronization in their operation and imposing a wireless
network between them by running the applications on the server
and the window system on the client would significantly
degrade performance. On the other hand, intercepting at a
lower layer by extracting pixels out of the framebuffer as
they are rendered provides a simple solution that requires
very little functionality on the PDA client, but can also
result in degraded performance. The reason is that by the
time the remote display server attempts to send screen
updates, it has lost all semantic information that may have
helped it encode efficiently, and it must resort to using a
generic and expensive encoding mechanism on the server,
as well as a potentially expensive decoding mechanism on
the limited PDA client. In contrast to both the high and
low level interception approaches, pTHINC's approach of
intercepting at the device driver provides an effective
balance between client and server simplicity, and the ability to
efficiently encode and decode screen updates.
By using a low-level virtual display approach, pTHINC
can efficiently encode application display commands using
only a small set of low-level commands. In a PDA
environment, this set of commands provides a crucial component
in maintaining the simplicity of the client in the
resource-constrained PDA environment. The display commands are
shown in Table 1, and work as follows. COPY instructs the
client to copy a region of the screen from its local framebuffer
to another location. This command improves the user
experience by accelerating scrolling and opaque window
movement without having to resend screen data from the server.
SFILL, PFILL, and BITMAP are commands that paint a
fixed-size region on the screen. They are useful for
accelerating the display of solid window backgrounds, desktop
patterns, backgrounds of web pages, text drawing, and
certain operations in graphics manipulation programs. SFILL
fills a sizable region on the screen with a single color. PFILL
replicates a tile over a screen region. BITMAP performs a
fill using a bitmap of ones and zeros as a stipple to apply
a foreground and background color. Finally, RAW is used
to transmit unencoded pixel data to be displayed verbatim
on a region of the screen. This command is invoked as a
last resort if the server is unable to employ any other
command, and it is the only command that may be compressed
to mitigate its impact on network bandwidth.
pTHINC delivers its commands using a non-blocking,
server-push update mechanism, where as soon as display updates
are generated on the server, they are sent to the client.
Clients are not required to explicitly request display
updates, thus minimizing the impact that the typical
varying network latency of wireless links may have on the
responsiveness of the system.
Command Description
COPY Copy a frame buffer area to specified coordinates
SFILL Fill an area with a given pixel color value
PFILL Tile an area with a given pixel pattern
BITMAP Fill a region using a bit pattern
RAW Display raw pixel data at a given location
Table 1: pTHINC Protocol Display Commands
Keeping in mind that resource-constrained PDAs and wireless networks may not be able
to keep up with a fast server generating a large number of
updates, pTHINC is able to coalesce, clip, and discard
updates automatically if network loss or congestion occurs, or
the client cannot keep up with the rate of updates. This
type of behavior proves crucial in a web browsing
environment, where for example, a page may be redrawn multiple
times as it is rendered on the fly by the browser. In this
case, the PDA will only receive and render the final result,
which clearly is all the user is interested in seeing.
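As a rough illustration of how a client might apply the five commands of Table 1 to its local framebuffer, the sketch below operates on NumPy arrays; the message fields and their names are assumptions, not the actual pTHINC wire format.
```python
# Illustrative only: applying Table 1 commands to a framebuffer held as an
# H x W x 3 NumPy array. Field names are assumed, not the real wire format.
import numpy as np

def apply_command(fb, cmd):
    x, y, w, h = cmd["x"], cmd["y"], cmd["w"], cmd["h"]
    if cmd["op"] == "COPY":                       # move an on-screen region
        sx, sy = cmd["src_x"], cmd["src_y"]
        fb[y:y+h, x:x+w] = fb[sy:sy+h, sx:sx+w].copy()
    elif cmd["op"] == "SFILL":                    # solid color fill
        fb[y:y+h, x:x+w] = cmd["color"]
    elif cmd["op"] == "PFILL":                    # tile a small pattern
        tile = cmd["tile"]                        # th x tw x 3 array
        reps = (-(-h // tile.shape[0]), -(-w // tile.shape[1]), 1)
        fb[y:y+h, x:x+w] = np.tile(tile, reps)[:h, :w]
    elif cmd["op"] == "BITMAP":                   # 1-bit stipple fill
        mask = cmd["bits"][:h, :w].astype(bool)   # h x w array of ones and zeros
        region = fb[y:y+h, x:x+w]
        region[mask] = cmd["fg"]
        region[~mask] = cmd["bg"]
    elif cmd["op"] == "RAW":                      # verbatim pixel data
        fb[y:y+h, x:x+w] = cmd["pixels"]
```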
pTHINC prioritizes the delivery of updates to the PDA
using a Shortest-Remaining-Size-First (SRSF) preemptive
update scheduler. SRSF is analogous to
Shortest-Remaining-Processing-Time scheduling, which is known to be optimal
for minimizing mean response time in an interactive system.
In a web browsing environment, short jobs are associated
with text and basic page layout components such as the
page's background, which are critical web content for the
user. On the other hand, large jobs are often lower priority
beautifying elements, or, even worse, web page banners
and advertisements, which are of questionable value to the
user as he or she is browsing the page. Using SRSF, pTHINC
is able to maximize the utilization of the relatively scarce
bandwidth available on the wireless connection between the
PDA and the server.
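A minimal, generic sketch of a Shortest-Remaining-Size-First send queue is shown below; it illustrates the scheduling policy only and is not pTHINC's actual scheduler.
```python
# Generic SRSF sketch: always transmit a chunk of the pending update with the
# fewest bytes remaining, so small, high-value updates preempt large ones.
import heapq
import itertools

class SRSFQueue:
    def __init__(self):
        self._heap = []                    # entries: [remaining_bytes, seq, update]
        self._seq = itertools.count()      # tie-breaker for equal sizes

    def add(self, update, size_bytes):
        heapq.heappush(self._heap, [size_bytes, next(self._seq), update])

    def send_some(self, send_fn, chunk=1400):
        """Send one chunk of the smallest remaining update; returns False if idle."""
        if not self._heap:
            return False
        entry = self._heap[0]              # smallest remaining size sits at the root
        sent = send_fn(entry[2], min(chunk, entry[0]))
        entry[0] -= sent                   # shrinking the root keeps the heap valid
        if entry[0] <= 0:
            heapq.heappop(self._heap)
        return True
```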
3.1 Display Management
To enable users to just as easily access their web browser
and helper applications from a desktop computer at home
as from a PDA while on the road, pTHINC provides a
resize mechanism to zoom in and out of the display of a web
session. pTHINC resizing is completely supported by the
server, not the client. The server resamples updates to fit
within the PDA's viewport before they are transmitted over the network. pTHINC uses Fant's resampling algorithm to resize pixel updates. This provides smooth, visually pleasing updates with proper antialiasing and has only modest
computational requirements.
pTHINC's resizing approach has a number of advantages.
First, it allows the PDA to leverage the vastly superior
computational power of the server to use high quality resampling
algorithms and produce higher quality updates for the PDA
to display. Second, resizing the screen does not translate into
additional resource requirements for the PDA, since it does
not need to perform any additional work. Finally, better
utilization of the wireless network is attained since rescaling
the updates reduces their bandwidth requirements.
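To make the server-side scaling concrete, here is a minimal sketch that shrinks an update region before transmission. For brevity it uses crude nearest-neighbour sampling, whereas the actual system uses Fant's resampling algorithm to produce properly antialiased output; the resolutions are assumed example values.
```python
# Server-side sketch: shrink a pixel update so the client only receives data
# at its own viewport scale (nearest-neighbour stand-in for Fant resampling).
import numpy as np

def downscale_update(update, server_w=1024, client_w=480):
    """update: h x w x 3 uint8 region rendered at server resolution."""
    scale = client_w / float(server_w)
    h, w, _ = update.shape
    new_h, new_w = max(1, int(h * scale)), max(1, int(w * scale))
    ys = (np.arange(new_h) / scale).astype(int)   # source row per output row
    xs = (np.arange(new_w) / scale).astype(int)   # source column per output column
    return update[ys][:, xs]
```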
To enable users to orient their displays on a PDA to
provide a viewing experience that best accommodates user
preferences and the layout of web pages or applications,
pTHINC provides a display rotation mechanism to switch
between landscape and portrait viewing modes. pTHINC
display rotation is completely supported by the client, not
the server. pTHINC does not explicitly recalculate the
geometry of display updates to perform rotation, which would
be computationally expensive. Instead, pTHINC simply
changes the way data is copied into the framebuffer to switch
between display modes. When in portrait mode, data is
copied along the rows of the framebuffer from left to right.
When in landscape mode, data is copied along the columns
of the framebuffer from top to bottom. These very fast and
simple techniques replace one set of copy operations with
another and impose no performance overhead. pTHINC
provides its own rotation mechanism to support a wide range of
devices without imposing additional feature requirements on
the PDA. Although some newer PDA devices provide native
support for different orientations, this mechanism is not
dynamic and requires the user to rotate the PDA's entire user
interface before starting the pTHINC client. Windows
Mobile provides native API mechanisms for PDA applications
to rotate their UI on the fly, but these mechanisms deliver
poor performance and display quality as the rotation is
performed naively and is not completely accurate.
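The copy-order trick can be sketched as follows for the full-screen case: the same update pixels are written row-wise for portrait mode and column-wise (equivalent to a 90-degree rotation) for landscape mode. The buffer layout is simplified relative to a real client.
```python
# Sketch of rotation by copy order only: no update geometry is recomputed.
import numpy as np

def present(framebuffer, update, landscape=False):
    """framebuffer: H x W x 3 in portrait mode, W x H x 3 in landscape mode."""
    if not landscape:
        framebuffer[:, :] = update                               # copy along rows
    else:
        # copying along columns instead stores a 90-degree-rotated image
        framebuffer[:, :] = update.transpose(1, 0, 2)[:, ::-1]
```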
3.2 Video Playback
Video has gradually become an integral part of the World
Wide Web, and its presence will only continue to increase.
Web sites today not only use animated graphics and flash
to deliver web content in an attractive manner, but also
utilize streaming video to enrich the web interface. Users are
able to view pre-recorded and live newscasts on CNN, watch
sports highlights on ESPN, and even search through large
collections of videos on Google Video. To allow applications
to provide efficient video playback, interfaces have been
created in display systems that allow video device drivers to
expose their hardware capabilities back to the applications.
pTHINC takes advantage of these interfaces and its virtual
device driver approach to provide a virtual bridge between
the remote client and its hardware and the applications, and
transparently support video playback.
On top of this architecture, pTHINC uses the YUV
colorspace to encode the video content, which provides a
number of benefits. First, it has become increasingly common
for PDA video hardware to natively support YUV and be
able to perform the colorspace conversion and scaling
automatically. As a result, pTHINC is able to provide fullscreen
video playback without any performance hits. Second, the
use of YUV allows for a more efficient representation of RGB
data without loss of quality, by taking advantage of the
human eye's ability to better distinguish differences in
brightness than in color. In particular, pTHINC uses the YV12
format, which allows full color RGB data to be encoded
using just 12 bits per pixel. Third, YUV data is produced
as one of the last steps of the decoding process of most
video codecs, allowing pTHINC to provide video playback
in a manner that is format independent. Finally, even if the
PDA's video hardware is unable to accelerate playback, the
colorspace conversion process is simple enough that it does
not impose unreasonable requirements on the PDA.
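The YV12 layout mentioned above is a full-resolution Y (luma) plane followed by quarter-resolution V and U (chroma) planes, which is where the 12 bits per pixel figure comes from. The sketch below shows the size arithmetic and a common BT.601-style integer approximation of the per-pixel conversion; it illustrates how simple the math is rather than reproducing the exact conversion the hardware performs.
```python
# YV12: W x H luma (Y) plane plus two (W/2) x (H/2) chroma planes (V, U),
# i.e. 8 + 2 + 2 = 12 bits per displayed pixel.
def yv12_size(width, height):
    return width * height + 2 * (width // 2) * (height // 2)

def yuv_to_rgb(y, u, v):
    """Approximate BT.601 conversion for one pixel (all values 0-255)."""
    c, d, e = y - 16, u - 128, v - 128
    r = (298 * c + 409 * e + 128) >> 8
    g = (298 * c - 100 * d - 208 * e + 128) >> 8
    b = (298 * c + 516 * d + 128) >> 8
    return tuple(max(0, min(255, val)) for val in (r, g, b))

# Example: one 352x240 frame (the benchmark clip's resolution) takes
# 126,720 bytes in YV12 versus 253,440 bytes as 24-bit RGB.
print(yv12_size(352, 240))
```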
A more concrete example of how pTHINC leverages the
PDA video hardware to support video playback can be seen
in our prototype implementation on the popular Dell Axim
X51v PDA, which is equipped with the Intel 2700G
multimedia accelerator. In this case, pTHINC creates an
offscreen buffer in video memory and reads and writes data in the YV12 format to this memory region. When a new
video frame arrives, video data is copied from the buffer to an overlay surface in video memory, which is independent of the normal surface used for traditional drawing.
Figure 2: Experimental Testbed
As the
YV12 data is put onto the overlay, the Intel accelerator
automatically performs both colorspace conversion and scaling.
By using the overlay surface, pTHINC has no need to redraw
the screen once video playback is over since the overlapped
surface is unaffected. In addition, specific overlay regions
can be manipulated by leveraging the video hardware, for
example to perform hardware linear interpolation to smooth
out the frame and display it fullscreen, and to do automatic
rotation when the client runs in landscape mode.
4. EXPERIMENTAL RESULTS
We have implemented a pTHINC prototype that runs the
client on widely-used Windows Mobile-based Pocket PC
devices and the server on both Windows and Linux operating
systems. To demonstrate its effectiveness in supporting
mobile wireless web applications, we have measured its
performance on web applications. We present experimental results
on different PDA devices for two popular web applications,
browsing web pages and playing video content from the web.
We compared pTHINC against native web applications
running locally on the PDA to demonstrate the improvement
that pTHINC can provide over the traditional fat-client
approach. We also compared pTHINC against three of the
most widely used thin clients that can run on PDAs, Citrix MetaFrame XP [2], Microsoft Remote Desktop [3] and VNC (Virtual Network Computing) [16]. We follow common practice and refer to Citrix MetaFrame XP and Microsoft Remote
Desktop by their respective remote display protocols, ICA
(Independent Computing Architecture) and RDP (Remote
Desktop Protocol).
4.1 Experimental Testbed
We conducted our web experiments using two different
wireless Pocket PC PDAs in an isolated Wi-Fi network
testbed, as shown in Figure 2. The testbed consisted of two
PDA client devices, a packet monitor, a thin-client server,
and a web server. Except for the PDAs, all of the other
machines were IBM Netfinity 4500R servers with dual 933 MHz
Intel PIII CPUs and 512 MB RAM and were connected on
a switched 100 Mbps FastEthernet network. The web server
used was Apache 1.3.27, the network emulator was
NISTNet 2.0.12, and the packet monitor was Ethereal 0.10.9. The
PDA clients connected to the testbed through an 802.11b
Lucent Orinoco AP-2000 wireless access point. All experiments
using the wireless network were conducted within ten feet
of the access point, so we considered the amount of packet
loss to be negligible in our experiments.
Two Pocket PC PDAs were used to provide results across
both older, less powerful models and newer higher
performance models.
Client 1024×768 640×480 Depth Resize Clip
RDP no yes 8-bit no yes
VNC yes yes 16-bit no no
ICA yes yes 16-bit yes no
pTHINC yes yes 24-bit yes no
Table 2: Thin-client Testbed Configuration Setting
The older model was a Dell Axim X5 with
a 400 MHz Intel XScale PXA255 CPU and 64 MB RAM
running Windows Mobile 2003 and a Dell TrueMobile 1180
2.4 GHz CompactFlash card for wireless networking. The
newer model was a Dell Axim X51v with a 624 MHz Intel
XScale XPA270 CPU and 64 MB RAM running Windows
Mobile 5.0 and integrated 802.11b wireless networking. The
X51v has an Intel 2700G multimedia accelerator with 16MB
video memory. Both PDAs are capable of 16-bit color but
have different screen sizes and display resolutions. The X5
has a 3.5 inch diagonal screen with 240×320 resolution. The
X51v has a 3.7 inch diagonal screen with 480×640.
The four thin clients that we used support different
levels of display quality as summarized in Table 2. The RDP
client only supports a fixed 640×480 display resolution on
the server with 8-bit color depth, while other platforms
provide higher levels of display quality. To provide a fair
comparison across all platforms, we conducted our experiments
with thin-client sessions configured for two possible
resolutions, 1024×768 and 640×480. Both ICA and VNC were
configured to use the native PDA resolution of 16-bit color
depth. The current pTHINC prototype uses 24-bit color
directly and the client downsamples updates to the 16-bit color
depth available on the PDA. RDP was configured using only
8-bit color depth since it does not support any better color
depth. Since both pTHINC and ICA provide the ability to
view the display resized to fit the screen, we measured both
clients with and without the display resized to fit the PDA
screen. Each thin client was tested using landscape rather
than portrait mode when available. All systems running on the
X51v could run in landscape mode because the hardware
provides a landscape mode feature. However, the X5 does
not provide this functionality. Only pTHINC directly
supports landscape mode, so it was the only system that could
run in landscape mode on both the X5 and X51v.
To provide a fair comparison, we also standardized on
common hardware and operating systems whenever possible.
All of the systems used the Netfinity server as the thin-client
server. For the two systems designed for Windows servers,
ICA and RDP, we ran Windows 2003 Server on the server.
For the other systems which support X-based servers, VNC
and pTHINC, we ran the Debian Linux Unstable
distribution with the Linux 2.6.10 kernel on the server. We used the
latest thin-client server versions available on each platform
at the time of our experiments, namely Citrix MetaFrame
XP Server for Windows Feature Release 3, Microsoft
Remote Desktop built into Windows XP and Windows 2003
using RDP 5.2, and VNC 4.0.
4.2 Application Benchmarks
We used two web application benchmarks for our
experiments based on two common application scenarios, browsing
web pages and playing video content from the web. Since
many thin-client systems including two of the ones tested
are closed and proprietary, we measured their performance
in a noninvasive manner by capturing network traffic with
a packet monitor and using a variant of slow-motion
benchmarking [13] previously developed to measure thin-client
performance in PDA environments [10]. This measurement
methodology accounts for both the display decoupling that
can occur between client and server in thin-client systems
as well as client processing time, which may be significant
in the case of PDAs.
To measure web browsing performance, we used a web
browsing benchmark based on the Web Text Page Load Test
from the Ziff-Davis i-Bench benchmark suite [7]. The
benchmark consists of a JavaScript-controlled load of 55 pages from
the web server. The pages contain both text and
graphics with pages varying in size. The graphics are embedded
images in GIF and JPEG formats. The original i-Bench
benchmark was modified for slow-motion benchmarking by
introducing delays of several seconds between the pages
using JavaScript. Then two tests were run, one where delays were added between each page, and one where pages were loaded continuously without waiting for them to be
displayed on the client. In the first test, delays were
sufficiently adjusted in each case to ensure that each page could
be received and displayed on the client completely without
temporal overlap in transferring the data belonging to two
consecutive pages. We used the packet monitor to record
the packet traffic for each run of the benchmark, then used
the timestamps of the first and last packet in the trace to
obtain our latency measures [10]. The packet monitor also
recorded the amount of data transmitted between the client
and the server. The ratio between the data traffic in the two
tests yields a scale factor. This scale factor shows the loss of data between the server and the client due to the inability of the client to process the data quickly enough. The product
of the scale factor with the latency measurement produces
the true latency accounting for client processing time.
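The bookkeeping described above can be sketched as follows; the numbers are made up, and dividing by the 55 benchmark pages to report an average per-page value is just one plausible reduction of the trace measurements.
```python
# Illustrative slow-motion benchmarking arithmetic (made-up numbers).
def true_page_latency(bytes_with_delays, bytes_continuous,
                      measured_latency, pages=55):
    # Data lost in the continuous run makes the client look faster than it is,
    # so the measured latency is scaled back up by the data ratio.
    scale_factor = bytes_with_delays / float(bytes_continuous)
    return (scale_factor * measured_latency) / pages

print(true_page_latency(bytes_with_delays=5500000,
                        bytes_continuous=4400000,
                        measured_latency=30.0))   # seconds for the whole run
```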
To run the web browsing benchmark, we used Mozilla
Firefox 1.0.4 running on the thin-client server for the thin
clients, and Windows Internet Explorer (IE) Mobile for 2003
and Mobile for 5.0 for the native browsers on the X5 and
X51v PDAs, respectively. In all cases, the web browser used
was sized to fill the entire display region available.
To measure video playback performance, we used a video
benchmark that consisted of playing a 34.75s MPEG-1 video
clip containing a mix of news and entertainment
programming at full-screen resolution. The video clip is 5.11 MB and
consists of 834 352x240 pixel frames with an ideal frame rate
of 24 frames/sec. We measured video performance using
slow-motion benchmarking by monitoring resulting packet
traffic at two playback rates, 1 frame/second (fps) and 24
fps, and comparing the results to determine playback
delays and frame drops that occur at 24 fps to measure overall
video quality [13]. For example, 100% quality means that all
video frames were played at real-time speed. On the other
hand, 50% quality could mean that half the video data was
dropped, or that the clip took twice as long to play even
though all of the video data was displayed.
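One formula consistent with the two examples just given (though the exact bookkeeping in [13] may differ) multiplies the fraction of video data actually delivered by the slowdown relative to the ideal playback time, as sketched below with made-up numbers.
```python
# Hedged sketch of the video quality metric: fraction of data delivered at
# 24 fps (relative to the 1 fps reference run) times the real-time slowdown.
def video_quality(data_at_24fps, data_at_1fps,
                  playback_time_24fps, ideal_time=34.75):
    delivered_fraction = data_at_24fps / float(data_at_1fps)
    slowdown = ideal_time / float(playback_time_24fps)
    return delivered_fraction * slowdown

# Half the data dropped at real-time speed -> 50% quality, matching the text.
print(video_quality(data_at_24fps=12.5, data_at_1fps=25.0,
                    playback_time_24fps=34.75))
```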
To run the video benchmark, we used Windows Media
Player 9 for Windows-based thin-client servers, MPlayer 1.0
pre 6 for X-based thin-client servers, and Windows Media
Player 9 Mobile and 10 Mobile for the native video players
running locally on the X5 and X51v PDAs, respectively. In
all cases, the video player used was sized to fill the entire
display region available.
4.3 Measurements
Figure 3: Browsing Benchmark: Average Page Latency in seconds for pTHINC Resized, pTHINC, ICA Resized, ICA, VNC, RDP, and LOCAL on the Axim X5 (640x480 or less), Axim X51v (640x480), Axim X5 (1024x768), and Axim X51v (1024x768).
Figures 3 and 4 show the results of running the web browsing benchmark. For each platform, we show results for up to
four different configurations, two on the X5 and two on the
X51v, depending on whether each configuration was
supported. However, not all platforms could support all
configurations. The local browser only runs at the display
resolution of the PDA, 480×640 or less for the X51v and the
X5. RDP only runs at 640×480. Neither platform could
support 1024×768 display resolution. ICA only ran on the
X5 and could not run on the X51v because it did not work
on Windows Mobile 5.
Figure 3 shows the average latency per web page for each
platform. pTHINC provides the lowest average web
browsing latency on both PDAs. On the X5, pTHINC performs
up to 70 times better than other thin-client systems and 8
times better than the local browser. On the X51v, pTHINC
performs up to 80 times better than other thin-client
systems and 7 times better than the native browser. In fact,
all of the thin clients except VNC outperform the local
PDA browser, demonstrating the performance benefits of
the thin-client approach. Usability studies have shown that
web pages should take less than one second to download
for the user to experience an uninterrupted web browsing
experience [14]. The measurements show that only the thin
clients deliver subsecond web page latencies. In contrast, the
local browser requires more than 3 seconds on average per
web page. The local browser performs worse since it needs
to run a more limited web browser to process the HTML,
JavaScript, and do all the rendering using the limited
capabilities of the PDA. The thin clients can take advantage of
faster server hardware and a highly tuned web browser to
process the web content much faster.
Figure 3 shows that RDP is the next fastest platform after
pTHINC. However, RDP is only able to run at a fixed
resolution of 640×480 and 8-bit color depth. Furthermore, RDP
also clips the display to the size of the PDA screen so that
it does not need to send updates that are not visible on the
PDA screen. This provides a performance benefit
assuming the remaining web content is not viewed, but degrades
performance when a user scrolls around the display to view
other web content. RDP achieves its performance with
significantly lower display quality compared to the other thin
clients and with additional display clipping not used by other
systems. As a result, RDP performance alone does not
provide a complete comparison with the other platforms. In
contrast, pTHINC provides the fastest performance while
at the same time providing equal or better display quality
than the other systems.
Figure 4: Browsing Benchmark: Average Page Data Transferred in KB for pTHINC Resized, pTHINC, ICA Resized, ICA, VNC, RDP, and LOCAL on the Axim X5 (640x480 or less), Axim X51v (640x480), Axim X5 (1024x768), and Axim X51v (1024x768).
Since VNC and ICA provide similar display quality to
pTHINC, these systems provide a more fair comparison of
different thin-client approaches. ICA performs worse in part
because it uses higher-level display primitives that require
additional client processing costs. VNC performs worse in
part because it loses display data due to its client-pull
delivery mechanism and because of the client processing costs
in decompressing raw pixel primitives. In both cases, their
performance was limited in part because their PDA clients
were unable to keep up with the rate at which web pages
were being displayed.
Figure 3 also shows measurements for those thin clients
that support resizing the display to fit the PDA screen,
namely ICA and pTHINC. Resizing requires additional
processing, which results in slower average web page latencies.
The measurements show that the additional delay incurred
by ICA when resizing versus not resizing is much more
substantial than for pTHINC. ICA performs resizing on the
slower PDA client. In contrast, pTHINC leverages the more
powerful server to do resizing, reducing the performance
difference between resizing and not resizing. Unlike ICA,
pTHINC is able to provide subsecond web page download
latencies in both cases.
Figure 4 shows the data transferred in KB per page when
running the slow-motion version of the tests. All of the
platforms have modest data transfer requirements of roughly
100 KB per page or less. This is well within the
bandwidth capacity of Wi-Fi networks. The measurements show
that the local browser does not transfer the least amount of
data. This is surprising as HTML is often considered to be
a very compact representation of content. Instead, RDP is
the most bandwidth efficient platform, largely as a result of
using only 8-bit color depth and screen clipping so that it
does not transfer the entire web page to the client. pTHINC
overall has the largest data requirements, slightly more than
VNC. This is largely a result of the current pTHINC
prototype's lack of native support for 16-bit color data in the wire protocol. However, this result also highlights pTHINC's
performance as it is faster than all other systems even while
transferring more data. Furthermore, as newer PDA models
support full 24-bit color, these results indicate that pTHINC
will continue to provide good web browsing performance.
Since display usability and quality are as important as
performance, Figures 5 to 8 compare screenshots of the
different thin clients when displaying a web page, in this case
from the popular BBC news website. Except for ICA, all of
the screenshots were taken on the X51v in landscape mode using the maximum display resolution settings for each platform given in Table 2.
Figure 5: Browser Screenshot: RDP 640x480
Figure 6: Browser Screenshot: VNC 1024x768
Figure 7: Browser Screenshot: ICA Resized 1024x768
Figure 8: Browser Screenshot: pTHINC Resized 1024x768
The ICA screenshot was taken on the
X5 since ICA does not run on the X51v. While the
screenshots lack the visual fidelity of the actual device display,
several observations can be made. Figure 5 shows that RDP
does not support fullscreen mode and wastes lots of screen
space for controls and UI elements, requiring the user to
scroll around in order to access the full contents of the web
browsing session. Figure 6 shows that VNC makes better
use of the screen space and provides better display quality,
but still forces the user to scroll around to view the web
page due to its lack of resizing support. Figure 7 shows
ICA's ability to display the full web page given its resizing
support, but that its lack of landscape capability and poorer
resize algorithm significantly compromise display quality. In
contrast, Figure 8 shows pTHINC using resizing to provide
a high quality fullscreen display of the full width of the web
page. pTHINC maximizes the entire viewing region by
moving all controls to the PDA buttons. In addition, pTHINC
leverages the server computational power to use a high
quality resizing algorithm to resize the display to fit the PDA
screen without significant overhead.
Figures 9 and 10 show the results of running the video
playback benchmark. For each platform except ICA, we
show results for an X5 and X51v configuration. ICA could
not run on the X51v as noted earlier. The measurements
were done using settings that reflected the environment a
user would have to access a web session from both a
desktop computer and a PDA. As such, a 1024×768 server
display resolution was used whenever possible and the video
was shown at fullscreen. RDP was limited to 640×480
display resolution as noted earlier. Since viewing the entire
video display is the only really usable option, we resized
the display to fit the PDA screen for those platforms that
supported this feature, namely ICA and pTHINC.
Figure 9 shows the video quality for each platform. pTHINC
is the only thin client able to provide perfect video playback
quality, similar to the native PDA video player. All of the
other thin clients deliver very poor video quality. With the
exception of RDP on the X51v which provided unacceptable
35% video quality, none of the other systems were even able
to achieve 10% video quality. VNC and ICA have the worst
quality at 8% on the X5 device.
pTHINC's native video support enables superior video
performance, while other thin clients suffer from their
inability to distinguish video from normal display updates.
They attempt to apply ineffective and expensive
compression algorithms on the video data and are unable to keep up
with the stream of updates generated, resulting in dropped
frames or long playback times. VNC suffers further from
its client-pull update model because video frames are
generated faster than the rate at which the client can process
and send requests to the server to obtain the next display
update.
Figure 9: Video Benchmark: Fullscreen Video Quality (percent) for pTHINC, ICA, VNC, RDP, and LOCAL on the Axim X5 and Axim X51v.
Figure 10: Video Benchmark: Fullscreen Video Data (MB) for pTHINC, ICA, VNC, RDP, and LOCAL on the Axim X5 and Axim X51v.
Figure 10 shows the total data transferred during video playback for each system. The native player is the
most bandwidth efficient platform, sending less than 6 MB
of data, which corresponds to about 1.2 Mbps of bandwidth.
pTHINC's 100% video quality requires about 25 MB of data
which corresponds to a bandwidth usage of less than 6 Mbps.
While the other thin clients send less data than pTHINC,
they do so because they are dropping video data, resulting
in degraded video quality.
Figures 11 to 14 compare screenshots of the different thin
clients when displaying the video clip. Except for ICA, all of
the screenshots were taken on the X51v in landscape mode
using the maximum display resolution settings for each
platform given in Table 2. The ICA screenshot was taken on the
X5 since ICA does not run on the X51v. Figures 11 and 12
show that RDP and VNC are unable to display the entire
video frame on the PDA screen. RDP wastes screen space
for UI elements and VNC only shows the top corner of the
video frame on the screen. Figure 13 shows that ICA
provides resizing to display the entire video frame, but did not
proportionally resize the video data, resulting in strange
display artifacts. In contrast, Figure 14 shows pTHINC using
resizing to provide a high quality fullscreen display of the
entire video frame. pTHINC provides visually more appealing
video display than RDP, VNC, or ICA.
5. RELATED WORK
Several studies have examined the web browsing
performance of thin-client computing [13, 19, 10]. The ability for
thin clients to improve web browsing performance on
wireless PDAs was first quantitatively demonstrated in a
previous study by one of the authors [10]. This study
demonstrated that thin clients can provide both faster web
browsing performance and greater web browsing functionality.
The study considered a wide range of web content including
content from medical information systems. Our work builds
on this previous study and consider important issues such as
how usable existing thin clients are in PDA environments,
the trade-offs between thin-client usability and performance,
performance across different PDA devices, and the
performance of thin clients on common web-related applications
such as video.
Many thin clients have been developed and some have
PDA clients, including Microsoft's Remote Desktop [3],
Citrix MetaFrame XP [2], Virtual Network Computing [16,
12], GoToMyPC [5], and Tarantella [18]. These systems
were first designed for desktop computing and retrofitted
for PDAs. Unlike pTHINC, they do not address key
system architecture and usability issues important for PDAs.
This limits their display quality, system performance,
available screen space, and overall usability on PDAs. pTHINC
builds on previous work by two of the authors on THINC [1],
extending the server architecture and introducing a client
interface and usage model to efficiently support PDA devices
for mobile web applications.
Other approaches to improve the performance of mobile
wireless web browsing have focused on using transcoding
and caching proxies in conjunction with the fat client model
[11, 9, 4, 8]. They work by pushing functionality to external
proxies, and using specialized browsing applications on the
PDA device that communicate with the proxy. Our
thin-client approach differs fundamentally from these fat-client
approaches by pushing all web browser logic to the server,
leveraging existing investments in desktop web browsers and
helper applications to work seamlessly with production
systems without any additional proxy configuration or web
browser modifications.
With the emergence of web browsing on small display
devices, web sites have been redesigned using mechanisms like
WAP, and specialized native web browsers have been
developed to address the needs of these devices. Recently, Opera
has developed the Opera Mini [15] web browser, which uses
an approach similar to the thin-client model to provide
access across a number of mobile devices that would normally
be incapable of running a web browser. Instead of requiring
the device to process web pages, it uses a remote server to
pre-process the page before sending it to the phone.
6. CONCLUSIONS
We have introduced pTHINC, a thin-client architecture
for wireless PDAs. pTHINC provides key architectural and
usability mechanisms such as server-side screen resizing,
client-side screen rotation using simple copy techniques, YUV video
support, and maximizing screen space for display updates
and leveraging existing PDA control buttons for UI
elements. pTHINC transparently supports traditional
desktop browsers and their helper applications on PDA devices
and desktop machines, providing mobile users with
ubiquitous access to a consistent, personalized, and full-featured
web environment across heterogeneous devices. We have
implemented pTHINC and measured its performance on
web applications compared to existing thin-client systems
and native web applications. Our results on multiple
mobile wireless devices demonstrate that pTHINC delivers web
browsing performance up to 80 times better than existing
thin-client systems, and 8 times better than a native PDA
browser. In addition, pTHINC is the only PDA thin client
that transparently provides full-screen, full frame rate video
playback, making web sites with multimedia content
accessible to mobile web users.
[Figure 11: Video Screenshot: RDP 640x480]
[Figure 12: Video Screenshot: VNC 1024x768]
[Figure 13: Video Screenshot: ICA Resized 1024x768]
[Figure 14: Video Screenshot: pTHINC Resized 1024x768]
7. ACKNOWLEDGEMENTS
This work was supported in part by NSF ITR grants
CCR0219943 and CNS-0426623, and an IBM SUR Award.
8. REFERENCES
[1] R. Baratto, L. Kim, and J. Nieh. THINC: A Virtual
Display Architecture for Thin-Client Computing. In
Proceedings of the 20th ACM Symposium on Operating
Systems Principles (SOSP), Oct. 2005.
[2] Citrix Metaframe. http://www.citrix.com.
[3] B. C. Cumberland, G. Carius, and A. Muir. Microsoft
Windows NT Server 4.0, Terminal Server Edition:
Technical Reference. Microsoft Press, Redmond, WA, 1999.
[4] A. Fox, I. Goldberg, S. D. Gribble, and D. C. Lee.
Experience With Top Gun Wingman: A Proxy-Based
Graphical Web Browser for the 3Com PalmPilot. In
Proceedings of Middleware '98, Lake District, England,
September 1998.
[5] GoToMyPC. http://www.gotomypc.com/.
[6] Health Insurance Portability and Accountability Act.
http://www.hhs.gov/ocr/hipaa/.
[7] i-Bench version 1.5.
http://etestinglabs.com/benchmarks/i-bench/i-bench.asp.
[8] A. Joshi. On proxy agents, mobility, and web access.
Mobile Networks and Applications, 5(4):233-241, 2000.
[9] J. Kangasharju, Y. G. Kwon, and A. Ortega. Design and
Implementation of a Soft Caching Proxy. Computer
Networks and ISDN Systems, 30(22-23):2113-2121, 1998.
[10] A. Lai, J. Nieh, B. Bohra, V. Nandikonda, A. P. Surana,
and S. Varshneya. Improving Web Browsing on Wireless
PDAs Using Thin-Client Computing. In Proceedings of the
13th International World Wide Web Conference (WWW),
May 2004.
[11] A. Maheshwari, A. Sharma, K. Ramamritham, and
P. Shenoy. TranSquid: Transcoding and caching proxy for
heterogenous ecommerce environments. In Proceedings of
the 12th IEEE Workshop on Research Issues in Data
Engineering (RIDE '02), Feb. 2002.
[12] .NET VNC Viewer for PocketPC.
http://dotnetvnc.sourceforge.net/.
[13] J. Nieh, S. J. Yang, and N. Novik. Measuring Thin-Client
Performance Using Slow-Motion Benchmarking. ACM
Trans. Computer Systems, 21(1):87-115, Feb. 2003.
[14] J. Nielsen. Designing Web Usability. New Riders
Publishing, Indianapolis, IN, 2000.
[15] Opera Mini Browser.
http://www.opera.com/products/mobile/operamini/.
[16] T. Richardson, Q. Stafford-Fraser, K. R. Wood, and
A. Hopper. Virtual Network Computing. IEEE Internet
Computing, 2(1), Jan./Feb. 1998.
[17] R. W. Scheifler and J. Gettys. The X Window System.
ACM Trans. Gr., 5(2):79-106, Apr. 1986.
[18] Sun Secure Global Desktop.
http://www.sun.com/software/products/sgd/.
[19] S. J. Yang, J. Nieh, S. Krishnappa, A. Mohla, and
M. Sajjadpour. Web Browsing Performance of Wireless
Thin-Client Computing. In Proceedings of the 12th
International World Wide Web Conference (WWW), May
2003.
152 | pervasive web;remote display;pda thinclient solution;system usability;web browser;thin-client computing;mobility;full-function web browser;pthinc;seamless mobility;functionality;thin-client;local pda web browser;web application;video playback;screen resolution;web browsing performance;high-fidelity display;mobile wireless pda;crucial browser helper application |
train_C-71 | A Point-Distribution Index and Its Application to Sensor-Grouping in Wireless Sensor Networks | We propose ι, a novel index for evaluation of point-distribution. ι is the minimum distance between each pair of points normalized by the average distance between each pair of points. We find that a set of points that achieve a maximum value of ι result in a honeycomb structure. We propose that ι can serve as a good index to evaluate the distribution of the points, which can be employed in coverage-related problems in wireless sensor networks (WSNs). To validate this idea, we formulate a general sensorgrouping problem for WSNs and provide a general sensing model. We show that locally maximizing ι at sensor nodes is a good approach to solve this problem with an algorithm called Maximizingι Node-Deduction (MIND). Simulation results verify that MIND outperforms a greedy algorithm that exploits sensor-redundancy we design. This demonstrates a good application of employing ι in coverage-related problems for WSNs. | 1. INTRODUCTION
A wireless sensor network (WSN) consists of a large number of
in-situ battery-powered sensor nodes. A WSN can collect the data
about physical phenomena of interest [1]. There are many
potential applications of WSNs, including environmental monitoring and
surveillance, etc. [1][11].
In many application scenarios, WSNs are employed to conduct
surveillance tasks in adverse or even hostile working
environments. One major problem this causes is that sensor nodes are
subject to failures. Therefore, fault tolerance of a WSN is
critical.
One way to achieve fault tolerance is that a WSN should contain
a large number of redundant nodes in order to tolerate node
failures. It is vital to provide a mechanism by which redundant nodes can
work in sleeping mode (i.e., major power-consuming units such
as the transceiver of a redundant sensor node can be shut off) to
save energy, and thus to prolong the network lifetime. Redundancy
should be exploited as much as possible for the set of sensors that
are currently taking charge in the surveillance work of the network
area [6].
We find that the minimum distance between each pair of points
normalized by the average distance between each pair of points
serves as a good index to evaluate the distribution of the points. We
call this index, denoted by ι, the normalized minimum distance. If
points are moveable, we find that maximizing ι results in a
honeycomb structure. The honeycomb structure implies that the coverage
efficiency is the best if each point represents a sensor node that
is providing surveillance work. Employing ι in coverage-related
problems is thus deemed promising.
This enlightens us that maximizing ι is a good approach to
select a set of sensors that are currently taking charge in the
surveillance work of the network area. To explore the effectiveness of
employing ι in coverage-related problems, we formulate a
sensor-grouping problem for high-redundancy WSNs. An algorithm called
Maximizing-ι Node-Deduction (MIND) is proposed in which
redundant sensor nodes are removed to obtain a large ι for each set of
sensors that are currently taking charge in the surveillance work of
the network area. We also introduce another greedy solution called
Incremental Coverage Quality Algorithm (ICQA) for this problem,
which serves as a benchmark to evaluate MIND.
The main contribution of this paper is twofold. First, we
introduce a novel index ι for evaluation of point-distribution. We show
that maximizing ι of a WSN results in low redundancy of the
network. Second, we formulate a general sensor-grouping problem
for WSNs and provide a general sensing model. With the MIND
algorithm we show that locally maximizing ι among each sensor
node and its neighbors is a good approach to solve this problem.
This demonstrates a good application of employing ι in
coverage-related problems.
The rest of the paper is organized as follows. In Section 2, we
introduce our point-distribution index ι. We survey related work
and formulate a sensor-grouping problem together with a general
sensing model in Section 3. Section 4 investigates the application
of ι in this grouping problem. We propose MIND for this problem
and introduce ICQA as a benchmark. In Section 5, we present
our simulation results in which MIND and ICQA are compared.
Section 6 provides conclusion remarks.
2. THE NORMALIZED MINIMUM DISTANCE
ι: A POINT-DISTRIBUTION INDEX
Suppose there are n points in a Euclidean space Ω. The
coordinates of these points are denoted by xi (i = 1, ..., n).
It may be necessary to evaluate how these
points are distributed. There are many metrics to achieve this goal. For
example, the Mean Square Error from these points to their mean value
can be employed to calculate how these points deviate from their
mean (i.e., their center). In resource-sharing evaluation, the Global
Fairness Index (GFI) is often employed to measure how even the
resource distributes among these points [8], when xi represents the
amount of resource that belong to point i. In WSNs, GFI is usually
used to calculate how even the remaining energy of sensor nodes
is.
When n is larger than 2 and the points do not all overlap (that
the points all overlap means xi = xj, ∀ i, j = 1, 2, ..., n), we propose
a novel index called the normalized minimum distance, namely ι,
to evaluate the distribution of the points. ι is the minimum distance
between each pair of points normalized by the average distance
between each pair of points. It is calculated by:
ι = min(||xi − xj||) / µ   (∀ i, j = 1, 2, ..., n; i ≠ j)   (1)
where ||xi − xj|| denotes the Euclidean distance between point
i and point j in Ω, the min(·) function calculates the minimum
distance between each pair of points, and µ is the average distance
between each pair of points, which is:
µ = ( Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} ||xi − xj|| ) / ( n(n − 1) )   (2)
ι measures how well the points separate from one another.
Obviously, ι is in the interval [0, 1]. ι is equal to 1 if and only if n is equal
to 3 and these three points form an equilateral triangle. ι is equal
to zero if any two points overlap. ι is a very interesting value of a
set of points. If we consider each xi (∀i = 1, ..., n) to be a variable in
Ω, what would these n points look like if ι were maximized?
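As a concrete illustration (our own sketch, not part of the original paper; the function name is hypothetical), ι can be computed directly from Equations (1) and (2):

from itertools import combinations
from math import dist  # Euclidean distance, Python 3.8+

def normalized_min_distance(points):
    # Pairwise distances over unordered pairs; their mean equals the
    # mean over ordered pairs used in Equation (2).
    pair_dists = [dist(p, q) for p, q in combinations(points, 2)]
    mu = sum(pair_dists) / len(pair_dists)
    return min(pair_dists) / mu

# An equilateral triangle attains the maximum value iota = 1.
print(normalized_min_distance([(0, 0), (1, 0), (0.5, 3 ** 0.5 / 2)]))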
An algorithm is implemented to generate the topology in which
ι is locally maximized (The algorithm can be found in [19]). We
consider a 2-dimensional space. We select n = 10, 20, 30, ..., 100
and perform this algorithm. In order to prevent the algorithm
from converging to a local optimum, we select different random seeds to
generate the initial points 1000 times and obtain the best one,
i.e., the one that results in the largest ι when the algorithm converges. Figure 1
demonstrates what the resulting topology looks like when n = 20
as an example.
Suppose each point represents a sensor node. If the sensor
coverage model is the Boolean coverage model [15][17][18][14] and
the coverage radius of each node is the same, it is exciting to see
that this topology results in the lowest redundancy, because the Voronoi
diagram [2] formed by these nodes (a Voronoi diagram formed by
a set of nodes partitions a space into a set of convex polygons such
that points inside a polygon are closest to only one particular node)
is a honeycomb-like structure (Footnote 1).
This enlightens us that ι may be employed to solve problems
related to sensor-coverage of an area.
(Footnote 1: This is how base stations of a wireless cellular network are deployed and why such a network is called a cellular one.)
[Figure 1: Node Number = 20, ι = 0.435376]
In WSNs, it is desirable
that the active sensor nodes that are performing surveillance task
should separate from one another. Under the constraint that the
sensing area should be covered, the more each node separates from
the others, the less the redundancy of the coverage is. ι indicates
the quality of such separation. It should be useful for approaches
on sensor-coverage related problems.
In our following discussions, we will show the effectiveness of
employing ι in sensor-grouping problem.
3. THE SENSOR-GROUPING PROBLEM
In many application scenarios, to achieve fault tolerance, a WSN
contains a large number of redundant nodes in order to tolerate
node failures. A node sleeping-working schedule scheme is
therefore highly desired to exploit the redundancy of working sensors
and let as many nodes as possible sleep.
Much work in the literature is on this issue [6]. Yan et al
introduced a differentiated service in which a sensor node finds out
its responsible working duration with cooperation of its neighbors
to ensure the coverage of sampled points [17]. Ye et al developed
PEAS in which sensor nodes wake up randomly over time, probe
their neighboring nodes, and decide whether they should begin to
take charge of surveillance work [18]. Xing et al exploited a
probabilistic distributed detection model with a protocol called
Coordinating Grid (Co-Grid) [16]. Wang et al designed an approach called
Coverage Configuration Protocol (CCP) which introduced the
notion that the coverage degree of intersection-points of the
neighboring nodes" sensing-perimeters indicates the coverage of a convex
region [15]. In our recent work [7], we also provided a sleeping
configuration protocol, namely SSCP, in which sleeping eligibility
of a sensor node is determined by a local Voronoi diagram. SSCP
can provide different levels of redundancy to maintain different
requirements of fault tolerance.
The major feature of the aforementioned protocols is that they
employ online distributed and localized algorithms in which a
sensor node determines its sleeping eligibility and/or sleeping time
based on the coverage requirement of its sensing area with some
information provided by its neighbors.
Another major approach for sensor node sleeping-working
scheduling issue is to group sensor nodes. Sensor nodes in a network are
divided into several disjoint sets. Each set of sensor nodes are able
to maintain the required area surveillance work. The sensor nodes
are scheduled according to which set they belong to. These sets
work successively. Only one set of sensor nodes works at any time.
We call this issue the sensor-grouping problem.
The major advantage of this approach is that it avoids the
overhead caused by the processes of coordination of sensor nodes to
make decision on whether a sensor node is a candidate to sleep or
work and how long it should sleep or work. Such processes should
be performed from time to time during the lifetime of a network in
many online distributed and localized algorithms. The large
overhead caused by such processes is the main drawback of the
online distributed and localized algorithms. On the contrary, roughly
speaking, this approach groups sensor nodes once and
schedules when each set of sensor nodes should be on duty. It does not
require frequent decision-making on working/sleeping eligibility (Footnote 2).
In [13] by Slijepcevic et al, the sensing area is divided into
regions. Sensor nodes are grouped with the most-constrained
least-constraining algorithm. It is a greedy algorithm in which the
priority of selecting a given sensor is determined by how many
uncovered regions this sensor covers and the redundancy caused by
this sensor. In [5] by Cardei et al, disjoint sensor sets are
modeled as disjoint dominating sets. Although maximum dominating
sets computation is NP-complete, the authors proposed a
graph-coloring based algorithm. Cardei et al also studied a similar problem
in the domain of covering target points in [4]. The NP-completeness
of the problem is proved and a heuristic that computes the sets is
proposed. These algorithms are centralized solutions of the
sensor-grouping problem.
However, global information (e.g., the location of each in-network
sensor node) of a large scale WSN is also very expensive to
obtain online. Also, it is usually infeasible to obtain such
information before sensor nodes are deployed. For example, sensor nodes
are usually deployed in a random manner and the location of each
in-network sensor node is determined only after a node is deployed.
The solution of the sensor-grouping problem should be based only on
locally obtainable information of a sensor node. That is to say, nodes
should determine which group they should join in a fully
distributed way. Here locally obtainable information refers to a node's
local information and the information that can be directly obtained
from its adjacent nodes, i.e., nodes within its communication range.
In Subsection 3.1, we provide a general problem formulation of
the sensor-grouping problem. Distributed-solution requirement is
formulated in this problem. It is followed by discussion in
Subsection 3.2 on a general sensing model, which serves as a given
condition of the sensor-grouping problem formulation.
To facilitate our discussions, the notations in our following
discussions are described as follows.
• n: The number of in-network sensor nodes.
• S(j) (j = 1, 2, ..., m): The jth set of sensor nodes where m
is the number of sets.
• L(i) (i = 1, 2, ..., n): The physical location of node i.
• φ: The area monitored by the network: i.e., the sensing area
of the network.
• R: The sensing radius of a sensor node. We assume that
a sensor node can only be responsible to monitor a circular
area centered at the node with a radius equal to R. This is
a usual assumption in work that addresses sensor-coverage
related problems. We call this circular area the sensing area
of a node.
3.1 Problem Formulation
We assume that each sensor node can know its approximate
physical location. The approximate location information is obtainable
if each sensor node carries a GPS receiver or if some localization
algorithms are employed (e.g., [3]).
(Footnote 2: Note that if some nodes die, a re-grouping process might also be
performed to exploit the remaining nodes in a set of sensor nodes.
How to provide this mechanism is beyond the scope of this paper
and is yet to be explored.)
Problem 1. Given:
• The set of each sensor node i's sensing neighbors N(i) and
the location of each member in N(i);
• A sensing model which quantitatively describes how a point
P in area φ is covered by sensor nodes that are responsible to
monitor this point. We call this quantity the coverage quality
of P.
• The coverage quality requirement in φ, denoted by s. When
the coverage of a point is larger than this threshold, we say
this point is covered.
For each sensor node i, make a decision on which group S(j) it
should join so that:
• Area φ can be covered by sensor nodes in each set S(j)
• m, the number of sets S(j) is maximized.
In this formulation, we call sensor nodes within a circular area
centered at a sensor node i with a radius equal to 2 · R the sensing
neighbors of node i. This is because sensor nodes in this area,
together with node i, may cooperate to ensure the coverage of
a point inside node i's sensing area.
We assume that the communication range of a sensor node is
larger than 2 · R, which is also a general assumption in work that
addresses sensor-coverage related problems. That is to say, the first
given condition in Problem 1 is the information that can be obtained
directly from a node's adjacent nodes. It is therefore locally
obtainable information. The last two given conditions in this problem
formulation can be programmed into a node before it is deployed
or by a node-programming protocol (e.g., [9]) during network
runtime. Therefore, the given conditions can all be easily obtained by
a sensor-grouping scheme with fully distributed implementation.
We reify this problem with a realistic sensing model in the next
subsection.
3.2 A General Sensing Model
As WSNs are usually employed to monitor possible events in a
given area, it is therefore a design requirement that an event
occurring in the network area must/may be successfully detected by
sensors.
This issue is usually formulated as how to ensure that an event
signal emitted at an arbitrary point in the network area can be
detected by sensor nodes. Obviously, a sensing model is required to
address this problem so that how a point in the network area is
covered can be modeled and quantified. Thus the coverage quality of
a WSN can be evaluated.
Different applications of WSNs employ different types of
sensors, which surely have widely different theoretical and physical
characteristics. Therefore, to fulfill different application
requirements, different sensing models should be constructed based on the
characteristics of the sensors employed.
A simple theoretical sensing model is the Boolean sensing model
[15][18][17][14]. Boolean sensing model assumes that a sensor
node can always detect an event occurring in its responsible
sensing area. But most sensors detect events according to the signal
strength sensed. Event signals usually fade in relation to the
physical distance between an event and the sensor. The larger the
distance, the weaker the event signals that can be sensed by the sensor,
which results in a reduction of the probability that the event can be
successfully detected by the sensor.
As event signals in WSNs are usually electromagnetic, acoustic,
or photic signals, they fade rapidly as their
transmission distance increases. Specifically, the signal strength E(d) of an
event that is received by a sensor node satisfies:
E(d) = α / d^β   (3)
where d is the physical distance from the event to the sensor node;
α is related to the signal strength emitted by the event; and β is
the signal fading factor, which is typically a positive number larger than
or equal to 2. Usually, α and β are considered as constants.
Based on this notion, to be more reasonable, researchers propose
collaborative sensing model to capture application requirements:
Area coverage can be maintained by a set of collaborative sensor
nodes: For a point with physical location L, the point is considered
covered by the collaboration of i sensors (denoted by k1, ..., ki) if
and only if the following two conditions hold [7][10][12]:
∀ j = 1, ..., i:  ||L(kj) − L|| < R   (4)
C(L) = Σ_{j=1}^{i} E(||L(kj) − L||) > s   (5)
C(L) is regarded as the coverage quality of location L in the
network area [7][10][12].
However, we notice that defining the sensibility as the sum of the
sensed signal strength by each collaborative sensor implies a very
special application: Applications must employ the sum of the
signal strength to achieve decision-making. To capture generally
realistic application requirement, we modify the definition described
in Equation (5). The model we adopt in this paper is described in
details as follows.
We consider the probability P(L, kj ) that an event located at L
can be detected by sensor kj is related to the signal strength sensed
by kj. Formally,
P(L, kj) = γE(d) = δ / (||L(kj) − L||/ + 1)^β,   (6)
where γ is a constant and δ = γα is also a constant; the remaining constant in the
denominator normalizes the distance to a proper scale, and the +1 term is to avoid an
infinite value of P(L, kj).
The probability that an event located at L can be detected by any
collaborative sensors that satisfy Equation (4) is:
P(L) = 1 − Π_{j=1}^{i} (1 − P(L, kj)).   (7)
As the detection probability P(L) reasonably determines how
an event occurring at location L can be detected by the network, it
is a good measure of the coverage quality of location L in a WSN.
Specifically, Equation (5) is modified to:
C(L) = P(L) = 1 − Π_{j=1}^{i} [1 − δ / (||L(kj) − L||/ + 1)^β] > s.   (8)
To sum up, we consider a point at location L to be covered if
Equations (4) and (8) hold.
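To make this model concrete, the sketch below (ours; the names are hypothetical) evaluates the coverage quality C(L) of a point according to Equations (4)-(8). The parameter scale stands in for the normalizing constant in the denominator of Equation (6); the default values follow the simulation settings of Table 1 in Section 5 (δ = γα = 1.0, β = 2.0, normalizing constant = 100.0, R = 80 m, s = 0.6):

from math import dist

def coverage_quality(L, sensors, R=80.0, delta=1.0, beta=2.0, scale=100.0):
    # C(L): probability that an event at location L is detected by the
    # collaborating sensors, i.e. those within sensing range R (Eq. 4).
    miss = 1.0
    for k in sensors:
        d = dist(L, k)
        if d < R:
            p = delta / (d / scale + 1.0) ** beta   # Equation (6)
            miss *= 1.0 - p                         # Equation (7)
    return 1.0 - miss

def is_covered(L, sensors, s=0.6, **params):
    return coverage_quality(L, sensors, **params) > s   # Equation (8)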
4. MAXIMIZING-ι NODE-DEDUCTION
ALGORITHM FOR SENSOR-GROUPING
PROBLEM
Before we proceed to introduce algorithms to solve the
sensor-grouping problem, let us define the margin (denoted by θ) of an
area φ monitored by the network as the band-like marginal area
of φ such that all the points on the outer perimeter of θ are a distance ρ
away from the points on the inner perimeter of θ. ρ is called the
margin length.
In a practical network, sensor nodes are usually evenly deployed
in the network area. Obviously, the number of sensor nodes that
can sense an event occurring in the margin of the network is smaller
than the number of sensor nodes that can sense an event occurring
in other areas of the network. Based on this consideration, in our
algorithm design, we ensure the coverage quality of the network
area except the margin. The information on φ and ρ is
network-based. Each in-network sensor node can be pre-programmed or
on-line informed about φ and ρ, and thus calculate whether a point
in its sensing area is in the margin or not.
4.1 Maximizing-ι Node-Deduction Algorithm
The node-deduction process of our Maximizing-ι Node-Deduction
Algorithm (MIND) is simple. A node i greedily maximizes ι of the
sub-network composed of itself, its ungrouped sensing neighbors,
and the neighbors that are in the same group as itself. Under the
constraint that the coverage quality of its sensing area should be
ensured, node i deletes nodes in this sub-network one by one. The
candidate to be pruned must satisfy the following:
• It is an ungrouped node.
• The deletion of the node will not result in uncovered points
inside the sensing area of i.
A candidate is deleted if the deletion of that candidate results in
the largest ι of the sub-network compared to the deletion of the other
candidates. This node-deduction process continues until no candidate
can be found. Then all the ungrouped sensing neighbors that are
not deleted are grouped into the same group as node i. We call the
sensing neighbors that are in the same group as node i the group
sensing neighbors of node i. We then call node i a finished node,
meaning that it has finished the above procedure and the sensing
area of the node is covered. Those nodes that have not yet finished
this procedure are called unfinished nodes.
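A sketch of this per-node deduction step is given below (our own illustrative code, not the authors' implementation in [19]; covers_area and iota_of are hypothetical callbacks that check the coverage requirement of node i's sensing area and compute ι of a set of nodes, for instance via normalized_min_distance() applied to their locations):

def mind_deduction(i, ungrouped, same_group, covers_area, iota_of):
    # Greedily prune ungrouped neighbors so that node i's sensing area
    # stays covered and the remaining sub-network has a large iota.
    subnet = {i} | set(ungrouped) | set(same_group)
    while True:
        best, best_iota = None, None
        for cand in set(ungrouped) & subnet:        # only ungrouped nodes may be pruned
            trial = subnet - {cand}
            if not covers_area(i, trial):           # pruning must not uncover any point
                continue
            trial_iota = iota_of(trial)
            if best is None or trial_iota > best_iota:
                best, best_iota = cand, trial_iota
        if best is None:                            # no prunable candidate remains
            break
        subnet.remove(best)
    # Surviving ungrouped neighbors join node i's group.
    return subnet - {i} - set(same_group)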
The above procedure initiates at a randomly selected node that is
not in the margin. The node is grouped into the first group. It
calculates its resulting group sensing neighbors based on the above
procedure. It informs these group sensing neighbors that they are
selected into the group. Then it hands over the above procedure to
the unfinished group sensing neighbor that is the farthest from
itself. This group sensing neighbor continues this procedure until no
unfinished neighbor can be found. Then the first group is formed
(Algorithmic description of this procedure can be found at [19]).
After a group is formed, another randomly selected ungrouped
node begins to group itself into the second group and initiates the
above procedure. In this way, groups are formed one by one. When
a node involved in this algorithm finds that the coverage
quality of its sensing area, except the part that overlaps the network margin,
cannot be ensured even if all its ungrouped sensing neighbors are
grouped into the same group as itself, the algorithm stops. MIND
is based on locally obtainable information of sensor nodes. It is
a distributed algorithm that serves as an approximate solution of
Problem 1.
4.2 Incremental Coverage Quality Algorithm:
A Benchmark for MIND
To evaluate the effectiveness of introducing ι in the sensor-grouping
problem, another algorithm for the sensor-grouping problem, called the
Incremental Coverage Quality Algorithm (ICQA), is designed. Our aim
is to evaluate how an idea like MIND, based on locally maximizing
ι, performs.
In ICQA, the node-selecting process is as follows. A node i
greedily selects ungrouped sensing neighbors into the same group as
itself one by one, and informs each selected neighbor that it is selected into the group.
The selection criterion is:
• The selected neighbor is responsible for providing surveillance
work for some uncovered parts of node i's sensing area (i.e.,
the coverage quality requirement of those parts is not fulfilled
if this neighbor is not selected).
• The selected neighbor results in the highest improvement of the
coverage quality of the neighbor's sensing area.
The improvement of the coverage quality, mathematically, should
be the integral of the improvements over all points inside the
neighbor's sensing area. A numerical approximation is employed
to calculate this improvement. Details are presented in our
simulation study.
This node-selecting process continues until the sensing area of
node i is entirely covered. In this way, node i's group sensing
neighbors are found. The above procedure is handed over in the same way
as in MIND, and new groups are thus formed one by one. The
condition under which ICQA stops is the same as in MIND. ICQA is also
based on locally obtainable information of sensor nodes. ICQA is
also a distributed algorithm that serves as an approximate solution
of Problem 1.
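The corresponding ICQA selection step can be sketched as follows (again our own illustrative code; covers_area checks node i's coverage requirement, helps_cover tests the first criterion above, and quality_gain is the numerical coverage-quality improvement of the second criterion):

def icqa_selection(i, ungrouped, covers_area, helps_cover, quality_gain):
    # Greedily pull ungrouped neighbors into node i's group until node i's
    # sensing area meets the coverage quality requirement.
    selected = set()
    while not covers_area(i, {i} | selected):
        candidates = [n for n in set(ungrouped) - selected
                      if helps_cover(i, n, selected)]
        if not candidates:
            break                                   # requirement cannot be met; ICQA stops here
        selected.add(max(candidates, key=lambda n: quality_gain(n, selected)))
    return selected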
5. SIMULATION RESULTS
To evaluate the effectiveness of employing ι in the sensor-grouping
problem, we build simulation surveillance networks. We employ
MIND and ICQA to group the in-network sensor nodes. We
compare the grouping results with respect to how many groups both
algorithms find and how well the resulting groups
perform.
Detailed settings of the simulation networks are shown in Table
1. In simulation networks, sensor nodes are randomly deployed in
a uniform manner in the network area.
Table 1: The settings of the simulation networks
Area of sensor field: 400m * 400m
ρ: 20m
R: 80m
α, β, γ, and the normalizing constant in Eq. (6): 1.0, 2.0, 1.0, and 100.0
s: 0.6
For evaluating the coverage quality of the sensing area of a node,
we divide the sensing area of a node into several regions and regard
the coverage quality of the central point in each region as a
representative of the coverage quality of the region. This is a numerical
approximation. A larger number of such regions results in a better
approximation. As sensor nodes have low computational
capacity, there is a tradeoff between the number of such regions and the
precision of the resulting coverage quality of the sensing area of a
node. In our simulation study, we set this number to 12. For
evaluating the improvement of coverage quality in ICQA, we sum up all
the improvements at each region-center as the total improvement.
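One plausible way to realize this approximation (our own sketch; the paper does not specify how the sensing disc is partitioned into the 12 regions) is to sample one point per equal sector; this also gives a concrete form to the covers_area() check assumed in the sketches of Section 4:

from math import cos, sin, pi

def region_centers(center, R=80.0, k=12):
    # One plausible discretization: k equal sectors, each represented by a
    # sample point halfway along its bisector.
    cx, cy = center
    return [(cx + 0.5 * R * cos(2 * pi * j / k),
             cy + 0.5 * R * sin(2 * pi * j / k)) for j in range(k)]

def covers_area(center, sensors, s=0.6):
    # Coverage requirement of a node's sensing area, approximated over the
    # region centers; reuses coverage_quality() from the Section 3.2 sketch.
    return all(coverage_quality(p, sensors) > s for p in region_centers(center))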
5.1 Number of Groups Formed by MIND and
ICQA
We set the total in-network node number to different values and
let the networks perform MIND and ICQA. For each n,
simulations run with several random seeds to generate different networks.
Results are averaged. Figure 2 shows the group numbers found in
networks with different values of n.
[Figure 2: The number of groups found by MIND and ICQA - total number of groups found versus total in-network node number (legend: ICQA, MMNP)]
We can see that MIND always outperforms ICQA in terms of
the number of groups formed. Obviously, the larger the number of
groups that can be formed, the more the redundancy of each group is
exploited. This output shows that an approach like MIND, which aims
to maximize ι of the resulting topology, exploits redundancy
well.
As an example, in case that n = 1500, the results of five
networks are listed in Table 2.
Table 2: The grouping results of five networks with n = 1500
Net | MIND Group Number | ICQA Group Number | MIND Average ι | ICQA Average ι
1 | 34 | 31 | 0.145514 | 0.031702
2 | 33 | 30 | 0.145036 | 0.036649
3 | 33 | 31 | 0.156483 | 0.033578
4 | 32 | 31 | 0.152671 | 0.029030
5 | 33 | 32 | 0.146560 | 0.033109
The difference between the average ι of the groups in each
network shows that groups formed by MIND result in topologies with
larger ι values. This demonstrates that ι is a good indicator of redundancy in
different networks.
5.2 The Performance of the Resulting Groups
Although MIND forms more groups than ICQA does, which
implies a longer lifetime of the networks, another important
consideration is how the groups formed by MIND and ICQA perform.
We let 10000 events randomly occur in the network area except
the margin. We compare how many events happen at the locations
where the quality is less than the requirement s = 0.6 when each
resulting group is conducting surveillance work (we call the
number of such events the failure number of a group). Figure 3 shows
the average failure numbers of the resulting groups when different
node numbers are set.
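A sketch of this failure-number evaluation (ours), reusing coverage_quality() from the earlier sketch and the field size and margin of Table 1:

import random

def failure_number(group, events=10000, side=400.0, margin=20.0, s=0.6):
    # Count random events whose location is covered with quality below s
    # while the given group of sensors is on duty.
    failures = 0
    for _ in range(events):
        L = (random.uniform(margin, side - margin),
             random.uniform(margin, side - margin))
        if coverage_quality(L, group) <= s:
            failures += 1
    return failures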
We can see that the groups formed by MIND outperform those
formed by ICQA because the groups formed by MIND result in
lower failure numbers. This further demonstrates that MIND is a
good approach for the sensor-grouping problem.
[Figure 3: The failure numbers of MIND and ICQA - average failure numbers versus total in-network node number (legend: ICQA, MMNP)]
6. CONCLUSION
This paper proposes ι, a novel index for evaluation of
point-distribution. ι is the minimum distance between each pair of points
normalized by the average distance between each pair of points.
We find that a set of points that achieve a maximum value of ι
result in a honeycomb structure. We propose that ι can serve as a
good index to evaluate the distribution of the points, which can be
employed in coverage-related problems in wireless sensor networks
(WSNs). We set out to validate this idea by applying ι to a
sensor-grouping problem. We formulate a general sensor-grouping
problem for WSNs and provide a general sensing model. With an
algorithm called Maximizing-ι Node-Deduction (MIND), we show that
maximizing ι at sensor nodes is a good approach to solve this
problem. Simulation results verify that MIND outperforms a greedy
algorithm we design that exploits sensor redundancy, in terms of the
number and the performance of the groups formed. This
demonstrates a good application of employing ι in coverage-related
problems.
7. ACKNOWLEDGEMENT
The work described in this paper was substantially supported by
two grants, RGC Project No. CUHK4205/04E and UGC Project
No. AoE/E-01/99, of the Hong Kong Special Administrative
Region, China.
8. REFERENCES
[1] I. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci.
A survey on wireless sensor networks. IEEE
Communications Magazine, 40(8):102-114, 2002.
[2] F. Aurenhammer. Voronoi diagrams - a survey of a
fundamental geometric data structure. ACM Computing
Surveys, 23(2):345-405, September 1991.
[3] N. Bulusu, J. Heidemann, and D. Estrin. GPS-less low-cost
outdoor localization for very small devices. IEEE Personal
Communication, October 2000.
[4] M. Cardei and D.-Z. Du. Improving wireless sensor network
lifetime through power aware organization. ACM Wireless
Networks, 11(3), May 2005.
[5] M. Cardei, D. MacCallum, X. Cheng, M. Min, X. Jia, D. Li,
and D.-Z. Du. Wireless sensor networks with energy efficient
organization. Journal of Interconnection Networks, 3(3-4),
December 2002.
[6] M. Cardei and J. Wu. Coverage in wireless sensor networks.
In Handbook of Sensor Networks, (eds. M. Ilyas and I.
Magboub), CRC Press, 2004.
[7] X. Chen and M. R. Lyu. A sensibility-based sleeping
configuration protocol for dependable wireless sensor
networks. CSE Technical Report, The Chinese University of
Hong Kong, 2005.
[8] R. Jain, W. Hawe, and D. Chiu. A quantitative measure of
fairness and discrimination for resource allocation in shared
computer systems. Technical Report DEC-TR-301,
September 1984.
[9] S. S. Kulkarni and L. Wang. MNP: Multihop network
reprogramming service for sensor networks. In Proc. of the
25th International Conference on Distributed Computing
Systems (ICDCS), June 2005.
[10] B. Liu and D. Towsley. A study on the coverage of
large-scale sensor networks. In Proc. of the 1st IEEE
International Conference on Mobile ad-hoc and Sensor
Systems, Fort Lauderdale, FL, October 2004.
[11] A. Mainwaring, J. Polastre, R. Szewczyk, D. Culler, and
J. Anderson. Wireless sensor networks for habitat
monitoring. In Proc. of the ACM International Workshop on
Wireless Sensor Networks and Applications, 2002.
[12] S. Megerian, F. Koushanfar, G. Qu, G. Veltri, and
M. Potkonjak. Exposure in wireless sensor networks: Theory
and practical solutions. Wireless Networks, 8, 2002.
[13] S. Slijepcevic and M. Potkonjak. Power efficient
organization of wireless sensor networks. In Proc. of the
IEEE International Conference on Communications (ICC),
volume 2, Helsinki, Finland, June 2001.
[14] D. Tian and N. D. Georganas. A node scheduling scheme for
energy conservation in large wireless sensor networks.
Wireless Communications and Mobile Computing,
3:272-290, May 2003.
[15] X. Wang, G. Xing, Y. Zhang, C. Lu, R. Pless, and C. Gill.
Integrated coverage and connectivity configuration in
wireless sensor networks. In Proc. of the 1st ACM
International Conference on Embedded Networked Sensor
Systems (SenSys), Los Angeles, CA, November 2003.
[16] G. Xing, C. Lu, R. Pless, and J. A. O'Sullivan. Co-Grid: an
efficient coverage maintenance protocol for distributed
sensor networks. In Proc. of the 3rd International
Symposium on Information Processing in Sensor Networks
(IPSN), Berkeley, CA, April 2004.
[17] T. Yan, T. He, and J. A. Stankovic. Differentiated
surveillance for sensor networks. In Proc. of the 1st ACM
International Conference on Embedded Networked Sensor
Systems (SenSys), Los Angeles, CA, November 2003.
[18] F. Ye, G. Zhong, J. Cheng, S. Lu, and L. Zhang. PEAS: A
robust energy conserving protocol for long-lived sensor
networks. In Proc. of the 23rd International Conference on
Distributed Computing Systems (ICDCS), Providence, Rhode
Island, May 2003.
[19] Y. Zhou, H. Yang, and M. R. Lyu. A point-distribution index
and its application in coverage-related problems. CSE
Technical Report, The Chinese University of Hong Kong,
2006.
1176 | sensor-grouping;sensor group;fault tolerance;sleeping configuration protocol;surveillance;redundancy;sensor coverage;incremental coverage quality algorithm;node-deduction process;point-distribution index;wireless sensor network;honeycomb structure |
train_C-72 | GUESS: Gossiping Updates for Efficient Spectrum Sensing | Wireless radios of the future will likely be frequency-agile, that is, supporting opportunistic and adaptive use of the RF spectrum. Such radios must coordinate with each other to build an accurate and consistent map of spectral utilization in their surroundings. We focus on the problem of sharing RF spectrum data among a collection of wireless devices. The inherent requirements of such data and the time-granularity at which it must be collected makes this problem both interesting and technically challenging. We propose GUESS, a novel incremental gossiping approach to coordinated spectral sensing. It (1) reduces protocol overhead by limiting the amount of information exchanged between participating nodes, (2) is resilient to network alterations, due to node movement or node failures, and (3) allows exponentially-fast information convergence. We outline an initial solution incorporating these ideas and also show how our approach reduces network overhead by up to a factor of 2.4 and results in up to 2.7 times faster information convergence than alternative approaches. | 1. INTRODUCTION
There has recently been a huge surge in the growth of
wireless technology, driven primarily by the availability of
unlicensed spectrum. However, this has come at the cost
of increased RF interference, which has caused the Federal
Communications Commission (FCC) in the United States to
re-evaluate its strategy on spectrum allocation. Currently,
the FCC has licensed RF spectrum to a variety of public and
private institutions, termed primary users. New spectrum
allocation regimes implemented by the FCC use dynamic
spectrum access schemes to either negotiate or
opportunistically allocate RF spectrum to unlicensed secondary users
that can use it when the primary user is absent.
[Figure 1: Without cooperation, shadowed users are not able to detect the presence of the primary user. The figure shows a primary user and secondary users D1-D5; unshadowed secondary users detect the primary's signal, while shadowed secondary users do not.]
The second
type of allocation scheme is termed opportunistic spectrum
sharing. The FCC has already legislated this access method
for the 5 GHz band and is also considering the same for
TV broadcast bands [1]. As a result, a new wave of
intelligent radios, termed cognitive radios (or software defined
radios), is emerging that can dynamically re-tune their
radio parameters based on interactions with their surrounding
environment.
Under the new opportunistic allocation strategy,
secondary users are obligated not to interfere with primary
users (senders or receivers). This can be done by sensing
the environment to detect the presence of primary users.
However, local sensing is not always adequate, especially in
cases where a secondary user is shadowed from a primary
user, as illustrated in Figure 1. Here, coordination between
secondary users is the only way for shadowed users to
detect the primary. In general, cooperation improves sensing
accuracy by an order of magnitude when compared to not
cooperating at all [5].
To realize this vision of dynamic spectrum access, two
fundamental problems must be solved: (1) Efficient and
coordinated spectrum sensing and (2) Distributed spectrum
allocation. In this paper, we propose strategies for coordinated
spectrum sensing that are low cost, operate on timescales
comparable to the agility of the RF environment, and are
resilient to network failures and alterations. We defer the
problem of spectrum allocation to future work.
Spectrum sensing techniques for cognitive radio networks
[4, 17] are broadly classified into three regimes: (1)
centralized coordinated techniques, (2) decentralized coordinated
techniques, and (3) decentralized uncoordinated techniques.
We advocate a decentralized coordinated approach, similar
in spirit to OSPF link-state routing used in the Internet.
This is more effective than uncoordinated approaches
because making decisions based only on local information is
fallible (as shown in Figure 1). Moreover, compared to
cen12
tralized approaches, decentralized techniques are more
scalable, robust, and resistant to network failures and security
attacks (e.g. jamming).
Coordinating sensory data between cognitive radio devices
is technically challenging because accurately assessing
spectrum usage requires exchanging potentially large amounts of
data with many radios at very short time scales. Data size
grows rapidly due to the large number (i.e. thousands) of
spectrum bands that must be scanned. This data must also
be exchanged between potentially hundreds of neighboring
secondary users at short time scales, to account for rapid
changes in the RF environment.
This paper presents GUESS, a novel approach to
coordinated spectrum sensing for cognitive radio networks. Our
approach is motivated by the following key observations:
1. Low-cost sensors collect approximate data: Most
devices have limited sensing resolution because they are
low-cost and low duty-cycle devices and thus cannot
perform complex RF signal processing (e.g. matched
filtering). Many are typically equipped with simple
energy detectors that gather only approximate
information.
2. Approximate summaries are sufficient for coordination:
Approximate statistical summaries of sensed data are
sufficient for correlating sensed information between
radios, as relative usage information is more
important than absolute usage data. Thus, exchanging
exact RF information may not be necessary, and more
importantly, too costly for the purposes of spectrum
sensing.
3. RF spectrum changes incrementally: On most bands,
RF spectrum utilization changes infrequently.
Moreover, utilization of a specific RF band affects only that
band and not the entire spectrum. Therefore, if the
usage pattern of a particular band changes
substantially, nodes detecting that change can initiate an
update protocol to update the information for that band
alone, leaving in place information already collected
for other bands. This allows rapid detection of change
while saving the overhead of exchanging unnecessary
information.
Based on these observations, GUESS makes the following
contributions:
1. A novel approach that applies randomized gossiping
algorithms to the problem of coordinated spectrum
sensing. These algorithms are well suited to coordinated
spectrum sensing due to the unique characteristics of
the problem: i.e. radios are power-limited, mobile and
have limited bandwidth to support spectrum sensing
capabilities.
2. An application of in-network aggregation for
dissemination of spectrum summaries. We argue that
approximate summaries are adequate for performing accurate
radio parameter tuning.
3. An extension of in-network aggregation and
randomized gossiping to support incremental maintenance of
spectrum summaries. Compared to standard
gossiping approaches, incremental techniques can further
reduce overhead and protocol execution time by
requiring fewer radio resources.
The rest of the paper is organized as follows. Section 2
motivates the need for a low cost and efficient approach to
coordinated spectrum sensing. Section 3 discusses related
work in the area, while Section 4 provides a background on
in-network aggregation and randomized gossiping. Sections
5 and 6 discuss extensions and protocol details of these
techniques for coordinated spectrum sensing. Section 7 presents
simulation results showcasing the benefits of GUESS, and
Section 8 presents a discussion and some directions for
future work.
2. MOTIVATION
To estimate the scale of the problem, In-stat predicts that
the number of WiFi-enabled devices sold annually alone will
grow to 430 million by 2009 [2]. Therefore, it would be
reasonable to assume that a typical dense urban environment
will contain several thousand cognitive radio devices in range
of each other. As a result, distributed spectrum sensing and
allocation would become both important and fundamental.
Coordinated sensing among secondary radios is essential
due to limited device sensing resolution and physical RF
effects such as shadowing. Cabric et al. [5] illustrate the gains
from cooperation and show an order of magnitude reduction
in the probability of interference with the primary user when
only a small fraction of secondary users cooperate.
However, such coordination is non-trivial due to: (1) the
limited bandwidth available for coordination, (2) the need to
communicate this information on short timescales, and (3)
the large amount of sensory data that needs to be exchanged.
Limited Bandwidth: Due to restrictions of cost and
power, most devices will likely not have dedicated hardware
for supporting coordination. This implies that both data
and sensory traffic will need to be time-multiplexed onto a
single radio interface. Therefore, any time spent
communicating sensory information takes away from the device's
ability to perform its intended function. Thus, any such
coordination must incur minimal network overhead.
Short Timescales: Further compounding the problem
is the need to immediately propagate updated RF sensory
data, in order to allow devices to react to it in a timely
fashion. This is especially true due to mobility, as rapid changes
of the RF environment can occur due to device and obstacle
movements. Here, fading and multi-path interference
heavily impact sensing abilities. Signal level can drop to a deep
null with just a λ/4 movement in receiver position (3.7 cm
at 2 GHz), where λ is the wavelength [14]. Coordination
which does not support rapid dissemination of information
will not be able to account for such RF variations.
Large Sensory Data: Because cognitive radios can
potentially use any part of the RF spectrum, there will be
numerous channels that they need to scan. Suppose we wish to
compute the average signal energy in each of 100 discretized
frequency bands, and each signal can have up to 128 discrete
energy levels. Exchanging complete sensory information
between nodes would require 700 bits per transmission (for
100 channels, each requiring seven bits of information).
Exchanging this information among even a small group of 50
devices each second would require (50 time-steps × 50
devices × 700 bits per transmission) = 1.67 Mbps of aggregate
network bandwidth.
Contrast this to the use of a randomized gossip protocol to
disseminate such information, and the use of FM bit vectors
to perform in-network aggregation. By applying gossip and
FM aggregation, aggregate bandwidth requirements drop to
(c·logN time-steps × 50 devices × 700 bits per transmission)
= 0.40 Mbps, since 12 time-steps are needed to propagate
the data (with c = 2, for illustrative purposes (Footnote 1)). This is
explained further in Section 4.
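The arithmetic behind these two figures can be reproduced as follows (our own sketch; a megabit is taken as 2^20 bits, which matches the numbers quoted above):

DEVICES, BITS_PER_TX = 50, 700          # 100 channels x 7 bits each

def aggregate_mbps(time_steps):
    return time_steps * DEVICES * BITS_PER_TX / 2 ** 20

print(aggregate_mbps(50))               # full exchange, 50 time-steps: ~1.67 Mbps
print(aggregate_mbps(12))               # gossip, c * log2(N) ~ 12 steps: ~0.40 Mbps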
Based on these insights, we propose GUESS, a low-overhead
approach which uses incremental extensions to FM
aggregation and randomized gossiping for efficient coordination
within a cognitive radio network.
(Footnote 1: Convergence time is correlated with the connectivity topology of the devices, which in turn depends on the environment.)
[Figure 2: Using FM aggregation to compute average signal level measured by a group of devices.]
As we show in Section 7,
these incremental extensions can further reduce bandwidth
requirements by up to a factor of 2.4 over the standard
approaches discussed above.
3. RELATED WORK
Research in cognitive radio has increased rapidly [4, 17]
over the years, and it is being projected as one of the leading
enabling technologies for wireless networks of the future [9].
As mentioned earlier, the FCC has already identified new
regimes for spectrum sharing between primary users and
secondary users and a variety of systems have been proposed
in the literature to support such sharing [4, 17].
Detecting the presence of a primary user is non-trivial,
especially a legacy primary user that is not cognitive
radio aware. Secondary users must be able to detect the
primary even if they cannot properly decode its signals. This
has been shown by Sahai et al. [16] to be extremely
difficult even if the modulation scheme is known. Sophisticated
and costly hardware, beyond a simple energy detector, is
required to improve signal detection accuracy [16]. Moreover,
a shadowed secondary user may not even be able to detect
signals from the primary. As a result, simple local
sensing approaches have not gained much momentum. This has
motivated the need for cooperation among cognitive radios
[16].
More recently, some researchers have proposed approaches
for radio coordination. Liu et al. [11] consider a centralized
access point (or base station) architecture in which
sensing information is forwarded to APs for spectrum allocation
purposes. APs direct mobile clients to collect such
sensing information on their behalf. However, due to the need
of a fixed AP infrastructure, such a centralized approach is
clearly not scalable.
In other work, Zhao et al. [17] propose a distributed
coordination approach for spectrum sensing and allocation.
Cognitive radios organize into clusters and coordination
occurs within clusters. The CORVUS [4] architecture proposes
a similar clustering method that can use either a centralized
or decentralized approach to manage clusters. Although an
improvement over purely centralized approaches, these
techniques still require a setup phase to generate the clusters,
which not only adds additional delay, but also requires many
of the secondary users to be static or quasi-static. In
contrast, GUESS does not place such restrictions on secondary
users, and can even function in highly mobile environments.
4. BACKGROUND
This section provides the background for our approach.
We present the FM aggregation scheme that we use to
generate spectrum summaries and perform in-network
aggregation. We also discuss randomized gossiping techniques for
disseminating aggregates in a cognitive radio network.
4.1 FM Aggregation
Aggregation is the process where nodes in a distributed
network combine data received from neighboring nodes with
their local value to generate a combined aggregate. This
aggregate is then communicated to other nodes in the
network and this process repeats until the aggregate at all
nodes has converged to the same value, i.e. the global
aggregate. Double-counting is a well known problem in this
process, where nodes may contribute more than once to the
aggregate, causing inaccuracy in the final result. Intuitively,
nodes can tag the aggregate value they transmit with
information about which nodes have contributed to it. However,
this approach is not scalable. Order and Duplicate
Insensitive (ODI) techniques have been proposed in the literature
[10, 15]. We adopt the ODI approach pioneered by Flajolet
and Martin (FM) for the purposes of aggregation. Next we
outline the FM approach; for full details, see [7].
Suppose we want to compute the number of nodes in the
network, i.e. the COUNT query. To do so, each node
performs a coin toss experiment as follows: toss an unbiased
coin, stopping after the first head is seen. The node then
sets the ith bit in a bit vector (initially filled with zeros),
where i is the number of coin tosses it performed. The
intuition is that as the number of nodes doing coin toss
experiments increases, the probability of a more significant bit
being set in one of the nodes' bit vectors increases.
These bit vectors are then exchanged among nodes. When
a node receives a bit vector, it updates its local bit vector
by bitwise OR-ing it with the received vector (as shown in
Figure 2 which computes AVERAGE). At the end of the
aggregation process, every node, with high probability, has
the same bit vector. The actual value of the count aggregate
is then computed using the following formula: AGG_FM = 2^(j−1) / 0.77351,
where j represents the bit position of the least
significant zero in the aggregate bit vector [7].
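The coin-toss experiment, the OR-merge, and the estimate can be sketched as follows (our own illustrative code; function names are hypothetical and Python integers serve as bit vectors):

import random

def local_vector():
    # Toss an unbiased coin until the first head; set bit i (1-based),
    # where i is the number of tosses performed.
    i = 1
    while random.random() < 0.5:
        i += 1
    return 1 << (i - 1)

def merge(v1, v2):
    return v1 | v2                      # ORing is order- and duplicate-insensitive

def fm_estimate(v):
    j = 1                               # position of the least significant zero (1-based)
    while v & (1 << (j - 1)):
        j += 1
    return 2 ** (j - 1) / 0.77351       # AGG_FM

# COUNT example: 1000 nodes each contribute one coin-toss vector.
agg = 0
for _ in range(1000):
    agg = merge(agg, local_vector())
print(fm_estimate(agg))                 # a rough estimate of 1000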
Although such aggregates are very compact in nature,
requiring only O(logN) state space (where N is the number
of nodes), they may not be very accurate as they can only
approximate values to the closest power of 2, potentially
causing errors of up to 50%. More accurate aggregates can
be computed by maintaining multiple bit vectors at each
node, as explained in [7]. This decreases the error to within
O(1/√m), where m is the number of such bit vectors.
Queries other than count can also be computed using
variants of this basic counting algorithm, as discussed in [3] (and
shown in Figure 2). Transmitting FM bit vectors between
nodes is done using randomized gossiping, discussed next.
4.2 Gossip Protocols
Gossip-based protocols operate in discrete time-steps; a
time-step is the required amount of time for all
transmissions in that time-step to complete. At every time-step, each
node having something to send randomly selects one or more
neighboring nodes and transmits its data to them. The
randomized propagation of information provides fault-tolerance
and resilience to network failures and outages. We
emphasize that this characteristic of the protocol also allows it to
operate without relying on any underlying network
structure. Gossip protocols have been shown to provide
exponentially fast convergence, on the order of O(log N) [10], where N is the number of nodes (or radios); convergence here refers to the state in which all nodes have the most up-to-date view of the network. These protocols can therefore easily scale to very dense environments.
Two types of gossip protocols are (a toy simulation of the uniform variant follows this list):
• Uniform Gossip: In uniform gossip, at each
timestep, each node chooses a random neighbor and sends
its data to it. This process repeats for O(log(N)) steps
(where N is the number of nodes in the network).
Uniform gossip provides exponentially fast convergence,
with low network overhead [10].
• Random Walk: In random walk, only a subset of
the nodes (termed designated nodes) communicate in a
particular time-step. At startup, k nodes are randomly
elected as designated nodes. In each time-step, each
designated node sends its data to a random neighbor,
which becomes designated for the subsequent
timestep (much like passing a token). This process repeats
until the aggregate has converged in the network.
Random walk has been shown to provide similar
convergence bounds as uniform gossip in problems of similar
context [8, 12].
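As a concrete illustration of the uniform variant, here is a toy, self-contained Java simulation of push-style uniform gossip on a clique, with nodes exchanging FM bit vectors; the node count, random seed and all names are our own assumptions rather than part of GUESS:

```java
import java.util.BitSet;
import java.util.Random;

// Toy uniform-gossip simulation on a clique: in each time-step every node
// pushes its FM bit vector to one random peer, which ORs it into its own.
// Convergence is reached when all nodes hold the same vector.
public class UniformGossipSketch {
    public static void main(String[] args) {
        int n = 256;
        Random rng = new Random(7);
        BitSet[] state = new BitSet[n];
        for (int i = 0; i < n; i++) {
            int tosses = 1;
            while (rng.nextBoolean()) tosses++;   // FM coin-toss experiment
            state[i] = new BitSet();
            state[i].set(tosses);
        }
        int steps = 0;
        while (!converged(state)) {
            // Snapshot so that every push in this time-step uses the same state.
            BitSet[] snapshot = new BitSet[n];
            for (int i = 0; i < n; i++) snapshot[i] = (BitSet) state[i].clone();
            for (int i = 0; i < n; i++) {
                int peer = rng.nextInt(n);        // any node: the topology is a clique
                state[peer].or(snapshot[i]);      // receiver merges the pushed vector
            }
            steps++;
        }
        System.out.println(n + " nodes converged after " + steps
                + " time-steps (O(log N) expected)");
    }

    static boolean converged(BitSet[] state) {
        for (BitSet v : state) if (!v.equals(state[0])) return false;
        return true;
    }
}
```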
5. INCREMENTAL PROTOCOLS
5.1 Incremental FM Aggregates
One limitation of FM aggregation is that it does not
support updates. Due to the probabilistic nature of FM, once
bit vectors have been ORed together, information cannot
simply be removed from them as each node's contribution
has not been recorded. We propose the use of delete vectors,
an extension of FM to support updates. We maintain a
separate aggregate delete vector whose value is subtracted from
the original aggregate vector's value to obtain the resulting
value as follows.
AGG_INC = (2^{a-1} / 0.77351) − (2^{b-1} / 0.77351)    (1)
Here, a and b represent the bit positions of the least
significant zero in the original and delete bit vectors respectively.
Suppose we wish to compute the average signal level
detected in a particular frequency. To compute this, we
compute the SUM of all signal level measurements and divide
that by the COUNT of the number of measurements. A
SUM aggregate is computed similar to COUNT (explained
in Section 4.1), except that each node performs s coin toss
experiments, where s is the locally measured signal level.
Figure 2 illustrates the sequence by which the average signal
energy is computed in a particular band using FM
aggregation.
Now suppose that the measured signal at a node changes
from s to s′. The vectors are updated as follows.
• s′ > s: We simply perform (s′ − s) more coin toss experiments and bitwise OR the result with the original bit vector.
• s′ < s: We increase the value of the delete vector by performing (s − s′) coin toss experiments and bitwise OR the result with the current delete vector.
Using delete vectors, we can now support updates to the
measured signal level. With the original implementation of
FM, the aggregate would need to be discarded and a new one
recomputed every time an update occurred. Thus, delete
vectors provide a low overhead alternative for applications
whose data changes incrementally, such as signal level
measurements in a coordinated spectrum sensing environment.
Next we discuss how these aggregates can be communicated
between devices using incremental routing protocols.
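The following Java sketch (ours; names, the measured values and the update sequence are illustrative assumptions) shows an FM SUM aggregate maintained together with a delete vector, applying the two update rules above and the AGG_INC estimate of Equation (1):

```java
import java.util.BitSet;
import java.util.Random;

// Illustrative FM aggregate with a delete vector, supporting incremental
// updates to a locally measured signal level (Section 5.1).
public class IncrementalFmAggregate {
    private final BitSet original = new BitSet();
    private final BitSet deleted  = new BitSet();
    private final Random rng = new Random();

    // One FM coin-toss experiment: returns the (1-indexed) bit to set.
    private int coinToss() {
        int i = 1;
        while (rng.nextBoolean()) i++;
        return i;
    }

    // Contribute 'units' coin-toss experiments to a vector (SUM semantics).
    private void contribute(BitSet v, int units) {
        for (int k = 0; k < units; k++) v.set(coinToss());
    }

    public void initial(int s) { contribute(original, s); }

    // Update rules from the text: grow the original vector if the signal
    // increased, grow the delete vector if it decreased.
    public void update(int oldS, int newS) {
        if (newS > oldS)      contribute(original, newS - oldS);
        else if (newS < oldS) contribute(deleted,  oldS - newS);
    }

    private static int leastSignificantZero(BitSet v) {
        int j = 1;
        while (v.get(j)) j++;
        return j;
    }

    // AGG_INC = 2^(a-1)/0.77351 - 2^(b-1)/0.77351, Equation (1).
    public double estimate() {
        int a = leastSignificantZero(original);
        int b = leastSignificantZero(deleted);
        return Math.pow(2, a - 1) / 0.77351 - Math.pow(2, b - 1) / 0.77351;
    }

    public static void main(String[] args) {
        IncrementalFmAggregate agg = new IncrementalFmAggregate();
        agg.initial(500);        // initial measured signal level (arbitrary units)
        agg.update(500, 350);    // the signal drops, so only the delete vector grows
        System.out.println("approximate SUM after the update: " + agg.estimate());
    }
}
```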
5.2 Incremental Routing Protocol
We use the following incremental variants of the routing
protocols presented in Section 4.2 to support incremental
updates to previously computed aggregates.
[Figure 3: State diagram each device passes through as updates proceed in the system. States: Susceptible (initial state), Infectious, Recovered. Transitions: Susceptible → Infectious when an update is received or a local update occurs; Infectious → Recovered when the time-stamp expires; Recovered → Susceptible after clean up; additional updates received while Infectious are absorbed without leaving the state.]
• Incremental Gossip Protocol (IGP): When an
update occurs, the updated node initiates the gossiping
procedure. Other nodes only begin gossiping once they
receive the update. Therefore, nodes receiving the
update become active and continue communicating with
their neighbors until the update protocol terminates,
after O(log(N)) time steps.
• Incremental Random Walk Protocol (IRWP):
When an update (or updates) occur in the system,
instead of starting random walks at k random nodes in
the network, all k random walks are initiated from the
updated node(s). The rest of the protocol proceeds in
the same fashion as the standard random walk
protocol. The allocation of walks to updates is discussed
in more detail in [3], where the authors show that the
number of walks has an almost negligible impact on
network overhead.
6. PROTOCOL DETAILS
Using incremental routing protocols to disseminate
incremental FM aggregates is a natural fit for the problem of
coordinated spectrum sensing. Here we outline the
implementation of such techniques for a cognitive radio network.
We continue with the example from Section 5.1, where we
wish to perform coordination between a group of wireless
devices to compute the average signal level in a particular
frequency band.
Using either incremental random walk or incremental
gossip, each device proceeds through three phases, in order to
determine the global average signal level for a particular
frequency band. Figure 3 shows a state diagram of these
phases.
Susceptible: Each device starts in the susceptible state
and becomes infectious only when its locally measured signal
level changes, or if it receives an update message from a
neighboring device. If a local change is observed, the device
updates either the original or delete bit vector, as described
in Section 5.1, and moves into the infectious state. If it
receives an update message, it ORs the received original
and delete bit vectors with its local bit vectors and moves
into the infectious state.
Note, because signal level measurements may change
sporadically over time, a smoothing function, such as an
exponentially weighted moving average, should be applied to
these measurements.
Infectious: Once a device is infectious it continues to
send its up-to-date bit vectors, using either incremental
random walk or incremental gossip, to neighboring nodes. Due
to FM's order and duplicate insensitive (ODI) properties,
simultaneously occurring updates are handled seamlessly by
the protocol.
Update messages contain a time stamp indicating when
the update was generated, and each device maintains a
local time stamp of when it received the most recent update.
[Figure 4: Execution times of Incremental Protocols. Panels: (a) Incremental Gossip and Uniform Gossip on Clique; (b) Incremental Random Walk and Random Walk on Clique; (c) Incremental Random Walk and Random Walk on Power-Law Random Graph. Axes: Execution Time (ms) versus Number of Measured Signal Changes.]
[Figure 5: Network overhead of Incremental Protocols. Panels: (a) Incremental Gossip and Uniform Gossip on Clique; (b) Incremental Random Walk and Random Walk on Clique; (c) Incremental Random Walk and Random Walk on Power-Law Random Graph. Axes: Overhead Improvement Ratio (normalized to Uniform Gossip in (a) and to Random Walk in (b) and (c)) versus Number of Measured Signal Changes.]
Using this information, a device moves into the recovered
state once enough time has passed for the most recent
update to have converged. As discussed in Section 4.2, this
happens after O(log(N)) time steps.
Recovered: A recovered device ceases to propagate any
update information. At this point, it performs clean-up and
prepares for the next infection by entering the susceptible
state. Once all devices have entered the recovered state, the
system will have converged, and with high probability, all
devices will have the up-to-date average signal level. Due
to the cumulative nature of FM, even if all devices have not
converged, the next update will include all previous updates.
Nevertheless, the probability that gossip fails to converge is
small, and has been shown to be O(1/N) [10].
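A minimal Java sketch of this per-device cycle is shown below; it is our own illustration, and the fixed convergenceSteps constant merely stands in for the O(log N) time-step bound from Section 4.2:

```java
// Illustrative state machine for the susceptible/infectious/recovered cycle
// of Figure 3; propagation of bit vectors while infectious is omitted.
public class DeviceUpdateStateMachine {
    enum State { SUSCEPTIBLE, INFECTIOUS, RECOVERED }

    private State state = State.SUSCEPTIBLE;   // initial state
    private long lastUpdateStep = -1;
    private final long convergenceSteps;       // stand-in for O(log N) time-steps

    DeviceUpdateStateMachine(long convergenceSteps) {
        this.convergenceSteps = convergenceSteps;
    }

    // A local signal change or a received update message infects the device.
    void onLocalChangeOrUpdate(long step) {
        state = State.INFECTIOUS;
        lastUpdateStep = step;                  // time stamp of the most recent update
    }

    // Called once per gossip time-step.
    void tick(long step) {
        switch (state) {
            case INFECTIOUS:
                // The device would gossip its bit vectors here; once enough time
                // has passed for the latest update to converge, it recovers.
                if (step - lastUpdateStep >= convergenceSteps) state = State.RECOVERED;
                break;
            case RECOVERED:
                // Clean up and become susceptible to the next infection.
                state = State.SUSCEPTIBLE;
                break;
            default:
                break;                          // SUSCEPTIBLE: nothing to do
        }
    }

    public static void main(String[] args) {
        DeviceUpdateStateMachine device = new DeviceUpdateStateMachine(5);
        device.onLocalChangeOrUpdate(0);
        for (long step = 1; step <= 7; step++) {
            device.tick(step);
            System.out.println("step " + step + ": " + device.state);
        }
    }
}
```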
For coordinated spectrum sensing, non-incremental
routing protocols can be implemented in a similar fashion.
Random walk would operate by having devices periodically drop
the aggregate and re-run the protocol. Each device would
perform a coin toss (biased on the number of walks) to
determine whether or not it is a designated node. This is
different from the protocol discussed above where only
updated nodes initiate random walks. Similar techniques can
be used to implement standard gossip.
7. EVALUATION
We now provide a preliminary evaluation of GUESS in
simulation. A more detailed evaluation of this approach can
be found in [3]. Here we focus on how incremental
extensions to gossip protocols can lead to further improvements
over standard gossiping techniques, for the problem of
coordinated spectrum sensing.
Simulation Setup: We implemented a custom
simulator in C++. We study the improvements of our
incremental gossip protocols over standard gossiping in two
dimensions: execution time and network overhead. We use two
topologies to represent device connectivity: a clique, to
eliminate the effects of the underlying topology on protocol
performance, and a BRITE-generated [13] power-law random
graph (PLRG), to illustrate how our results extend to more
realistic scenarios. We simulate a large deployment of 1,000
devices to analyze protocol scalability.
In our simulations, we compute the average signal level in
a particular band by disseminating FM bit vectors. In each
run of the simulation, we induce a change in the measured
signal at one or more devices. A run ends when the new
average signal level has converged in the network.
For each data point, we ran 100 simulations and 95%
confidence intervals (error bars) are shown.
Simulation Parameters: Each transmission involves
sending 70 bits of information to a neighboring node. To
compute the AVERAGE aggregate, four bit vectors need to
be transmitted: the original SUM vector, the SUM delete
vector, the original COUNT vector, and the COUNT delete
vector. Non-incremental protocols do not transmit the delete
vectors. Each transmission also includes a time stamp of
when the update was generated.
We assume nodes communicate on a common control
channel at 2 Mbps. Therefore, one time-step of protocol
execution corresponds to the time required for 1,000 nodes to
sequentially send 70 bits at 2 Mbps. Sequential use of the
control channel is a worst case for our protocols; in practice,
multiple control channels could be used in parallel to reduce
execution time. We also assume nodes are loosely time
synchronized, the implications of which are discussed further in
[3]. Finally, in order to isolate the effect of protocol
operation on performance, we do not model the complexities of
the wireless channel in our simulations.
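As a quick sanity check of these parameters (our own back-of-the-envelope calculation, not a figure quoted by the authors), 1,000 sequential 70-bit transmissions at 2 Mbps correspond to roughly 35 ms per simulated time-step:

```java
// Back-of-the-envelope length of one simulated time-step:
// 1,000 sequential transmissions of 70 bits on a 2 Mbps control channel.
public class TimeStepLength {
    public static void main(String[] args) {
        int nodes = 1000;
        int bitsPerTransmission = 70;
        double channelBitsPerSecond = 2_000_000.0;
        double seconds = nodes * bitsPerTransmission / channelBitsPerSecond;
        System.out.printf("one time-step ~ %.1f ms%n", seconds * 1000); // ~35.0 ms
    }
}
```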
Incremental Protocols Reduce Execution Time:
Figure 4(a) compares the performance of incremental gossip
(IGP) with uniform gossip on a clique topology. We observe
that both protocols have almost identical execution times.
This is expected as IGP operates in a similar fashion to
uniform gossip, taking O(log(N)) time-steps to converge.
Figure 4(b) compares the execution times of
incremental random walk (IRWP) and standard random walk on a
clique. IRWP reduces execution time by a factor of 2.7 for a
small number of measured signal changes. Although random
walk and IRWP both use k random walks (in our simulations
k = number of nodes), IRWP initiates walks only from
updated nodes (as explained in Section 5.2), resulting in faster
information convergence. These improvements carry over to
a PLRG topology as well (as shown in Figure 4(c)), where
IRWP is 1.33 times faster than random walk.
Incremental Protocols Reduce Network Overhead:
Figure 5(a) shows the ratio of data transmitted using
uniform gossip relative to incremental gossip on a clique. For
a small number of signal changes, incremental gossip incurs
2.4 times less overhead than uniform gossip. This is because
in the early steps of protocol execution, only devices which
detect signal changes communicate. As more signal changes
are introduced into the system, gossip and incremental
gossip incur approximately the same overhead.
Similarly, incremental random walk (IRWP) incurs much
less overhead than standard random walk. Figure 5(b) shows
a 2.7 fold reduction in overhead for small numbers of
signal changes on a clique. Although each protocol uses the
same number of random walks, IRWP uses fewer network
resources than random walk because it takes less time to
converge. This improvement also holds true on more
complex PLRG topologies (as shown in Figure 5(c)), where we
observe a 33% reduction in network overhead.
From these results it is clear that incremental techniques
yield significant improvements over standard approaches to
gossip, even on complex topologies. Because spectrum
utilization is characterized by incremental changes to usage,
incremental protocols are ideally suited to solve this
problem in an efficient and cost effective manner.
8. DISCUSSION AND FUTURE WORK
We have only just scratched the surface in addressing the
problem of coordinated spectrum sensing using incremental
gossiping. Next, we outline some open areas of research.
Spatial Decay: Devices performing coordinated sensing
are primarily interested in the spectrum usage of their local
neighborhood. Therefore, we recommend the use of
spatially decaying aggregates [6], which limits the impact of an
update on more distant nodes. Spatially decaying
aggregates work by successively reducing (by means of a decay
function) the value of the update as it propagates further
from its origin. One challenge with this approach is that
propagation distance cannot be determined ahead of time
and more importantly, exhibits spatio-temporal variations.
Therefore, finding the optimal decay function is non-trivial,
and an interesting subject of future work.
Significance Threshold: RF spectrum bands
continually experience small-scale changes which may not
necessarily be significant. Deciding if a change is significant can be
done using a significance threshold β, below which any
observed change is not propagated by the node. Choosing an
appropriate operating value for β is application dependent,
and explored further in [3].
Weighted Readings: Although we argued that most
devices will likely be equipped with low-cost sensing
equipment, there may be situations where there are some special
infrastructure nodes that have better sensing abilities than
others. Weighting their measurements more heavily could
be used to maintain a higher degree of accuracy.
Determining how to assign such weights is an open area of research.
Implementation Specifics: Finally, implementing
gossip for coordinated spectrum sensing is also open. If
implemented at the MAC layer, it may be feasible to piggy-back
gossip messages over existing management frames (e.g.
networking advertisement messages). As well, we also require
the use of a control channel to disseminate sensing
information. There are a variety of alternatives for
implementing such a channel, some of which are outlined in [4]. The
trade-offs of different approaches to implementing GUESS
are a subject of future work.
9. CONCLUSION
Spectrum sensing is a key requirement for dynamic
spectrum allocation in cognitive radio networks. The nature of
the RF environment necessitates coordination between
cognitive radio devices. We propose GUESS, an approximate
yet low overhead approach to perform efficient coordination
between cognitive radios. The fundamental contributions of
GUESS are: (1) an FM aggregation scheme for efficient
in-network aggregation, (2) a randomized gossiping approach
which provides exponentially fast convergence and
robustness to network alterations, and (3) incremental variations
of FM and gossip which we show can reduce the
communication time by up to a factor of 2.7 and reduce network
overhead by up to a factor of 2.4. Our preliminary
simulation results showcase the benefits of this approach and we
also outline a set of open problems that make this a new
and exciting area of research.
10. REFERENCES
[1] Unlicensed Operation in the TV Broadcast Bands and
Additional Spectrum for Unlicensed Devices Below 900 MHz in
the 3 GHz band, May 2004. Notice of Proposed Rule-Making
04-186, Federal Communications Commission.
[2] In-Stat: Covering the Full Spectrum of Digital Communications
Market Research, from Vendor to End-user, December 2005.
http://www.in-stat.com/catalog/scatalogue.asp?id=28.
[3] N. Ahmed, D. Hadaller, and S. Keshav. Incremental
Maintenance of Global Aggregates. UW. Technical Report
CS-2006-19, University of Waterloo, ON, Canada, 2006.
[4] R. W. Brodersen, A. Wolisz, D. Cabric, S. M. Mishra, and
D. Willkomm. CORVUS: A Cognitive Radio Approach for
Usage of Virtual Unlicensed Spectrum. Technical report, July
2004.
[5] D. Cabric, S. M. Mishra, and R. W. Brodersen. Implementation
Issues in Spectrum Sensing for Cognitive Radios. In Asilomar
Conference, 2004.
[6] E. Cohen and H. Kaplan. Spatially-Decaying Aggregation Over
a Network: Model and Algorithms. In Proceedings of SIGMOD
2004, pages 707-718, New York, NY, USA, 2004. ACM Press.
[7] P. Flajolet and G. N. Martin. Probabilistic Counting
Algorithms for Data Base Applications. J. Comput. Syst. Sci.,
31(2):182-209, 1985.
[8] C. Gkantsidis, M. Mihail, and A. Saberi. Random Walks in
Peer-to-Peer Networks. In Proceedings of INFOCOM 2004,
pages 1229-1240, 2004.
[9] E. Griffith. Previewing Intel's Cognitive Radio Chip, June 2005.
http://www.internetnews.com/wireless/article.php/3513721.
[10] D. Kempe, A. Dobra, and J. Gehrke. Gossip-Based
Computation of Aggregate Information. In FOCS 2003, page
482, Washington, DC, USA, 2003. IEEE Computer Society.
[11] X. Liu and S. Shankar. Sensing-based Opportunistic Channel
Access. In ACM Mobile Networks and Applications
(MONET) Journal, March 2005.
[12] Q. Lv, P. Cao, E. Cohen, K. Li, and S. Shenker. Search and
Replication in Unstructured Peer-to-Peer Networks. In
Proceedings of ICS, 2002.
[13] A. Medina, A. Lakhina, I. Matta, and J. Byers. BRITE: an
Approach to Universal Topology Generation. In Proceedings of
MASCOTS conference, Aug. 2001.
[14] S. M. Mishra, A. Sahai, and R. W. Brodersen. Cooperative
Sensing among Cognitive Radios. In ICC 2006, June 2006.
[15] S. Nath, P. B. Gibbons, S. Seshan, and Z. R. Anderson.
Synopsis Diffusion for Robust Aggregation in Sensor Networks.
In Proceedings of SenSys 2004, pages 250-262, 2004.
[16] A. Sahai, N. Hoven, S. M. Mishra, and R. Tandra. Fundamental
Tradeoffs in Robust Spectrum Sensing for Opportunistic
Frequency Reuse. Technical Report UC Berkeley, 2006.
[17] J. Zhao, H. Zheng, and G.-H. Yang. Distributed Coordination
in Dynamic Spectrum Allocation Networks. In Proceedings of
DySPAN 2005, Baltimore (MD), Nov. 2005.
| coordinate spectrum sense;coordinated sensing;cognitive radio;spatially decaying aggregate;rf interference;spectrum allocation;opportunistic spectrum sharing;innetwork aggregation;incremental algorithm;rf spectrum;spectrum sensing;incremental gossip protocol;fm aggregation;gossip protocol |
train_C-74 | Adapting Asynchronous Messaging Middleware to ad-hoc Networking | The characteristics of mobile environments, with the possibility of frequent disconnections and fluctuating bandwidth, have forced a rethink of traditional middleware. In particular, the synchronous communication paradigms often employed in standard middleware do not appear to be particularly suited to ad-hoc environments, in which not even the intermittent availability of a backbone network can be assumed. Instead, asynchronous communication seems to be a generally more suitable paradigm for such environments. Message oriented middleware for traditional systems has been developed and used to provide an asynchronous paradigm of communication for distributed systems, and, recently, also for some specific mobile computing systems. In this paper, we present our experience in designing, implementing and evaluating EMMA (Epidemic Messaging Middleware for ad-hoc networks), an adaptation of Java Message Service (JMS) for mobile ad-hoc environments. We discuss in detail the design challenges and some possible solutions, showing a concrete example of the feasibility and suitability of the application of the asynchronous paradigm in this setting and outlining a research roadmap for the coming years. | 1. INTRODUCTION
With the increasing popularity of mobile devices and their
widespread adoption, there is a clear need to allow the
development of a broad spectrum of applications that operate
effectively over such an environment. Unfortunately, this is far
from simple: mobile devices are increasingly heterogeneous
in terms of processing capabilities, memory size, battery
capacity, and network interfaces. Each such configuration has
substantially different characteristics that are both statically
different - for example, there is a major difference in
capability between a Berkeley mote and an 802.11g-equipped
laptop - and that vary dynamically, as in situations of
fluctuating bandwidth and intermittent connectivity. Mobile ad
hoc environments have an additional element of complexity
in that they are entirely decentralised.
In order to craft applications for such complex
environments, an appropriate form of middleware is essential if cost
effective development is to be achieved. In this paper, we
examine one of the foundational aspects of middleware for
mobile ad-hoc environments: that of the communication
primitives.
Traditionally, the most frequently used middleware
primitives for communication assume the simultaneous presence
of both end points on a network, since the stability and
pervasiveness of the networking infrastructure is not an
unreasonable assumption for most wired environments. In other
words, most communication paradigms are synchronous:
object oriented middleware such as CORBA and Java RMI are
typical examples of middleware based on synchronous
communication.
In recent years, there has been growing interest in
platforms based on asynchronous communication paradigms, such
as publish-subscribe systems [6]: these have been exploited
very successfully where there is application level
asynchronicity. From a Gartner Market Report [7]: Given
message-oriented middleware's (MOM) popularity, scalability,
flexibility, and affinity with mobile and wireless architectures,
by 2004, MOM will emerge as the dominant form of
communication middleware for linking mobile and enterprise
applications (0.7 probability).... Moreover, in mobile ad-hoc
systems, the likelihood of network fragmentation means that
synchronous communication may in any case be
impracticable, giving situations in which delay tolerant asynchronous
traffic is the only form of traffic that could be supported.
Middleware for mobile ad-hoc environments must therefore
support semi-synchronous or completely asynchronous
communication primitives if it is to avoid substantial
limitations to its utility. Aside from the intellectual challenge in
supporting this model, this work is also interesting because
there are a number of practical application domains, such as allowing inter-community communication in undeveloped areas of the globe. Thus, for example, projects have been carried out to help populations that live in remote places of the globe, such as Lapland [3], or in poor areas that lack fixed connectivity infrastructure [9].
There have been attempts to provide mobile middleware
with these properties, including STEAM, LIME,
XMIDDLE, Bayou (see [11] for a more complete review of mobile
middleware). These models differ quite considerably from
the existing traditional middleware in terms of primitives
provided. Furthermore, some of them fail to provide a solution for truly ad-hoc scenarios.
If the projected success of MOM becomes anything like
a reality, there will be many programmers with experience
of it. The ideal solution to the problem of middleware for
ad-hoc systems is, then, to allow programmers to utilise the
same paradigms and models presented by common forms of
MOM and to ensure that these paradigms are supportable
within the mobile environment. This approach has clear
advantages in allowing applications developed on standard
middleware platforms to be easily deployed on mobile
devices. Indeed, some research has already led to the
adaptation of traditional middleware platforms to mobile settings,
mainly to provide integration between mobile devices and
existing fixed networks in a nomadic (i.e., mixed)
environment [4]. With respect to message oriented middleware, the
current implementations, however, either assume the
existence of a backbone network to which the mobile hosts
connect from time to time while roaming [10], or assume that
nodes are always somehow reachable through a path [18].
No adaptation to heterogeneous or completely ad-hoc
scenarios, with frequent disconnection and periodically isolated
clouds of hosts, has been attempted.
In the remainder of this paper we describe an initial
attempt to adapt message oriented middleware to suit mobile
and, more specifically, mobile ad-hoc networks. In our case,
we elected to examine JMS, as one of the most widely known
MOM systems. In the latter part of this paper, we explore
the limitations of our results and describe the plans we have
to take the work further.
2. MESSAGE ORIENTED MIDDLEWARE
AND JAVA MESSAGE SERVICE (JMS)
Message-oriented middleware systems support
communication between distributed components via message-passing:
the sender sends a message to identified queues, which
usually reside on a server. A receiver retrieves the message from
the queue at a different time and may acknowledge the reply
using the same asynchronous mechanism. Message-oriented
middleware thus supports asynchronous communication in
a very natural way, achieving de-coupling of senders and
receivers. A sender is able to continue processing as soon
as the middleware has accepted the message; eventually,
the receiver will send an acknowledgment message and the
sender will be able to collect it at a convenient time.
However, given the way they are implemented, these middleware
systems usually require resource-rich devices, especially in
terms of memory and disk space, where persistent queues
of messages that have been received but not yet processed,
are stored. Sun Java Message Service [5], IBM WebSphere
MQ [6], Microsoft MSMQ [12] are examples of very
successful message-oriented middleware for traditional distributed
systems.
The Java Messaging Service (JMS) is a collection of
interfaces for asynchronous communication between distributed
components. It provides a common way for Java programs
to create, send and receive messages. JMS users are usually
referred to as clients. The JMS specification further defines
providers as the components in charge of implementing the
messaging system and providing the administrative and
control functionality (i.e., persistence and reliability) required
by the system. Clients can send and receive messages,
asynchronously, through the JMS provider, which is in charge of
the delivery and, possibly, of the persistence of the messages.
There are two types of communication supported: point
to point and publish-subscribe models. In the point to point
model, hosts send messages to queues. Receivers can be
registered with some specific queues, and can asynchronously
retrieve the messages and then acknowledge them. The
publish-subscribe model is based on the use of topics that
can be subscribed to by clients. Messages are sent to topics
by other clients and are then received in an asynchronous
mode by all the subscribed clients. Clients learn about the
available topics and queues through Java Naming and
Directory Interface (JNDI) [14]. Queues and topics are created
by an administrator on the provider and are registered with
the JNDI interface for look-up.
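For readers unfamiliar with the API, the sketch below shows the two messaging styles using plain JMS 1.1 calls. It assumes a running JMS provider and JNDI entries named "ConnectionFactory", "exampleQueue" and "exampleTopic" (illustrative names whose configuration is provider-dependent), so it is a generic example rather than code from any particular mobile adaptation:

```java
import javax.jms.*;
import javax.naming.InitialContext;

// Minimal JMS 1.1 client sketch covering both messaging styles. It requires a
// JMS provider on the classpath and provider-specific JNDI configuration; the
// destination names used here are illustrative.
public class JmsStylesSketch {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("ConnectionFactory");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        connection.start();

        // Point to point: send to a queue; a registered receiver retrieves it later.
        Queue queue = (Queue) jndi.lookup("exampleQueue");
        MessageProducer queueProducer = session.createProducer(queue);
        queueProducer.send(session.createTextMessage("point-to-point message"));

        // Publish-subscribe: publish to a topic; all subscribed clients receive a copy.
        Topic topic = (Topic) jndi.lookup("exampleTopic");
        MessageProducer topicPublisher = session.createProducer(topic);
        topicPublisher.send(session.createTextMessage("notification for subscribers"));

        // Asynchronous consumption from the queue via a message listener.
        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(message -> {
            try {
                System.out.println("received: " + ((TextMessage) message).getText());
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
        Thread.sleep(1000);   // give the listener a moment before shutting down
        connection.close();
    }
}
```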
In the next section, we introduce the challenges of mobile
networks, and show how JMS can be adapted to cope with
these requirements.
3. JMS FOR MOBILE COMPUTING
Mobile networks vary very widely in their characteristics,
from nomadic networks in which nodes relocate whilst offline through to ad-hoc networks in which nodes move freely
and in which there is no infrastructure. Mobile ad-hoc
networks are most generally applicable in situations where
survivability and instant deployability are key: most notably in
military applications and disaster relief. In between these
two types of "mobile" networks, there are, however, a number
of possible heterogeneous combinations, where nomadic and
ad-hoc paradigms are used to interconnect totally unwired
areas to more structured networks (such as a LAN or the
Internet).
Whilst the JMS specification has been extensively
implemented and used in traditional distributed systems,
adaptations for mobile environments have been proposed only
recently. The challenges of porting JMS to mobile settings
are considerable; however, in view of its widespread
acceptance and use, there are considerable advantages in allowing
the adaptation of existing applications to mobile
environments and in allowing the interoperation of applications in
the wired and wireless regions of a network.
In [10], JMS was adapted to a nomadic mobile setting,
where mobile hosts can be JMS clients and communicate
through the JMS provider that, however, sits on a
backbone network, providing reliability and persistence. The
client prototype presented in [10] is very lightweight, due
to the delegation of all the heavyweight functionality to the
provider on the wired network. However, this approach is
somewhat limited in terms of widespread applicability and
scalability as a consequence of the concentration of
functionality in the wired portion of the network.
If JMS is to be adapted to completely ad-hoc
environments, where no fixed infrastructure is available, and where
nodes change location and status very dynamically, more
issues must be taken into consideration. Firstly, discovery
needs to use a resilient but distributed model: in this
extremely dynamic environment, static solutions are
unacceptable. As discussed in Section 2, a JMS administrator defines
queues and topics on the provider. Clients can then learn
about them using the Java Naming and Directory Interface
(JNDI). However, due to the way JNDI is designed, a JNDI
node (or more than one) needs to be in reach in order to
obtain a binding of a name to an address (i.e., knowing where
a specific queue/topic is). In mobile ad-hoc environments,
the discovery process cannot assume the existence of a fixed
set of discovery servers that are always reachable, as this
would not match the dynamicity of ad-hoc networks.
Secondly, a JMS Provider, as suggested by the JMS
specification, also needs to be reachable by each node in the
network, in order to communicate. This assumes a very
centralised architecture, which again does not match the
requirements of a mobile ad-hoc setting, in which nodes may
be moving and sparse: a more distributed and dynamic
solution is needed. Persistence is, however, essential
functionality in asynchronous communication environments as hosts
are, by definition, connected at different times.
In the following section, we will discuss our experience
in designing and implementing JMS for mobile ad-hoc
networks.
4. JMSFOR MOBILE ad-hoc NETWORKS
4.1 Adaptation of JMS for Mobile ad-hoc
Networks
Developing applications for mobile networks is yet more
challenging: in addition to the same considerations as for
infrastructured wireless environments, such as the limited
device capabilities and power constraints, there are issues
of rate of change of network connectivity, and the lack of a
static routing infrastructure. Consequently, we now describe
an initial attempt to adapt the JMS specification to target
the particular requirements related to ad-hoc scenarios. As
discussed in Section 3, a JMS application can use either the
point to point and the publish-subscribe styles of messaging.
Point to Point Model The point to point model is based
on the concept of queues, that are used to enable
asynchronous communication between the producer of a message
and possible different consumers. In our solution, the
location of queues is determined by a negotiation process that
is application dependent. For example, let us suppose that
it is possible to know a priori, or it is possible to determine
dynamically, that a certain host is the receiver of most of the messages sent to a particular queue. In this case, the
optimum location of the queue may well be on this
particular host. In general, it is worth noting that, according to the
JMS specification and suggested design patterns, it is
common and preferable for a client to have all of its messages
delivered to a single queue.
Queues are advertised periodically to the hosts that are
within transmission range or that are reachable by means of
the underlying synchronous communication protocol, if
provided. It is important to note that, at the middleware level,
it is logically irrelevant whether or not the network layer
implements some form of ad-hoc routing (though considerably
more efficient if it does); the middleware only considers
information about which nodes are actively reachable at any
point in time. The hosts that receive advertisement
messages add entries to their JNDI registry. Each entry is
characterized by a lease (a mechanism similar to that present
in Jini [15]). A lease represents the time of validity of a
particular entry. If a lease is not refreshed (i.e., its life is
not extended), it can expire and, consequently, the entry
is deleted from the registry. In other words, the host
assumes that the queue will be unreachable from that point
in time. This may be caused, for example, if a host storing
the queue becomes unreachable. A host that initiates a
discovery process will find the topics and the queues present
in its connected portion of the network in a straightforward
manner.
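A minimal sketch of such a lease-based registry is shown below; class names, method names and lease durations are our own illustrative assumptions rather than part of the EMMA implementation:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a lease-based registry for advertised queues/topics: an entry that
// is not refreshed before its lease expires is purged, and the host then
// assumes the corresponding destination is unreachable.
public class LeaseRegistry {
    private static final class Entry {
        final String holderAddress;
        volatile long expiresAtMillis;
        Entry(String holderAddress, long expiresAtMillis) {
            this.holderAddress = holderAddress;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> entries = new ConcurrentHashMap<>();

    // Called whenever an advertisement for a queue/topic is received.
    public void advertise(String destination, String holderAddress, long leaseMillis) {
        long expiry = System.currentTimeMillis() + leaseMillis;
        Entry existing = entries.get(destination);
        if (existing != null && existing.holderAddress.equals(holderAddress)) {
            existing.expiresAtMillis = expiry;                 // refresh the lease
        } else {
            entries.put(destination, new Entry(holderAddress, expiry));
        }
    }

    // Periodically drop entries whose lease has expired.
    public void purgeExpired() {
        long now = System.currentTimeMillis();
        for (Iterator<Entry> it = entries.values().iterator(); it.hasNext();) {
            if (it.next().expiresAtMillis < now) it.remove();
        }
    }

    // Discovery of a destination within the connected portion of the network.
    public String lookup(String destination) {
        Entry e = entries.get(destination);
        return e == null ? null : e.holderAddress;
    }

    public static void main(String[] args) throws InterruptedException {
        LeaseRegistry registry = new LeaseRegistry();
        registry.advertise("ordersQueue", "host-17", 200);     // 200 ms lease
        System.out.println("before expiry: " + registry.lookup("ordersQueue"));
        Thread.sleep(300);                                     // lease is never refreshed
        registry.purgeExpired();
        System.out.println("after expiry:  " + registry.lookup("ordersQueue"));
    }
}
```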
In order to deliver a message to a host that is not
currently in reach, we use an asynchronous epidemic routing protocol that will be discussed in detail in Section 4.2. (In theory, it is not possible to send a message to a peer that has never been reachable in the past, since there can be no entry for it in the registry; to overcome this possible limitation, we provide a primitive through which information can be added to the registry without using the normal channels.) If two
hosts are in the same cloud (i.e., a connected path exists
between them), but no synchronous protocol is available, the
messages are sent using the epidemic protocol. In this case,
the delivery latency will be low as a result of the rapidity of
propagation of the infection in the connected cloud (see also
the simulation results in Section 5). Given the existence of
an epidemic protocol, the discovery mechanism consists of
advertising the queues to the hosts that are currently
unreachable using analogous mechanisms.
Publish-Subscribe Model In the publish-subscribe model,
some of the hosts are similarly designated to hold topics and
store subscriptions, as before. Topics are advertised through
the registry in the same way as are queues, and a client
wishing to subscribe to a topic must register with the client
holding the topic. When a client wishes to send a message
to the topic list, it sends it to the topic holder (in the same
way as it would send a message to a queue). The topic
holder then forwards the message to all subscribers, using
the synchronous protocol if possible, the epidemic protocol
otherwise. It is worth noting that we use a single message
with multiple recipients, instead of multiple messages with
multiple recipients. When a message is delivered to one of
the subscribers, this recipient is deleted from the list. In
order to delete the other possible replicas, we employ
acknowledgment messages (discussed in Section 4.4), returned
in the same way as a normal message.
We have also adapted the concepts of durable and non-durable subscriptions for ad-hoc settings. In fixed platforms,
durable subscriptions are maintained during the
disconnections of the clients, whether these are intentional or are the
result of failures. In traditional systems, while a durable
subscriber is disconnected from the server, it is responsible
for storing messages. When the durable subscriber
reconnects, the server sends it all unexpired messages. The problem is that, in our scenario, disconnections are the norm
rather than the exception. In other words, we cannot
consider disconnections as failures. For these reasons, we adopt
a slightly different semantics. With respect to durable
subscriptions, if a subscriber becomes disconnected,
notifications are not stored but are sent using the epidemic
protocol rather than the synchronous protocol. In other words,
durable notifications remain valid during the possible
disconnections of the subscriber.
On the other hand, if a non-durable subscriber becomes
disconnected, its subscription is deleted; in other words,
during disconnections, notifications are not sent using the
epidemic protocol but exploit only the synchronous protocol. If
the topic becomes accessible to this host again, it must make
another subscription in order to receive the notifications.
Unsubscription messages are delivered in the same way
as are subscription messages. It is important to note that
durable subscribers have explicitly to unsubscribe from a
topic in order to stop the notification process; however, all
durable subscriptions have a predefined expiration time in
order to cope with the cases of subscribers that do not meet
again because of their movements or failures. This feature
is clearly provided to limit the number of the unnecessary
messages sent around the network.
4.2 Message Delivery using Epidemic Routing
In this section, we examine one possible mechanism that
will allow the delivery of messages in a partially connected
network. The mechanism we discuss is intended for the
purposes of demonstrating feasibility; more efficient
communication mechanisms for this environment are themselves
complex, and are the subject of another paper [13].
The asynchronous message delivery described above is
based on a typical pure epidemic-style routing protocol [16].
A message that needs to be sent is replicated on each host in
reach. In this way, copies of the messages are quickly spread
through connected networks, like an infection. If a host
becomes connected to another cloud of mobile nodes, during
its movement, the message spreads through this collection
of hosts. Epidemic-style replication of data and messages
has been exploited in the past in many fields starting with
the distributed database systems area [2].
Within epidemic routing, each host maintains a buffer
containing the messages that it has created and the replicas
of the messages generated by the other hosts. To improve
the performance, a hash-table indexes the content of the
buffer. When two hosts connect, the host with the smaller
identifier initiates a so-called anti-entropy session, sending
a list containing the unique identifiers of the messages that
it currently stores. The other host evaluates this list and
sends back a list containing the identifiers it is storing that
are not present in the other host, together with the messages
that the other does not have. The host that has started the
session receives the list and, in the same way, sends the
messages that are not present in the other host. Should buffer
overflow occur, messages are dropped.
The reliability offered by this protocol is typically best
effort, since there is no guarantee that a message will
eventually be delivered to its recipient. Clearly, the delivery ratio
of the protocol increases proportionally to the maximum
allowed delay time and the buffer size in each host (interesting
simulation results may be found in [16]).
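The following Java sketch (ours; transport, hash-table indexing and buffer-overflow handling are omitted, and all names are assumptions) captures the essence of one anti-entropy session between the message buffers of two hosts:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of a single anti-entropy session: the hosts exchange the identifiers
// of the messages they store and then the messages missing on each side.
public class AntiEntropySketch {
    static final class Host {
        final int id;
        final Map<String, String> buffer = new HashMap<>();   // messageId -> payload
        Host(int id) { this.id = id; }
    }

    static void antiEntropy(Host a, Host b) {
        // The host with the smaller identifier initiates the session.
        Host initiator = a.id < b.id ? a : b;
        Host responder = initiator == a ? b : a;

        // 1. The initiator sends the identifiers of the messages it stores.
        Set<String> initiatorIds = new HashSet<>(initiator.buffer.keySet());

        // 2. The responder returns the messages the initiator does not have.
        for (Map.Entry<String, String> e : responder.buffer.entrySet()) {
            if (!initiatorIds.contains(e.getKey())) {
                initiator.buffer.put(e.getKey(), e.getValue());
            }
        }
        // 3. The initiator then sends the messages the responder is missing.
        for (Map.Entry<String, String> e : initiator.buffer.entrySet()) {
            responder.buffer.putIfAbsent(e.getKey(), e.getValue());
        }
    }

    public static void main(String[] args) {
        Host h1 = new Host(1);
        Host h2 = new Host(2);
        h1.buffer.put("m-1", "created on host 1");
        h2.buffer.put("m-2", "created on host 2");
        antiEntropy(h1, h2);
        System.out.println("host 1 now stores " + h1.buffer.keySet());
        System.out.println("host 2 now stores " + h2.buffer.keySet());
    }
}
```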
4.3 Adaptation of the JMS Message Model
In this section, we will analyse the aspects of our
adaptation of the specification related to the so-called JMS Message
Model [5]. According to this, JMS messages are
characterised by some properties defined using the header field,
which contains values that are used by both clients and
providers for their delivery. The aspects discussed in the
remainder of this section are valid for both models (point to
point and publish-subscribe).
A JMS message can be persistent or non-persistent.
According to the JMS specification, persistent messages must
be delivered with a higher degree of reliability than the
non-persistent ones. However, it is worth noting that it is not
possible to ensure once-and-only-once reliability for
persistent messages as defined in the specification, since, as we
discussed in the previous subsection, the underlying epidemic
protocol can guarantee only best-effort delivery. However,
clients maintain a list of the identifiers of the recently
received messages to avoid the delivery of message duplicates.
In other words, we provide the applications with
at-most-once reliability for both types of messages.
In order to implement different levels of reliability, EMMA
treats persistent and non-persistent messages differently,
during the execution of the anti-entropy epidemic protocol. Since
the message buffer space is limited, persistent messages are
preferentially replicated using the available free space. If
this is insufficient and non-persistent messages are present
in the buffer, these are replaced. Only the successful
deliveries of the persistent messages are notified to the senders.
According to the JMS specification, it is possible to assign
a priority to each message. The messages with higher
priorities are delivered in a preferential way. As discussed above,
persistent messages are prioritised above the non-persistent
ones. Further selection is based on their priorities. Messages
with higher priorities are treated in a preferential way. In
fact, if there is not enough space to replicate all the
persistent messages, a mechanism based on priorities is used to
delete and replicate non-persistent messages (and, if
necessary, persistent messages).
Messages are deleted from the buffers using the expiration
time value that can be set by senders. This is a way to free
space in the buffers (one preferentially deletes older
messages in cases of conflict); to eliminate stale replicas in the
system; and to limit the time for which destinations must
hold message identifiers to dispose of duplicates.
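One possible realisation of this buffer-admission policy is sketched below in Java; the exact tie-breaking order and all names are our own assumptions, not EMMA's actual data structures:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Sketch of buffer admission during replication: expired messages are dropped,
// and when the buffer is full the weakest stored message (non-persistent before
// persistent, then lowest priority, then earliest expiration) is evicted if the
// incoming message outranks it.
public class ReplicationBufferSketch {
    static final class Msg {
        final String id;
        final boolean persistent;
        final int priority;              // 0..9, as in JMS
        final long expiresAtMillis;
        Msg(String id, boolean persistent, int priority, long expiresAtMillis) {
            this.id = id; this.persistent = persistent;
            this.priority = priority; this.expiresAtMillis = expiresAtMillis;
        }
    }

    // Lower rank = evicted first.
    private static final Comparator<Msg> RANK =
            Comparator.comparing((Msg m) -> m.persistent)
                      .thenComparingInt(m -> m.priority)
                      .thenComparingLong(m -> m.expiresAtMillis);

    private final int capacity;
    private final PriorityQueue<Msg> buffer = new PriorityQueue<>(RANK);

    ReplicationBufferSketch(int capacity) { this.capacity = capacity; }

    boolean offer(Msg incoming, long now) {
        buffer.removeIf(m -> m.expiresAtMillis <= now);        // purge stale replicas
        if (incoming.expiresAtMillis <= now) return false;
        if (buffer.size() < capacity) return buffer.add(incoming);
        Msg weakest = buffer.peek();                           // eviction candidate
        if (RANK.compare(incoming, weakest) > 0) {             // incoming outranks it
            buffer.poll();
            return buffer.add(incoming);
        }
        return false;                                          // incoming is dropped
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        ReplicationBufferSketch buf = new ReplicationBufferSketch(2);
        buf.offer(new Msg("np-low", false, 1, now + 60_000), now);
        buf.offer(new Msg("np-high", false, 8, now + 60_000), now);
        boolean stored = buf.offer(new Msg("persistent", true, 4, now + 60_000), now);
        System.out.println("persistent message admitted by evicting np-low: " + stored);
    }
}
```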
4.4 Reliability and Acknowledgment
Mechanisms
As already discussed, at-most-once message delivery is the
best that can be achieved in terms of delivery semantics in
partially connected ad-hoc settings. However, it is
possible to improve the reliability of the system with efficient
acknowledgment mechanisms. EMMA provides a
mechanism for failure notification to applications if the
acknowledgment is not received within a given timeout (that can
be configured by application developers). This mechanism
is the one that distinguishes the delivery of persistent and
non-persistent messages in our JMS implementation: the
deliveries of the former are notified to the senders, whereas
the latter are not.
We use acknowledgment messages not only to inform senders
about the successful delivery of messages but also to delete
the replicas of the delivered messages that are still present
in the network. Each host maintains a list of the messages
successfully delivered that is updated as part of the normal
process of information exchange between the hosts. The lists
are exchanged during the first steps of the anti-entropic
epidemic protocol with a certain predefined frequency. In the
case of messages with multiple recipients, a list of the actual
recipients is also stored. When a host receives the list, it
checks its message buffer and updates it according to the
following rules: (1) if a message has a single recipient and
it has been delivered, it is deleted from the buffer; (2) if a
message has multiple recipients, the identifiers of the
delivered hosts are deleted from the associated list of recipients.
If the resulting length of the list of recipients is zero, the
message is deleted from the buffer.
These lists have, clearly, finite dimensions and are
implemented as circular queues. This simple mechanism, together
with the use of expiration timestamps, guarantees that the
old acknowledgment notifications are deleted from the
system after a limited period of time.
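The two clean-up rules can be summarised by the following Java sketch (ours; data structures and names are illustrative assumptions):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the buffer clean-up applied when a list of delivered messages is
// received: (1) a delivered single-recipient message is removed from the
// buffer; (2) for a multi-recipient message, delivered recipients are removed
// and the message is dropped once no recipients remain.
public class AckCleanupSketch {
    // messageId -> recipients that still have to receive the message
    private final Map<String, Set<String>> buffer = new HashMap<>();

    void store(String messageId, Set<String> recipients) {
        buffer.put(messageId, new HashSet<>(recipients));
    }

    // deliveredList: messageId -> recipients known to have received it
    void onDeliveredListReceived(Map<String, Set<String>> deliveredList) {
        for (Map.Entry<String, Set<String>> ack : deliveredList.entrySet()) {
            Set<String> remaining = buffer.get(ack.getKey());
            if (remaining == null) continue;                      // replica not stored here
            remaining.removeAll(ack.getValue());                  // rule (2)
            if (remaining.isEmpty()) buffer.remove(ack.getKey()); // rules (1) and (2)
        }
    }

    public static void main(String[] args) {
        AckCleanupSketch host = new AckCleanupSketch();
        host.store("m-single", Set.of("hostA"));
        host.store("m-multi", Set.of("hostA", "hostB"));
        host.onDeliveredListReceived(Map.of(
                "m-single", Set.of("hostA"),
                "m-multi", Set.of("hostA")));
        System.out.println("still buffered: " + host.buffer.keySet());  // [m-multi]
    }
}
```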
In order to improve the reliability of EMMA, a mechanism for intelligent replication of queues and topics, based on context information, could be designed. However, this is not yet part of the current EMMA architecture.
5. IMPLEMENTATION AND PRELIMINARY
EVALUATION
We implemented a prototype of our platform using the
J2ME Personal Profile. The size of the executable is about
250KB including the JMS 1.1 jar file; this is a perfectly
acceptable figure given the available memory of the current
mobile devices on the market. We tested our prototype on
HP IPaq PDAs running Linux, interconnected with
WaveLan, and on a number of laptops with the same network
interface.
We also evaluated the middleware platform using the
OMNET++ discrete event simulator [17] in order to explore a
range of mobile scenarios that incorporated a more realistic
number of hosts than was achievable experimentally. More
specifically, we assessed the performance of the system in
terms of delivery ratio and average delay, varying the
density of population and the buffer size, and using persistent
and non-persistent messages with different priorities.
The simulation results show that EMMA's
performance, in terms of delivery ratio and delay of persistent
messages with higher priorities, is good. In general, it is
evident that the delivery ratio is strongly related to the
correct dimensioning of the buffers to the maximum acceptable
delay. Moreover, the epidemic algorithms are able to
guarantee a high delivery ratio if one evaluates performance over
a time interval sufficient for the dissemination of the replicas
of messages (i.e., the infection spreading) in a large portion
of the ad-hoc network.
One consequence of the dimensioning problem is that
scalability may be seriously impacted in peer-to-peer
middleware for mobile computing due to the resource poverty of
the devices (limited memory to store temporarily messages)
and the number of possible interconnections in ad-hoc
settings. What is worse is that common forms of commercial
and social organisation (six degrees of separation) mean that
even modest TTL values on messages will lead to widespread
flooding of epidemic messages. This problem arises because
of the lack of intelligence in the epidemic protocol, and can
be addressed by selecting carrier nodes for messages with
greater care. The details of this process are, however,
outside the scope of this paper (but may be found in [13]) and do
not affect the foundation on which the EMMA middleware
is based: the ability to deliver messages asynchronously.
6. CRITICAL VIEW OF THE STATE OF
THE ART
The design of middleware platforms for mobile
computing requires researchers to answer new and fundamentally
different questions; simply assuming the presence of wired
portions of the network on which centralised functionality
can reside is not generalisable. Thus, it is necessary to
investigate novel design principles and to devise architectural
patterns that differ from those traditionally exploited in the
design of middleware for fixed systems.
As an example, consider the recent cross-layering trend in
ad-hoc networking [1]. This is a way of re-thinking software
systems design, explicitly abandoning the classical forms of
layering, since, although this separation of concerns affords
portability, it does so at the expense of potential efficiency
gains. We believe that it is possible to view our approach
as an instance of cross-layering. In fact, we have added the
epidemic network protocol at middleware level and, at the
same time, we have used the existing synchronous network
protocol if present both in delivering messages (traditional
layering) and in informing the middleware about when
messages may be delivered by revealing details of the forwarding
tables (layer violation). For this reason, we prefer to
consider them jointly as the communication layer of our
platform together providing more efficient message delivery.
Another interesting aspect is the exploitation of context
and system information to improve the performance of
mobile middleware platforms. Again, as a result of adopting
a cross-layering methodology, we are able to build systems
that gather information from the underlying operating
system and communication components in order to allow for
adaptation of behaviour. We can summarise this conceptual
design approach by saying that middleware platforms must
be not only context-aware (i.e., they should be able to
extract and analyse information from the surrounding context)
but also system-aware (i.e., they should be able to gather
information from the software and hardware components of
the mobile system).
A number of middleware systems have been developed to
support ad-hoc networking with the use of asynchronous
communication (such as LIME, XMIDDLE, STEAM [11]).
In particular, the STEAM platform is an interesting
example of event-based middleware for ad-hoc networks,
providing location-aware message delivery and an effective solution
for event filtering.
A discussion of JMS, and its mobile realisation, has
already been conducted in Sections 4 and 2. The Swiss
company Softwired has developed the first JMS middleware for
mobile computing, called iBus Mobile [10]. The main
components of this typically infrastructure-based architecture
are the JMS provider, the so-called mobile JMS gateway,
which is deployed on a fixed host and a lightweight JMS
client library. The gateway is used for the communication
between the application server and mobile hosts. The
gateway is seen by the JMS provider as a normal JMS client. The
JMS provider can be any JMS-enabled application server,
such as BEA Weblogic. Pronto [19] is an example of
middleware system based on messaging that is specifically
designed for mobile environments. The platform is composed
of three classes of components: mobile clients implementing
the JMS specification, gateways that control traffic,
guaranteeing efficiency and possible user customizations using
different plug-ins and JMS servers. Different configurations
of these components are possible; with respect to mobile ad
hoc networks applications, the most interesting is
Serverless JMS. The aim of this configuration is to adapt JMS
to a decentralized model. The publish-subscribe model
exploits the efficiency and the scalability of the underlying IP
multicast protocol. Unreliable and reliable message delivery
services are provided: reliability is provided through a
negative acknowledgment-based protocol. Pronto represents a
good solution for infrastructure-based mobile networks but
it does not adequately target ad-hoc settings, since mobile
nodes rely on fixed servers for the exchange of messages.
Other MOM implemented for mobile environments exist;
however, they are usually straightforward extensions of
existing middleware [8]. The only implementation of MOM
specifically designed for mobile ad-hoc networks was
developed at the University of Newcastle [18]. This work is again
a JMS adaptation; the focus of that implementation is on
group communication and the use of application level
routing algorithms for topic delivery of messages. However, there
are a number of differences in the focus of our work. The
importance that we attribute to disconnections makes
persistence a vital requirement for any middleware that needs
to be used in mobile ad-hoc networks. The authors of [18]
signal persistence as possible future work, not considering
the fact that routing a message to a non-connected host will
result in delivery failure. This is a remarkable limitation in
mobile settings where unpredictable disconnections are the
norm rather than the exception.
7. ROADMAP AND CONCLUSIONS
Asynchronous communication is a useful communication
paradigm for mobile ad-hoc networks, as hosts are allowed to
come, go and pick up messages when convenient, also taking
account of their resource availability (e.g., power,
connectivity levels). In this paper we have described the state of the
art in terms of MOM for mobile systems. We have also
shown a proof of concept adaptation of JMS to the extreme
scenario of partially connected mobile ad-hoc networks.
We have described and discussed the characteristics and
differences of our solution with respect to traditional JMS
implementations and the existing adaptations for mobile
settings. However, trade-offs between application-level routing
and resource usage should also be investigated, as mobile
devices are commonly power/resource scarce. A key
limitation of this work is the poorly performing epidemic
algorithm and an important advance in the practicability of
this work requires an algorithm that better balances the
needs of efficiency and message delivery probability. We
are currently working on algorithms and protocols that,
exploiting probabilistic and statistical techniques on the basis
of small amounts of exchanged information, are able to
improve considerably the efficiency in terms of resources
(memory, bandwidth, etc) and the reliability of our middleware
platform [13].
One futuristic research development, which may take these
ideas of adaptation of messaging middleware for mobile
environments further is the introduction of more mobility
oriented communication extensions, for instance the support
of geocast (i.e., the ability to send messages to specific
geographical areas).
8. REFERENCES
[1] M. Conti, G. Maselli, G. Turi, and S. Giordano.
Cross-layering in Mobile ad-hoc Network Design. IEEE
Computer, 37(2):48-51, February 2004.
[2] A. Demers, D. Greene, C. Hauser, W. Irish, J. Larson,
S. Shenker, H. Sturgis, D. Swinehart, and D. Terry.
Epidemic Algorithms for Replicated Database
Maintenance. In Sixth Symposium on Principles of
Distributed Computing, pages 1-12, August 1987.
[3] A. Doria, M. Uden, and D. P. Pandey. Providing
connectivity to the Saami nomadic community. In
Proceedings of the Second International Conference on
Open Collaborative Design for Sustainable Innovation,
December 2002.
[4] M. Haahr, R. Cunningham, and V. Cahill. Supporting
CORBA applications in a Mobile Environment. In 5th
International Conference on Mobile Computing and
Networking (MOBICOM99), pages 36-47. ACM, August
1999.
[5] M. Hapner, R. Burridge, R. Sharma, J. Fialli, and
K. Stout. Java Message Service Specification Version 1.1.
Sun Microsystems, Inc., April 2002.
http://java.sun.com/products/jms/.
[6] J. Hart. WebSphere MQ: Connecting your applications
without complex programming. IBM WebSphere Software
White Papers, 2003.
[7] S. Hayward and M. Pezzini. Marrying Middleware and
Mobile Computing. Gartner Group Research Report,
September 2001.
[8] IBM. WebSphere MQ EveryPlace Version 2.0, November
2002. http://www-3.ibm.com/software/integration/wmqe/.
[9] ITU. Connecting remote communities. Documents of the
World Summit on Information Society, 2003.
http://www.itu.int/osg/spu/wsis-themes.
[10] S. Maffeis. Introducing Wireless JMS. Softwired AG,
www.sofwired-inc.com, 2002.
[11] C. Mascolo, L. Capra, and W. Emmerich. Middleware for
Mobile Computing. In E. Gregori, G. Anastasi, and
S. Basagni, editors, Advanced Lectures on Networking,
volume 2497 of Lecture Notes in Computer Science, pages
20-58. Springer Verlag, 2002.
[12] Microsoft. Microsoft Message Queuing (MSMQ) Version
2.0 Documentation.
[13] M. Musolesi, S. Hailes, and C. Mascolo. Adaptive routing
for intermittently connected mobile ad-hoc networks.
Technical report, UCL-CS Research Note, July 2004.
Submitted for Publication.
[14] Sun Microsystems. Java Naming and Directory Interface
(JNDI) Documentation Version 1.2. 2003.
http://java.sun.com/products/jndi/.
[15] Sun Microsystems. Jini Specification Version 2.0, 2003.
http://java.sun.com/products/jini/.
[16] A. Vahdat and D. Becker. Epidemic routing for Partially
Connected ad-hoc Networks. Technical Report CS-2000-06,
Department of Computer Science, Duke University, 2000.
[17] A. Varga. The OMNeT++ discrete event simulation system. In Proceedings of the European Simulation Multiconference (ESM'2001), Prague, June 2001.
[18] E. Vollset, D. Ingham, and P. Ezhilchelvan. JMS on Mobile
ad-hoc Networks. In Personal Wireless Communications
(PWC), pages 40-52, Venice, September 2003.
[19] E. Yoneki and J. Bacon. Pronto: Mobilegateway with
publish-subscribe paradigm over wireless network.
Technical Report 559, University of Cambridge, Computer
Laboratory, February 2003.
| mobile ad-hoc network;asynchronous messaging middleware;context awareness;epidemic protocol;message-oriented middleware;message orient middleware;asynchronous communication;middleware for mobile computing;mobile ad-hoc environment;cross-layering;group communication;application level routing;java messaging service;epidemic messaging middleware |
train_C-75 | Composition of a DIDS by Integrating Heterogeneous IDSs on Grids | This paper considers the composition of a DIDS (Distributed Intrusion Detection System) by integrating heterogeneous IDSs (Intrusion Detection Systems). A Grid middleware is used for this integration. In addition, an architecture for this integration is proposed and validated through simulation. | 1. INTRODUCTION
Solutions for integrating heterogeneous IDSs (Intrusion Detection
Systems) have been proposed by several groups [6],[7],[11],[2].
Some reasons for integrating IDSs are described by the IDWG
(Intrusion Detection Working Group) from the IETF (Internet
Engineering Task Force) [12] as follows:
• Many IDSs available in the market have strong and weak points, which generally makes it necessary to deploy more than one IDS to provide an adequate solution.
• Attacks and intrusions generally originate from multiple
networks spanning several administrative domains; these
domains usually utilize different IDSs. The integration of
IDSs is then needed to correlate information from multiple
networks to allow the identification of distributed attacks and/or intrusions.
• The interoperability/integration of different IDS components
would benefit the research on intrusion detection and speed
up the deployment of IDSs as commercial products.
DIDSs (Distributed Intrusion Detection Systems) therefore started
to emerge in the early 1990s [9] to allow the correlation of intrusion
information from multiple hosts, networks or domains to detect
distributed attacks. Research on DIDSs has then received much
interest, mainly because centralised IDSs are not able to provide
the information needed to prevent such attacks [13].
However, the realization of a DIDS requires a high degree of
coordination. Computational Grids are appealing as they enable
the development of distributed applications and coordination in a distributed environment. Grid computing aims to enable coordinated resource sharing in dynamic groups of individuals
and/or organizations. Moreover, Grid middleware provides means
for secure access, management and allocation of remote resources;
resource information services; and protocols and mechanisms for
transfer of data [4].
According to Foster et al. [4], Grids can be viewed as a set of
aggregate services defined by the resources that they share. OGSA
(Open Grid Service Architecture) provides the foundation for this
service orientation in computational Grids. The services in OGSA
are specified through well-defined, open, extensible and
platform-independent interfaces, which enable the development of
interoperable applications.
This article proposes a model for integration of IDSs by using
computational Grids. The proposed model enables heterogeneous
IDSs to work in a cooperative way; this integration is termed
DIDSoG (Distributed Intrusion Detection System on Grid). Each
of the integrated IDSs is viewed by others as a resource accessed
through the services that it exposes. A Grid middleware provides
several features for the realization of a DIDSoG, including [3]:
decentralized coordination of resources; use of standard protocols
and interfaces; and the delivery of optimized QoS (Quality of
Service).
The service oriented architecture followed by Grids (OGSA)
allows the definition of interfaces that are adaptable to different
platforms. Different implementations can be encapsulated by a
service interface; this virtualisation allows the consistent access to
resources in heterogeneous environments [3]. The virtualisation of
the environment through service interfaces allows the use of
services without the knowledge of how they are actually
implemented. This characteristic is important for the integration
of IDSs as the same service interfaces can be exposed by different
IDSs.
Grid middleware can thus be used to implement a great variety of
services. Some functions provided by Grid middleware are [3]: (i)
data management services, including access services, replication,
and localisation; (ii) workflow services that implement coordinate
execution of multiple applications on multiple resources; (iii)
auditing services that perform the detection of frauds or
intrusions; (iv) monitoring services which implement the
discovery of sensors in a distributed environment and generate
alerts under determined conditions; (v) services for identification
of problems in a distributed environment, which implement the
correlation of information from disparate and distributed logs.
These services are important for the implementation of a DIDSoG.
A DIDS needs services for the location of and access to
distributed data from different IDSs. Auditing and monitoring
services address the specific needs of a DIDS, such as:
secure storage, data analysis to detect intrusions, discovery of
distributed sensors, and sending of alerts. The correlation of
distributed logs is also relevant because the detection of
distributed attacks depends on the correlation of the alert
information generated by the different IDSs that compose the
DIDSoG.
The next sections of this article are organized as follows. Section
2 presents related work. The proposed model is presented in
Section 3. Section 4 describes the development and a case study.
Results and discussion are presented in Section 5. Conclusions
and future work are discussed in Section 6.
2. RELATED WORK
DIDMA [5] is a flexible, scalable, reliable, and
platform-independent DIDS. The DIDMA architecture allows distributed
analysis of events and can be easily extended by developing new
agents. However, the integration with existing IDSs and the
development of security components are presented as future work
[5]. The extensibility of the DIDMA DIDS and the integration with
other IDSs are goals pursued by DIDSoG. The flexibility,
scalability, platform independence, reliability and security
components discussed in [5] are achieved in DIDSoG by using a
Grid platform.
More efficient techniques, based on clustering, for the analysis of large amounts of data in wide-scale networks, which are applicable to DIDSs, are presented in [13]. The integration of heterogeneous IDSs to increase the variety of intrusion detection techniques in the environment is mentioned there as future work; DIDSoG aims precisely at such an integration of heterogeneous IDSs [13].
Ref. [10] presents a hierarchical architecture for a DIDS;
information is collected, aggregated, correlated and analysed as it
is sent up in the hierarchy. The architecture comprises several
components for: monitoring, correlation, intrusion detection by
statistics, detection by signatures, and response. Components in the
same level of the hierarchy cooperate with one another. The
integration proposed by DIDSoG also follows a hierarchical
architecture. Each IDS integrated to the DIDSoG offers
functionalities at a given level of the hierarchy and requests
functionalities from IDSs from another level. The hierarchy
presented in [10] integrates homogeneous IDSs whereas the
hierarchical architecture of DIDSoG integrates heterogeneous
IDSs.
There are proposals on integrating computational Grids and IDSs
[6],[7],[11],[2]. Ref. [6] and [7] propose the use of Globus
Toolkit for intrusion detection, especially for DoS (Denial of
Service) and DDoS (Distributed Denial of Service) attacks;
Globus is used due to the need to process great amounts of data to
detect these kinds of attack. A two-phase processing architecture
is presented. The first phase aims at the detection of momentary
attacks, while the second phase is concerned with chronic or
perennial attacks.
Traditional IDSs or DIDSs are generally coordinated by a central
point; a characteristic that leaves them prone to attacks. Leu et al.
[6] point out that IDSs developed upon Grid platforms are less vulnerable to attacks because of the distribution provided by such
platforms. Leu et al. [6],[7] have used tools to generate several
types of attacks - including TCP, ICMP and UDP flooding - and
have demonstrated through experimental results the advantages of
applying computational Grids to IDSs.
This work proposes the development of a DIDS upon a Grid
platform. However, the resulting DIDS integrates heterogeneous
IDSs whereas the DIDSs upon Grids presented by Leu et al.
[6][7] do not consider the integration of heterogeneous IDSs. The
processing in phases [6][7] is also contemplated by DIDSoG,
which is enabled by the specification of several levels of
processing allowed by the integration of heterogeneous IDSs.
The DIDS GIDA (Grid Intrusion Detection Architecture) targets
the detection of intrusions in a Grid environment [11]. The GridSim Grid simulator was used for the validation of GIDA; homogeneous resources were used to simplify the development [11]. However, the possibility of applying heterogeneous detection systems is left for future work.
Another DIDS for Grids is presented by Choon and Samsudim
[2]. Scenarios demonstrating how a DIDS can execute on a Grid
environment are presented.
DIDSoG does not aim at detecting intrusions in a Grid
environment. In contrast, DIDSoG uses the Grid to compose a
DIDS by integrating specific IDSs; the resulting DIDS could
however be used to identify attacks in a Grid environment. Local
and distributed attacks can be detected through the integration of
traditional IDSs while attacks particular to Grids can be detected
through the integration of Grid IDSs.
3. THE PROPOSED MODEL
DIDSoG presents a hierarchy of intrusion detection services; this
hierarchy is organized through a two-dimensional vector defined
by Scope:Complexity. The IDSs composing DIDSoG can be
organized in different levels of scope or complexity, depending on
its functionalities, the topology of the target environment and
expected results.
Figure 1 presents a DIDSoG composed of different intrusion
detection services (i.e. data gathering, data aggregation, data
correlation, analysis, intrusion response and management)
provided by different IDSs. The information flow and the
relationship between the levels of scope and complexity are
presented in this figure.
Information about the environment (host, network or application)
is collected by Sensors located in both user 1's and user 2's computers in domain 1. The information is sent both to simple
Analysers that act on the information from a single host (level
1:1), and to aggregation and correlation services that act on
information from multiple hosts from the same domain (level 2:1).
Simple Analysers in the first scope level send the information to
more complex Analysers in the next levels of complexity (level 1:
N). When an Analyser detects an intrusion, it communicates with
Countermeasure and Monitoring services registered to its scope.
An Analyser can invoke a Countermeasure service that responds to a detected attack, or inform a Monitoring service about the
ongoing attack, so the administrator can act accordingly.
Aggregation and correlation resources in the second scope receive
information from Sensors on different users' computers (user 1's and user 2's) in domain 1. These resources process the
received information and send it to the analysis resources
registered to the first level of complexity in the second scope
(level 2:1). The information is also sent to the aggregation and
correlation resources registered in the first level of complexity in
the next scope (level 3:1).
Fig. 1. How DIDSoG works: Sensors, Analysers, Aggregation/Correlation, Monitoring and Response resources are arranged by scope and complexity levels (1:1 up to 3:N) across the users of domain 1 and across domain 2.
The analysis resources in the second scope act like the analysis
resources in the first scope, directing the information to a more
complex analysis resource and putting the Countermeasure and
Monitoring resources in action in case of detected attacks.
Aggregation and correlation resources in the third scope receive
information from domains 1 and 2. These resources then carry out
the aggregation and correlation of the information from different
domains and send it to the analysis resources in the first level of
complexity in the third scope (level 3:1). The information could also be sent to the aggregation service in the next scope if any resources are registered at that level.
The analysis resources in the third scope act similar to the analysis
resources in the first and second scopes, except that the analysis
resources in the third scope act on information from multiple
domains.
The functionalities of the registered resources in each of the
scopes and complexity levels can vary from one environment to
another. The model allows the development of N levels of scope
and complexity.
Figure 2 presents the architecture of a resource participating in the
DIDSoG. Initially, the resource registers itself to GIS (Grid
Information Service) so other participating resources can query
the services provided. After registering itself, the resource
requests information about other intrusion detection resources
registered to the GIS.
A given resource of DIDSoG interacts with other resources by receiving data from its Origin Resources, processing it, and sending the results to its Destination Resources, therefore forming a grid of intrusion detection resources.
Fig. 2. Architecture of a resource participating in the DIDSoG: the components Base, Connector, Descriptor and Native IDS interact with the Grid Origin Resources, the Grid Destination Resources and the Grid Information Service.
A resource is made up of four components: Base, Connector,
Descriptor and Native IDS. Native IDS corresponds to the IDS
being integrated into the DIDSoG. This component processes the data received from the Origin Resources and generates new data to be sent to the Destination Resources. A Native IDS component can be any tool that processes information related to intrusion detection,
including analysis, data gathering, data aggregation, data
correlation, intrusion response or management.
The Descriptor is responsible for the information that identifies a
resource and its respective Destination Resources in the DIDSoG.
Figure 3 presents the class diagram of the stored information by
the Descriptor. The ResourceDescriptor class has Feature, Level,
DataType and TargetResources type members. The Feature class
represents the functionalities that a resource has. Type, name and
version attributes refer to the functions offered by the Native IDS
component, its name and version, respectively. Level class
identifies the level of target and complexity in which the resource
acts. DataType class represents the data format that the resource
accepts to receive. DataType class is specialized by classes Text,
XML and Binary. Class XML contains the DTDFile attribute to
specify the DTD file that validates the received XML.
Fig. 3. Class diagram of the Descriptor component: ResourceDescriptor (ident, version, description) aggregates Feature (featureType, name, version), Level (scope, complexity), DataType (type, version; specialised into Text, Binary and XML with a DTDFile attribute) and TargetResources, which aggregates Resource (featureType, Level, DataType).
TargetResources class represents the features of the Destination
Resources of a given resource. This class aggregates
Resource. The Resource class identifies the characteristics of a
Destination Resource. This identification is made through the
featureType attribute and the Level and DataType classes.
A given resource analyses the information from the Descriptors of other resources and compares this information with the
information specified in TargetResources to know to which
resources to send the results of its processing.
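For illustration, the sketch below shows how the Descriptor information of Figure 3 and the matching against the TargetResources list could be represented in Java. The class and attribute names follow the diagram, but the code is a simplified assumption and not the actual DIDSoG implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative, simplified version of the Descriptor data model of Figure 3 (assumed).
class Level {
    int scope;        // "escope" attribute in Figure 3
    int complexity;   // "complex" attribute in Figure 3
    Level(int scope, int complexity) { this.scope = scope; this.complexity = complexity; }
}

class DataType {
    String type;      // e.g. "TCPDump" or "IDMEF"
    DataType(String type) { this.type = type; }
}

// Characteristics a Destination Resource must have (class Resource in Figure 3).
class TargetResource {
    String featureType;
    Level level;
    DataType dataType;
    TargetResource(String featureType, Level level, DataType dataType) {
        this.featureType = featureType; this.level = level; this.dataType = dataType;
    }
}

class ResourceDescriptor {
    String ident;
    String featureType;                 // Feature offered by the Native IDS
    Level level;                        // scope:complexity level the resource acts on
    DataType accepts;                   // data format the resource accepts
    List<TargetResource> targets = new ArrayList<>();

    // Decide whether another resource matches one of the declared TargetResources,
    // i.e. whether the results of this resource should be sent to it.
    boolean isDestination(ResourceDescriptor other) {
        for (TargetResource t : targets) {
            if (t.featureType.equals(other.featureType)
                    && t.level.scope == other.level.scope
                    && t.level.complexity == other.level.complexity
                    && t.dataType.type.equals(other.accepts.type)) {
                return true;
            }
        }
        return false;
    }
}

public class DescriptorMatchingDemo {
    public static void main(String[] args) {
        ResourceDescriptor sensor = new ResourceDescriptor();
        sensor.ident = "Sensor_1";
        sensor.featureType = "sensor";
        sensor.level = new Level(1, 1);
        sensor.accepts = new DataType("raw");
        sensor.targets.add(new TargetResource("analysis", new Level(1, 1), new DataType("TCPDump")));

        ResourceDescriptor analyser = new ResourceDescriptor();
        analyser.ident = "Analyser_1";
        analyser.featureType = "analysis";
        analyser.level = new Level(1, 1);
        analyser.accepts = new DataType("TCPDump");

        System.out.println("Send results to " + analyser.ident + "? "
                + sensor.isDestination(analyser));   // prints: true
    }
}
```

In this sketch a resource forwards its results to another resource only when feature type, scope, complexity and accepted data format all match one of its declared targets.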
The Base component is responsible for the communication of a
resource with other resources of the DIDSoG and with the Grid
Information Service. It is this component that registers the
resource and queries other resources in the GIS.
The Connector component is the link between Base and Native
IDS. The information that the Base receives from the Origin Resources is passed to the Connector component. The Connector performs the necessary changes on the data so that it is understood by the Native IDS and sends this data to the Native IDS for processing. The Connector component is also responsible for collecting the information processed by the Native IDS and for making the necessary changes so that the information can pass through the DIDSoG again. After these changes, the Connector sends the
information to the Base, which in turn sends it to the Destination
Resources in accordance with the specifications of the Descriptor
component.
4. IMPLEMENTATION
We have used GridSim Toolkit 3 [1] for development and
evaluation of the proposed model. We have used and extended
GridSim features to model and simulate the resources and
components of DIDSoG.
Figure 4 presents the Class diagram of the simulated DIDSoG.
The Simulation_DIDSoG class starts the simulation components.
The Simulation_User class represents a user of DIDSoG. This
class" function is to initiate the processing of a resource Sensor,
from where the gathered information will be sent to other
resources. DIDSoG_GIS keeps a registry of the DIDSoG
resources. The DIDSoG_BaseResource class implements the Base
component (see Figure 2). DIDSoG_BaseResource interacts with
DIDSoG_Descriptor class, which represents the Descriptor
component. The DIDSoG_Descriptor class is created from an
XML file that specifies a resource descriptor (see Figure 3).
Fig. 4. Class diagram of the simulated DIDSoG, relating Simulation_DIDSoG, Simulation_User, DIDSoG_GIS, DIDSoG_BaseResource and DIDSoG_Descriptor to the GridSim classes GridInformationService and GridResource.
A Connector component must be developed for each Native IDS
integrated into DIDSoG. The Connector component is implemented
by creating a class derived from DIDSoG_BaseResource. The new
class will implement new functionalities in accordance with the
needs of the corresponding Native IDS.
In the simulation environment, resources for data collection, analysis, aggregation/correlation and response generation were integrated. Classes were developed to simulate the processing of the Native IDS component associated with each resource. For each simulated Native IDS a class derived from DIDSoG_BaseResource was developed; this class corresponds to the Connector component of the Native IDS and integrates the IDS into DIDSoG.
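A minimal sketch of such a Connector is given below. The base class, its method names and the trivial Native IDS stub are assumptions made purely for illustration; the real classes extend GridSim components and exchange data through the simulator.

```java
// Sketch of a Connector derived from a (stubbed) DIDSoG_BaseResource.
// The Connector adapts incoming DIDSoG data to the format expected by the wrapped
// Native IDS and adapts the IDS output before it is handed back to the Base component.
abstract class DIDSoG_BaseResource {
    // Assumed hook called by the Base component when data arrives from an Origin Resource.
    abstract void onDataFromOrigin(String data);

    // Assumed helper of the Base component that forwards results according to the Descriptor.
    void sendToDestinations(String result) {
        System.out.println("forwarding to destination resources: " + result);
    }
}

// Stand-in for an integrated IDS tool that expects comma-separated records.
class NativeIdsStub {
    String process(String csvRecord) {
        return "processed(" + csvRecord + ")";
    }
}

public class ExampleConnector extends DIDSoG_BaseResource {
    private final NativeIdsStub nativeIds = new NativeIdsStub();

    @Override
    void onDataFromOrigin(String data) {
        // Adapt the DIDSoG data format (here: whitespace-separated) to the Native IDS format.
        String adaptedInput = String.join(",", data.trim().split("\\s+"));
        String output = nativeIds.process(adaptedInput);
        // Adapt the IDS output back to the DIDSoG data flow and hand it to the Base.
        sendToDestinations(output);
    }

    public static void main(String[] args) {
        new ExampleConnector().onDataFromOrigin("10.0.0.5 10.0.0.9 23 tcp");
    }
}
```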
An XML file describing each of the integrated resources is selected via the Connector component. The resulting relationship between the resources integrated into the DIDSoG, in accordance with the specification of their respective descriptors, is presented in Figure 5.
The Sensor_1 and Sensor_2 resources generate simulated data in
the TCPDump [8] format. The generated data is directed to
Analyser_1 and Aggreg_Corr_1 resources, in the case of
Sensor_1, and to Aggreg_Corr_1 in the case of Sensor_2,
according to the specification of their descriptors.
Fig. 5. Flow of the execution of the simulation: Sensor_1 and Sensor_2 (TCPDump data) feed Analyser_1 (level 1:1) and Aggreg_Corr_1 (level 2:1); aggregated TCPDump data flows to Analyser_2 (level 2:1) and on to Analyser_3 (level 2:2), while IDMEF alerts are sent to Countermeasure_1 (level 1) and Countermeasure_2 (level 2).
The Native IDS of Analyser_1 generates alerts for any connection attempt to port 23. The data received by Analyser_1 presented such a pattern, generating an IDMEF (Intrusion Detection Message Exchange Format) alert [14]. The generated alert was sent to the Countermeasure_1 resource, where a warning was dispatched to the administrator informing him of the received alert.
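The rule applied by Analyser_1 can be illustrated by the following sketch, which flags connection attempts to port 23 and wraps them in a heavily abbreviated IDMEF-style alert; the record format and the XML fragment are assumptions and do not reproduce the full IDMEF data model [14].

```java
import java.util.List;

// Sketch of the Analyser_1 rule: flag any connection attempt to port 23 (telnet).
public class PortScanRule {
    record Packet(String srcIp, String dstIp, int dstPort, String proto) {}

    static String checkPort23(Packet p) {
        if (p.dstPort() != 23) return null;
        // Heavily abbreviated IDMEF-like alert, for illustration only.
        return "<IDMEF-Message><Alert>"
             + "<Source><Address>" + p.srcIp() + "</Address></Source>"
             + "<Target><Address>" + p.dstIp() + "</Address><Port>23</Port></Target>"
             + "<Classification text=\"telnet connection attempt\"/>"
             + "</Alert></IDMEF-Message>";
    }

    public static void main(String[] args) {
        List<Packet> trace = List.of(
            new Packet("10.0.0.5", "10.0.0.9", 23, "tcp"),
            new Packet("10.0.0.5", "10.0.0.9", 80, "tcp"));
        for (Packet p : trace) {
            String alert = checkPort23(p);
            if (alert != null) System.out.println(alert);  // would be sent to Countermeasure_1
        }
    }
}
```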
The Aggreg_Corr_1 resource received the information generated
by sensors 1 and 2. Its processing consists of correlating the source IP addresses in the received data. The information resulting from the processing of Aggreg_Corr_1 was directed to the Analyser_2 resource.
The Native IDS component of Analyser_2 generates alerts when a source tries to connect to the same port number on multiple destinations. This situation is identified by Analyser_2 in the data received from Aggreg_Corr_1, and an alert in IDMEF format is then sent to the Countermeasure_2 resource. In addition to generating alerts in IDMEF format, Analyser_2 also directs the received data to Analyser_3, at complexity level 2. The Native IDS component of Analyser_3
generates alerts when the transmission of ICMP messages from a
given source to multiple destinations is detected. This situation is
detected in the data received from Analyser_2, and an IDMEF
alert is then sent to the Countermeasure_2 resource.
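The heuristic of Analyser_2 — one source contacting the same port on several destinations — can be sketched as follows; the input format and the threshold are assumptions made for illustration.

```java
import java.util.*;

// Sketch of the Analyser_2 heuristic: alert when one source tries to reach the same
// port number on several different destinations (a horizontal scan pattern).
public class HorizontalScanDetector {
    record Conn(String srcIp, String dstIp, int dstPort) {}

    static List<String> detect(List<Conn> conns, int minDestinations) {
        // Count distinct destinations per (source, port) pair.
        Map<String, Set<String>> destsPerKey = new HashMap<>();
        for (Conn c : conns) {
            destsPerKey.computeIfAbsent(c.srcIp() + "#" + c.dstPort(), k -> new HashSet<>())
                       .add(c.dstIp());
        }
        List<String> alerts = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : destsPerKey.entrySet()) {
            if (e.getValue().size() >= minDestinations) {
                String[] key = e.getKey().split("#");
                alerts.add("source " + key[0] + " contacted port " + key[1]
                         + " on " + e.getValue().size() + " destinations");
            }
        }
        return alerts;
    }

    public static void main(String[] args) {
        List<Conn> conns = List.of(
            new Conn("10.0.0.5", "10.0.0.9", 22),
            new Conn("10.0.0.5", "10.0.0.10", 22),
            new Conn("10.0.0.5", "10.0.0.11", 22),
            new Conn("10.0.0.7", "10.0.0.9", 80));
        detect(conns, 3).forEach(System.out::println);
    }
}
```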
The Countermeasure_2 resource receives the alerts generated by
analysers 2 and 3, in accordance with the implementation of its Native IDS component. Warnings about the received alerts are dispatched to the administrator.
The simulation carried out demonstrates how DIDSoG works.
Simulated data was generated to be the input for a grid of
intrusion detection systems composed of several distinct
resources. The resources carry out tasks such as data collection,
aggregation and analysis, and generation of alerts and warnings in
an integrated manner.
5. EXPERIMENT RESULTS
The hierarchic organization of scope and complexity provides a
high degree of flexibility to the model. The DIDSoG can be
modelled in accordance with the needs of each environment. The
descriptors define the data flow desired for the resulting DIDS.
Each Native IDS is integrated into the DIDSoG through a Connector component. The Connector component is also flexible in the DIDSoG. Adaptations, data type conversions and auxiliary processes that Native IDSs need are provided by the Connector. Filters and the generation of specific logs for each Native IDS or environment can also be incorporated into the Connector.
If the integration of a new IDS into an already configured environment is desired, it is enough to develop the Connector for the desired IDS and to specify the resource Descriptor. After the specification of the Connector and the Descriptor, the new IDS is integrated into the DIDSoG.
Through the definition of scopes, resources can act on data of
different source groups. For example, scope 1 can be related to a
given set of hosts, scope 2 to another set of hosts, while scope 3
can be related to hosts from scopes 1 and 2. Scopes can be defined
according to the needs of each environment.
The complexity levels allow the distribution of the processing
between several resources inside the same scope. In an analysis
task, for example, the search for simple attacks can be made by
resources of complexity 1, whereas the search for more complex
attacks, which demands more time, can be performed by resources
of complexity 2. With this, the analysis of the data is made by two
resources.
The distinction between complexity levels can also be organized
in order to integrate different techniques of intrusion detection.
The complexity level 1 could be defined for analyses based on
signatures, which are simpler techniques; the complexity level 2
for techniques based on behaviour, that require greater
computational power; and the complexity level 3 for intrusion
detection in applications, where the techniques are more specific
and depend on more data.
The division into scopes and complexity levels means that the data processing is carried out in phases. No resource has
full knowledge about the complete data processing flow. Each
resource only knows the results of its processing and the
destination to which it sends the results. Resources of higher
complexity must be linked to resources of lower complexity.
Therefore, the hierarchic structure of the DIDSoG is maintained,
facilitating its extension and integration with other domains of
intrusion detection.
By establishing a hierarchic relationship between the several analysers chosen for an environment, the sensor resource is not overloaded with the task of sending the data to all the analysers. An
initial analyser will exist (complexity level 1) to which the sensor
will send its data, and this analyser will then direct the data to the
next step of the processing flow. Another feature of the
hierarchical organization is the easy extension and integration
with other domains. If it is necessary to add a new host (sensor) to
the DIDSoG, it is enough to plug it into the first level of the resource hierarchy. If it is necessary to add a new analyser, even one whose scope spans several domains, it is enough to relate it to another resource of the same scope.
The DIDSoG allows different levels to be managed by different
entities. For example, the first scope can be managed by the local
user of a host. The second scope, comprising several hosts of a
domain can be managed by the administrator of the domain. A
third entity can be responsible for managing the security of
several domains in a joint way. This entity can act in the scope 3
independently from others.
With the proposed model for integration of IDSs in Grids, the
different IDSs of an environment (or multiple IDSs integrated) act
in a cooperative manner improving the intrusion detection
services, mainly in two aspects. First, the information from
multiple sources is analysed in an integrated way to search for
distributed attacks. This integration can be made under several
scopes. Second, there is a great diversity of data aggregation
techniques, data correlation and analysis, and intrusion response
that can be applied to the same environment; these techniques can
be organized under several levels of complexity.
6. CONCLUSION
The integration of heterogeneous IDSs is important. However, the
incompatibility and diversity of IDS solutions make such
integration extremely difficult. This work has thus proposed a model for the composition of a DIDS by integrating existing IDSs on a
computational Grid platform (DIDSoG). IDSs in DIDSoG are
encapsulated as Grid services for intrusion detection. A
computational Grid platform is used for the integration by
providing the basic requirements for communication, localization,
resource sharing and security mechanisms.
The components of the architecture of the DIDSoG were
developed and evaluated using the GridSim Grid simulator.
Services for communication and localization were used to carry
out the integration between components of different resources.
Based on the components of the architecture, several resources
were modelled forming a grid of intrusion detection. The
simulation demonstrated the usefulness of the proposed model.
Data from the sensor resources was read and this data was used to
feed other resources of DIDSoG.
The integration of distinct IDSs could be observed through the
simulated environment. Resources providing different intrusion
detection services were integrated (e.g. analysis, correlation,
aggregation and alert). The communication and localization
services provided by GridSim were used to integrate components
of different resources. Various resources were modelled following
the architecture components forming a grid of intrusion detection.
The components of the DIDSoG architecture have served as the basis for
the integration of the resources presented in the simulation.
During the simulation, the different IDSs cooperated with one another in a distributed yet coordinated way, with an integrated view of the events, and thus had the capability to detect distributed attacks. This capability demonstrates that the integrated IDSs have resulted in a DIDS.
Related work presents cooperation between components of a
specific DIDS. Some work focuses on either the development of
DIDSs on computational Grids or the application of IDSs to
computational Grids. However, none deals with the integration of
heterogeneous IDSs. In contrast, the proposed model developed and simulated in this work can shed some light on the question of the integration of heterogeneous IDSs.
DIDSoG presents new research opportunities that we would like
to pursue, including: deployment of the model in a more realistic
environment such as a Grid; incorporation of new security
services; parallel analysis of data by Native IDSs in multiple
hosts.
In addition to the integration of IDSs enabled by a grid
middleware, the cooperation of heterogeneous IDSs can be
viewed as an economic problem. IDSs from different
organizations or administrative domains need incentives for
joining a grid of intrusion detection services and for collaborating
with other IDSs. The development of distributed strategy-proof
mechanisms for integration of IDSs is a challenge that we would
like to tackle.
7. REFERENCES
[1] Sulistio, A.; Poduvaly, G.; Buyya, R; and Tham, CK,
Constructing A Grid Simulation with Differentiated Network
Service Using GridSim, Proc. of the 6th International
Conference on Internet Computing (ICOMP'05), June 27-30,
2005, Las Vegas, USA.
[2] Choon, O. T.; Samsudim, A. Grid-based Intrusion Detection System. The 9th IEEE Asia-Pacific Conference on Communications, September 2003.
[3] Foster, I.; Kesselman, C.; Tuecke, S. The Physiology of the
Grid: An Open Grid Service Architecture for Distributed
System Integration. Draft June 2002. Available at
http://www.globus.org/research/papers/ogsa.pdf. Access Feb.
2006.
[4] Foster, Ian; Kesselman, Carl; Tuecke, Steven. The anatomy
of the Grid: enabling scalable virtual organizations.
International Journal of Supercomputer Applications, 2001.
[5] Kannadiga, P.; Zulkernine, M. DIDMA: A Distributed
Intrusion Detection System Using Mobile Agents.
Proceedings of the IEEE Sixth International Conference on
Software Engineering, Artificial Intelligence, Networking
and Parallel/Distributed Computing, May 2005.
[6] Leu, Fang-Yie, et al. Integrating Grid with Intrusion
Detection. Proceedings of the 19th IEEE AINA'05, March 2005.
[7] Leu, Fang-Yie, et al. A Performance-Based Grid Intrusion
Detection System. Proceedings of the 29th IEEE COMPSAC'05, July 2005.
[8] McCanne, S.; Leres, C.; Jacobson, V.; TCPdump/Libpcap,
http://www.tcpdump.org/, 1994.
[9] Snapp, S. R. et al. DIDS (Distributed Intrusion Detection
System) - Motivation, Architecture and An Early Prototype.
Proceeding of the Fifteenth IEEE National Computer
Security Conference. Baltimore, MD, October 1992.
[10] Sterne, D.; et al. A General Cooperative Intrusion Detection
Architecture for MANETs. Proceedings of the Third IEEE
IWIA"05, March 2005.
[11] Tolba, M. F. ; et al. GIDA: Toward Enabling Grid Intrusion
Detection Systems. 5th IEEE International Symposium on
Cluster Computing and the Grid, May 2005.
[12] Wood, M. Intrusion Detection message exchange
requirements. Draft-ietf-idwg-requirements-10, October
2002. Available at
http://www.ietf.org/internet-drafts/draft-ietf-idwg-requirements-10.txt. Access March 2006.
[13] Zhang, Yu-Fang; Xiong, Z.; Wang, X. Distributed Intrusion
Detection Based on Clustering. Proceedings of IEEE
International Conference Machine Learning and Cybernetics,
August 2005.
[14] Curry, D.; Debar, H. Intrusion Detection Message exchange
format data model and Extensible Markup Language (XML)
Document Type Definition. Draft-ietf-idwg-idmef-xml-10,
March 2006. Available at
http://www.ietf.org/internetdrafts/draft-ietf-idwg-idmef-xml-16.txt. | system integration;gridsim grid simulator;heterogeneous intrusion detection system;distributed intrusion detection system;grid middleware;intrusion detection system;grid service for intrusion detection;ids integration;grid;open grid service architecture;integration of ids;computational grid;grid intrusion detection architecture;intrusion detection service |
train_C-76 | Assured Service Quality by Improved Fault Management Service-Oriented Event Correlation | The paradigm shift from device-oriented to service-oriented management has also implications to the area of event correlation. Today"s event correlation mainly addresses the correlation of events as reported from management tools. However, a correlation of user trouble reports concerning services should also be performed. This is necessary to improve the resolution time and to reduce the effort for keeping the service agreements. We refer to such a type of correlation as service-oriented event correlation. The necessity to use this kind of event correlation is motivated in the paper. To introduce service-oriented event correlation for an IT service provider, an appropriate modeling of the correlation workflow and of the information is necessary. Therefore, we examine the process management frameworks IT Infrastructure Library (ITIL) and enhanced Telecom Operations Map (eTOM) for their contribution to the workflow modeling in this area. The different kinds of dependencies that we find in our general scenario are then used to develop a workflow for the service-oriented event correlation. The MNM Service Model, which is a generic model for IT service management proposed by the Munich Network Management (MNM) Team, is used to derive an appropriate information modeling. An example scenario, the Web Hosting Service of the Leibniz Supercomputing Center (LRZ), is used to demonstrate the application of service-oriented event correlation. | 1. INTRODUCTION
In huge networks a single fault can cause a burst of failure
events. To handle the flood of events and to find the root
cause of a fault, event correlation approaches like rule-based
reasoning, case-based reasoning or the codebook approach
have been developed. The main idea of correlation is to
condense and structure events to retrieve meaningful
information. Until now, these approaches address primarily the
correlation of events as reported from management tools or
devices. Therefore, we call them device-oriented.
In this paper we define a service as a set of functions
which are offered by a provider to a customer at a customer-provider interface. This definition of a service is therefore
more general than the definition of a Web Service, but a
Web Service is included in this service definition. As
a consequence, the results are applicable to Web Services
as well as for other kinds of services. A service level
agreement (SLA) is defined as a contract between customer and
provider about guaranteed service performance.
As in today"s IT environments the offering of such services
with an agreed service quality becomes more and more
important, this change also affects the event correlation. It
has become a necessity for providers to offer such
guarantees in order to differentiate themselves from other providers. To avoid SLA
violations it is especially important for service providers to
identify the root cause of a fault in a very short time or even
act proactively. The latter refers to the case of recognizing
the influence of a device breakdown on the offered services.
As in this scenario the knowledge about services and their
SLAs is used we call it service-oriented. It can be addressed
from two directions.
Top-down perspective: Several customers report a
problem in a certain time interval. Are these trouble
reports correlated? How can a resource be identified as the problem's root cause?
Bottom-up perspective: A device (e.g., router, server)
breaks down. Which services, and especially which
customers, are affected by this fault?
The rest of the paper is organized as follows. In Section
2 we describe how event correlation is performed today and
present a selection of the state-of-the-art event correlation
techniques. Section 3 describes the motivation for
service-oriented event correlation and its benefits. After having
motivated the need for such type of correlation we use two
well-known IT service management models to gain
requirements for an appropriate workflow modeling and present
our proposal for it (see Section 4). In Section 5 we present
our information modeling which is derived from the MNM
Service Model. An application of the approach for a web
hosting scenario is performed in Section 6. The last section
concludes the paper and presents future work.
2. TODAY"S EVENT CORRELATION
TECHNIQUES
In [11] the task of event correlation is defined as a
conceptual interpretation procedure in the sense that a new
meaning is assigned to a set of events that happen in a certain
time interval. We can distinguish between three aspects
for event correlation.
Functional aspect: The correlation focuses on functions
which are provided by each network element. It also takes into account which other functions are used to provide a specific function.
Topology aspect: The correlation takes into account how
the network elements are connected to each other and
how they interact.
Time aspect: When explicitly regarding time constraints,
a start and end time has to be defined for each event.
Time relationships between the events can then be used to perform the correlation. This aspect is only
mentioned in some papers [11], but it has to be treated
in an event correlation system.
In the event correlation it is also important to distinguish
between the knowledge acquisition/representation and the
correlation algorithm. Examples of approaches to
knowledge acquisition/representation are Gruschke"s dependency
graphs [6] and Ensel"s dependency detection by neural
networks [3]. It is also possible to find the dependencies by
analyzing interactions [7]. In addition, there is an approach
to manage service dependencies with XML and to define a
resource description framework [4].
To give an overview of device-oriented event correlation, a selection of event correlation techniques used for this kind of correlation is presented in the following.
Model-based reasoning: Model-based reasoning (MBR)
[15, 10, 20] represents a system by modeling each of its
components. A model can either represent a physical
entity or a logical entity (e.g., LAN, WAN, domain,
service, business process). The models for physical
entities are called functional model, while the models
for all logical entities are called logical model. A
description of each model contains three categories of
information: attributes, relations to other models, and
behavior. The event correlation is a result of the
collaboration among models.
As all components of a network are represented with
their behavior in the model, it is possible to perform
simulations to predict how the whole network will
behave.
A comparison in [20] showed that a large MBR system is not always easy to maintain. It can be difficult to model the behavior of all components and their interactions correctly and completely.
An example system for MBR is NetExpert [16] from OSI, which is a hybrid MBR/RBR system (in 2000 OSI was acquired by Agilent Technologies).
Rule-based reasoning: Rule-based reasoning (RBR) [15,
10] uses a set of rules for event correlation. The rules
have the form conclusion if condition. The condition
uses received events and information about the system,
while the conclusion contains actions which can either
lead to system changes or use system parameters to
choose the next rule.
An advantage of the approach is that the rules are
more or less human-readable and therefore their effect
is intuitive. The correlation has proved to be very fast
in practice by using the RETE algorithm.
In the literature [20, 1] RBR systems are classified as relatively inflexible. Frequent
changes in the modeled IT environment would lead to
many rule updates. These changes would have to be
performed by experts as no automation has currently
been established. In some systems information about
the network topology which is needed for the event
correlation is not used explicitly, but is encoded into the
rules. This intransparent usage would make rule
updates for topology changes quite difficult. The system
brittleness would also be a problem for RBR systems.
It means that the system fails if an unknown case
occurs, because the case cannot be mapped onto similar
cases. The output of RBR systems would also be
difficult to predict, because of unforeseen rule interactions
in a large rule set. According to [15] an RBR system
is only appropriate if the domain for which it is used
is small, unchanging, and well understood.
The GTE IMPACT system [11] is an example of a rule-based system. It also uses MBR (GTE merged with Bell Atlantic in 1998 and is now called Verizon [19]).
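The "conclusion if condition" structure of RBR rules can be illustrated with the following minimal sketch; the event names and rules are invented, and a real system would compile a much larger rule set (e.g., with the RETE algorithm).

```java
import java.util.*;
import java.util.function.Predicate;

// Minimal illustration of the "conclusion if condition" structure of RBR systems.
public class TinyRbr {
    record Rule(String conclusion, Predicate<Set<String>> condition) {}

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            new Rule("suspect router R1",
                     events -> events.contains("linkDown(R1,R2)") && events.contains("linkDown(R1,R3)")),
            new Rule("suspect server S1",
                     events -> events.contains("noResponse(S1)")));

        Set<String> observed = Set.of("linkDown(R1,R2)", "linkDown(R1,R3)");
        for (Rule r : rules) {
            if (r.condition().test(observed)) {
                System.out.println("conclusion: " + r.conclusion());
            }
        }
    }
}
```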
Codebook approach: The codebook approach [12, 21] has
similarities to RBR, but takes a further step and
encodes the rules into a correlation matrix.
The approach starts using a dependency graph with
two kinds of nodes for the modeling. The first kind
of nodes are the faults (denoted as problems in the
cited papers) which have to be detected, while the
second kind of nodes are observable events (symptoms in
the papers) which are caused by the faults or other
events. The dependencies between the nodes are
denoted as directed edges. It is possible to choose weights
for the edges, e.g., a weight for the probability that
fault/event A causes event B. Another possible
weighting could indicate time dependencies. There are
several possibilities to reduce the initial graph. If, e.g.,
a cyclic dependency of events exists and there are no
probabilities for the cycles" edges, all events can be
treated as one event.
After a final input graph is chosen, the graph is
transformed into a correlation matrix where the columns
contain the faults and the rows contain the events.
If there is a dependency in the graph, the weight of
the corresponding edge is put into the respective matrix cell. In case no weights are used, the matrix cells
get the values 1 for dependency and 0 otherwise.
Afterwards, a simplification can be done, where events
which do not help to discriminate faults are deleted.
There is a trade-off between the minimization of the
matrix and the robustness of the results. If the matrix
is minimized as much as possible, some faults can only
be distinguished by a single event. If this event cannot
be reliably detected, the event correlation system
cannot discriminate between the two faults. A measure
of how many event observation errors can be compensated by the system is the Hamming distance. The
number of rows (events) that can be deleted from the
matrix can differ very much depending on the
relationships [15].
The codebook approach has the advantage that it uses
long-term experience with graphs and coding. This
experience is used to minimize the dependency graph
and to select an optimal group of events with respect
to processing time and robustness against noise.
A disadvantage of the approach could be that similar
to RBR frequent changes in the environment make it
necessary to frequently edit the input graph.
SMARTS InCharge [12, 17] is an example of such a
correlation system.
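The following sketch illustrates the decoding step of the codebook approach with an invented 0/1 correlation matrix: the fault column with the smallest Hamming distance to the observed event vector is chosen as the most likely root cause.

```java
// Sketch of the codebook idea: faults are columns, observable events are rows of a
// 0/1 correlation matrix; an observed event vector is decoded by picking the fault
// column with the smallest Hamming distance. All values are invented for illustration.
public class CodebookDemo {
    public static void main(String[] args) {
        String[] faults = {"F1", "F2", "F3"};
        // rows: events E1..E4; columns: faults F1..F3
        int[][] codebook = {
            {1, 0, 1},   // E1
            {1, 1, 0},   // E2
            {0, 1, 0},   // E3
            {0, 0, 1}};  // E4

        int[] observed = {1, 1, 0, 0};  // E1 and E2 seen, E3 and E4 not seen

        String best = null;
        int bestDist = Integer.MAX_VALUE;
        for (int f = 0; f < faults.length; f++) {
            int dist = 0;
            for (int e = 0; e < codebook.length; e++) {
                if (codebook[e][f] != observed[e]) dist++;
            }
            if (dist < bestDist) { bestDist = dist; best = faults[f]; }
        }
        System.out.println("most likely root cause: " + best
                + " (Hamming distance " + bestDist + ")");
    }
}
```

A larger Hamming distance between the fault columns themselves means that more event observation errors can be tolerated before two faults become indistinguishable.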
Case-based reasoning: In contrast to other approaches
case-based reasoning (CBR) [14, 15] systems do not
use any knowledge about the system structure. The
knowledge base saves cases with their values for system
parameters and successful recovery actions for these
cases. The recovery actions are not performed by the
CBR system in the first place, but in most cases by a
human operator.
If a new case appears, the CBR system compares the
current system parameters with the system
parameters in prior cases and tries to find a similar one. To
identify such a match it has to be defined for which
parameters the cases can differ or have to be the same.
If a match is found, a learned action can be performed
automatically or the operator can be informed with a
recovery proposal.
An advantage of this approach is that the ability to
learn is an integral part of it which is important for
rapid changing environments.
There are also difficulties when applying the approach
[15]. The fields which are used to find a similar case
and their importance have to be defined appropriately.
If there is a match with a similar case, an adaptation
of the previous solution to the current one has to be
found.
An example system for CBR is SpectroRx from
Cabletron Systems. The part of Cabletron that developed
SpectroRx became an independent software company
in 2002 and is now called Aprisma Management
Technologies [2].
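The matching step of CBR can be sketched as follows; the stored cases, parameters and the simple similarity measure are assumptions for illustration only.

```java
import java.util.*;

// Sketch of the CBR matching step: find the stored case whose system parameters are
// closest to the current ones and propose its recorded recovery action.
public class CbrMatchDemo {
    record Case(Map<String, String> parameters, String recoveryAction) {}

    // Naive similarity: number of parameters with identical values.
    static int similarity(Map<String, String> a, Map<String, String> b) {
        int score = 0;
        for (Map.Entry<String, String> e : a.entrySet()) {
            if (e.getValue().equals(b.get(e.getKey()))) score++;
        }
        return score;
    }

    public static void main(String[] args) {
        List<Case> knowledgeBase = List.of(
            new Case(Map.of("symptom", "mail delayed", "load", "high"), "restart mail queue runner"),
            new Case(Map.of("symptom", "web slow", "load", "high"), "add web server to pool"));

        Map<String, String> current = Map.of("symptom", "web slow", "load", "high");
        Case best = knowledgeBase.stream()
                .max(Comparator.comparingInt((Case c) -> similarity(current, c.parameters())))
                .orElseThrow();
        System.out.println("proposed action: " + best.recoveryAction());
    }
}
```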
In this section four event correlation approaches were
presented which have evolved into commercial event correlation
systems. The correlation approaches have different focuses.
MBR mainly deals with the knowledge acquisition and
representation, while RBR and the codebook approach
propose a correlation algorithm. The focus of CBR is its ability
to learn from prior cases.
3. MOTIVATION OF SERVICE-ORIENTED
EVENT CORRELATION
Fig. 1 shows a general service scenario upon which we
will discuss the importance of a service-oriented correlation.
Several services like SSH, a web hosting service, or a video
conference service are offered by a provider to its customers
at the customer-provider interface. A customer can allow several users to use a subscribed service. The quality and cost aspects of the subscribed services are agreed upon between customer and provider in SLAs. On the provider side
the services use subservices for their provisioning. In case
of the services mentioned above such subservices are DNS,
proxy service, and IP service. Both services and subservices
depend on resources upon which they are provisioned. As
displayed in the figure a service can depend on more than
one resource and a resource can be used by one or more
services.
Figure 1: Scenario (users a, b and c, a customer SLA, the services SSH, web hosting and video conference offered by the provider, the subservices DNS, proxy and IP, and the underlying resources, connected by service and resource dependencies)
To get a common understanding, we distinguish between
different types of events:
Resource event: We use the term resource event for
network events and system events. A network event refers
to events like node up/down or link up/down whereas
system events refer to events like server down or
authentication failure.
Service event: A service event indicates that a service
does not work properly. A trouble ticket which is
generated from a customer report is a kind of such an
event. Other service events can be generated by the
provider of a service, if the provider himself detects a
service malfunction.
In such a scenario the provider may receive service events
from customers which indicate that SSH, web hosting
service, and video conference service are not available. When
referring to the service hierarchy, the provider can conclude
in such a case that all services depend on DNS. Therefore,
it seems more likely that a common resource which is
necessary for this service does not work properly or is not
available than to assume three independent service failures. In
contrast to a resource-oriented perspective where all of the
service events would have to be processed separately, the
service events can be linked together. Their information can
be aggregated and processed only once. If, e.g., the problem
is solved, one common message to the customers that their
services are available again is generated and distributed by
using the list of linked service events. This is certainly a
simplified example. However, it shows the general principle of
identifying the common subservices and common resources
of different services.
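This principle can be sketched as a simple intersection of dependency sets; the dependency data below is invented and would in practice come from a service and resource dependency model.

```java
import java.util.*;

// Sketch of the top-down idea: when several service events arrive, intersect the
// (sub)service and resource dependencies of the affected services to find candidates
// that explain all reports at once.
public class CommonDependencyDemo {
    public static void main(String[] args) {
        Map<String, Set<String>> dependencies = Map.of(
            "SSH",        Set.of("DNS", "IP service", "server sun1"),
            "WebHosting", Set.of("DNS", "Proxy", "IP service", "web server"),
            "VideoConf",  Set.of("DNS", "IP service", "conference bridge"));

        List<String> reportedFailing = List.of("SSH", "WebHosting", "VideoConf");

        Set<String> candidates = new HashSet<>(dependencies.get(reportedFailing.get(0)));
        for (String service : reportedFailing.subList(1, reportedFailing.size())) {
            candidates.retainAll(dependencies.get(service));
        }
        // DNS and the IP service remain as the most likely common root causes.
        System.out.println("common dependencies of all failing services: " + candidates);
    }
}
```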
It is important to note that the service-oriented
perspective is needed to integrate service aspects, especially QoS
aspects. An example of such an aspect is that a fault does not
lead to a total failure of a service, but its QoS parameters,
respectively agreed service levels, at the customer-provider
interface might not be met. A degradation in service
quality which is caused by high traffic load on the backbone
is another example. In the resource-oriented perspective it
would be possible to define events which indicate that there
is a link usage higher than a threshold, but no mechanism
has currently been established to find out which services are
affected and whether a QoS violation occurs.
To summarize, the reasons for the necessity of a
service-oriented event correlation are the following:
Keeping of SLAs (top-down perspective): The time
interval between the first symptom (recognized either
by provider, network management tools, or customers)
that a service does not perform properly and the
verified fault repair needs to be minimized. This is
especially needed with respect to SLAs as such agreements
often contain guarantees like a mean time to repair.
Effort reduction (top-down perspective): If several
user trouble reports are symptoms of the same fault,
fault processing should be performed only once and
not several times. If the fault has been repaired, the
affected customers should be informed about this
automatically.
Impact analysis (bottom-up perspective): In case of
a fault in a resource, its influence on the associated
services and affected customers can be determined. This
analysis can be performed for short term (when there
is currently a resource failure) or long term (e.g.,
network optimization) considerations.
4. WORKFLOW MODELING
In the following we examine the established IT process
management frameworks IT Infrastructure Library (ITIL)
and enhanced Telecom Operations Map (eTOM). The aim is
to find out where event correlation can be found in the process
structure and how detailed the frameworks currently are.
After that we present our solution for a workflow modeling
for the service-oriented event correlation.
4.1 IT Infrastructure Library (ITIL)
The British Office of Government Commerce (OGC) and
the IT Service Management Forum (itSMF) [9] provide a
collection of best practices for IT processes in the area of
IT service management which is called ITIL. The service
management is described by 11 modules which are grouped
into Service Support Set (provider internal processes) and
Service Delivery Set (processes at the customer-provider
interface). Each module describes processes, functions, roles,
and responsibilities as well as necessary databases and
interfaces. In general, ITIL describes contents, processes, and
aims at a high abstraction level and contains no information
about management architectures and tools.
The fault management is divided into Incident
Management process and Problem Management process.
Incident Management: The Incident Management
contains the service desk as interface to customers (e.g.,
receives reports about service problems). In case of
severe errors structured queries are transferred to the
Problem Management.
Problem Management: The Problem Management"s
tasks are to solve problems, take care of keeping
priorities, minimize the reoccurrence of problems, and to
provide management information. After receiving
requests from the Incident Management, the problem
has to be identified and information about necessary
countermeasures is transferred to the Change
Management.
The ITIL processes describe only what has to be done, but
contain no information on how this can actually be performed.
As a consequence, event correlation is not part of the
modeling. The ITIL incidents could be regarded as input for the
service-oriented event correlation, while the output could be
used as a query to the ITIL Problem Management.
4.2 Enhanced Telecom Operations Map
(eTOM)
The TeleManagement Forum (TMF) [18] is an
international non-profit organization of service providers and
suppliers in the area of telecommunications services. Similar
to ITIL, a process-oriented framework was developed first, but the framework was designed with a narrower focus,
i.e., the market of information and communications service
providers. A horizontal grouping into processes for
customer care, service development & operations, network &
systems management, and partner/supplier is performed. The
vertical grouping (fulfillment, assurance, billing) reflects the
service life cycle.
In the area of fault management three processes have been
defined along the horizontal process grouping.
Problem Handling: The purpose of this process is to
receive trouble reports from customers and to solve them
by using the Service Problem Management. The aim
is also to keep the customer informed about the
current status of the trouble report processing as well as
about the general network status (e.g., planned
maintenance). It is also a task of this process to inform the
QoS/SLA management about the impact of current
errors on the SLAs.
Service Problem Management: In this process reports
about customer-affecting service failures are received
and transformed. Their root causes are identified and
a problem solution or a temporary workaround is
established. The task of the Diagnose Problem
subprocess is to find the root cause of the problem by
performing appropriate tests. Nothing is said about how this
can be done (e.g., no event correlation is mentioned).
Resource Trouble Management: A subprocess of the
Resource Trouble Management is responsible for
resource failure event analysis, alarm correlation &
filtering, and failure event detection & reporting.
Another subprocess is used to execute different tests to
find a resource failure. There is also another
subprocess which keeps track of the status of the trouble
report processing. This subprocess is similar to the
functionality of a trouble ticket system.
The process description in eTOM is not very detailed. It is useful as a checklist of which aspects have to be taken into account for these processes, but there is no detailed modeling of the relationships and no methodology for applying the framework. Event correlation is only mentioned in the resource management, not at the service level.
4.3 Workflow Modeling for the
Service-Oriented Event Correlation
Fig. 2 shows a general service scenario which we will use
as basis for the workflow modeling for the service-oriented
event correlation. We assume that the dependencies are
already known (e.g., by using the approaches mentioned
in Section 2). The provider offers different services which
depend on other services called subservices (service
dependency). Another kind of dependency exists between
services/subservices and resources. These dependencies are
called resource dependencies. These two kinds of
dependencies are in most cases not used for the event correlation
performed today. This resource-oriented event correlation
deals only with relationships on the resource level (e.g.,
network topology).
Figure 2: Different kinds of dependencies for the service-oriented event correlation (provider services depend on subservices via service dependencies; services and subservices depend on resources via resource dependencies)
The dependencies depicted in Figure 2 reflect a situation
with no redundancy in the service provisioning. The
relationships can be seen as AND relationships. In case of
redundancy, if e.g., a provider has 3 independent web servers,
another modeling (see Figure 3) should be used (OR
relationship). In such a case different relationships are possible.
The service could be seen as working properly if one of the
servers is working or a certain percentage of them is working.
Figure 3: Modeling of no redundancy (a) and of redundancy (b)
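For illustration only, the following Python sketch shows one possible way to represent such AND/OR dependencies and to evaluate whether a service can still be considered as working; all class, attribute, and resource names are our own assumptions and are not part of the frameworks discussed above.

class Dependency:
    """A service depends on a set of antecedents (subservices or resources)."""
    def __init__(self, antecedents, mode="AND", quorum=None):
        self.antecedents = antecedents  # objects providing an is_working() method
        self.mode = mode                # "AND": all needed, "OR": redundancy
        self.quorum = quorum            # optional minimal number of working antecedents

    def is_satisfied(self):
        working = sum(1 for a in self.antecedents if a.is_working())
        if self.mode == "AND":
            return working == len(self.antecedents)
        # OR relationship: one working antecedent (or a quorum) is enough
        needed = self.quorum if self.quorum is not None else 1
        return working >= needed

class Resource:
    def __init__(self, name, working=True):
        self.name, self.working = name, working
    def is_working(self):
        return self.working

# Example: a web service hosted on three redundant servers (OR relationship)
servers = [Resource("ws1"), Resource("ws2", working=False), Resource("ws3")]
web_service = Dependency(servers, mode="OR")
print(web_service.is_satisfied())  # True: at least one server is still working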
As both ITIL and eTOM contain no description of how event
correlation, and especially service-oriented event correlation,
should actually be performed, we propose the following
design for such a workflow (see Fig. 4). The additional
components which are not part of a device-oriented event
correlation are depicted with a gray background. The workflow is
divided into the phases fault detection, fault diagnosis, and
fault recovery.
In the fault detection phase resource events and service
events can be generated from different sources. The
resource events are issued during the use of a resource, e.g.,
via SNMP traps. The service events originate from
customer trouble reports, which are reported via the Customer
Service Management (see below) access point. In addition
to these two passive ways to get the events, a provider
can also perform active tests. These tests can either deal
with the resources (resource active probing) or can assume
the role of a virtual customer and test a service or one of its
subservices by performing interactions at the service access
points (service active probing).
The fault diagnosis phase is composed of three event
correlation steps. The first one is performed by the resource
event correlator which can be regarded as the event
correlator in today's commercial systems. Therefore, it deals only
with resource events. The service event correlator does a
correlation of the service events, while the aggregate event
correlator finally performs a correlation of both resource and
service events. If the correlation result of one of the
correlation steps is to be improved, it is possible to go back to
the fault detection phase and start the active probing to get
additional events. These events can be helpful to confirm a
correlation result or to reduce the list of possible root causes.
After the event correlation an ordered list of possible root
causes is checked by the resource management. When the
root cause is found, the failure repair starts. This last step
is performed in the fault recovery phase.
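The following sketch outlines how the three phases could be chained in code. It is a minimal illustration under the assumption that the correlators, the active prober, and the resource management check are available as plain callables; none of these names is prescribed by the workflow itself.

def diagnose(resource_events, service_events,
             correlate_resources, correlate_services, aggregate,
             active_probe=None):
    """Fault diagnosis: three correlation steps, optionally refined by probing."""
    resource_candidates = correlate_resources(resource_events)
    service_candidates = correlate_services(service_events)
    candidates = aggregate(resource_candidates, service_candidates)
    if active_probe and len(candidates) > 1:
        # go back to the fault detection phase to gather additional events
        resource_events = resource_events + active_probe(candidates)
        candidates = aggregate(correlate_resources(resource_events),
                               service_candidates)
    return candidates  # candidate list handed over to resource management (fault recovery)

# Minimal stand-ins, just to show the call structure
top = diagnose(
    resource_events=["link_down"], service_events=["web_site_unreachable"],
    correlate_resources=lambda ev: ["link"] if "link_down" in ev else [],
    correlate_services=lambda ev: ["web_hosting"] if ev else [],
    aggregate=lambda res, srv: res + srv)
print(top)  # ['link', 'web_hosting']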
The next subsections present different elements of the
event correlation process.
4.4 Customer Service Management and
Intelligent Assistant
The Customer Service Management (CSM) access point
was proposed by [13] as a single interface between customer
Figure 4: Event correlation workflow
and provider. Its functionality is to provide information
to the customer about his subscribed services, e.g., reports
about the fulfillment of agreed SLAs. It can also be used to
subscribe to services or to allow the customer to manage his
services in a restricted way. Reports about problems with a
service can be sent to the customer via CSM. The CSM is
also contained in the MNM Service Model (see Section 5).
To reduce the effort for the provider's first level support,
an Intelligent Assistant can be added to the CSM. The
Intelligent Assistant structures the customer's information about
a service problem. The information which is needed for a
preclassification of the problem is gathered from a list of
questions to the customer. The list is not static, as the
current question depends on the answers to prior questions or
on the result of specific tests. A decision tree is used
to structure the questions and tests. The tests allow the
customer to gain controlled access to the provider's
management. At the LRZ a customer of the E-Mail Service can,
e.g., use the Intelligent Assistant to perform ping requests
to the mail server. But also more complex requests could be
possible, e.g., requests of a combination of SNMP variables.
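A decision tree of this kind could, for example, be sketched as follows. The questions, tests, and preclassification labels are invented for this example and do not reflect the actual assistant deployed at the LRZ.

class Node:
    def __init__(self, question=None, test=None, branches=None, classification=None):
        self.question = question              # question asked to the customer
        self.test = test                      # optional controlled test (e.g., a ping)
        self.branches = branches or {}        # answer -> next Node
        self.classification = classification  # set at leaf nodes

def run_assistant(node, answer_fn):
    """Walk the tree; answer_fn maps a question (and optional test result) to an answer."""
    while node.classification is None:
        test_result = node.test() if node.test else None
        answer = answer_fn(node.question, test_result)
        node = node.branches[answer]
    return node.classification

# Illustrative tree for a web hosting problem
leaf_dns = Node(classification="possible DNS problem")
leaf_content = Node(classification="possible content update problem")
root = Node(question="Is the web site reachable at all?",
            test=lambda: True,   # placeholder for a real reachability test
            branches={"no": leaf_dns, "yes": leaf_content})

print(run_assistant(root, lambda q, t: "yes"))  # -> possible content update problem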
4.5 Active Probing
Active probing is useful for the provider to check his
offered services. The aim is to identify and react to problems
before a customer notices them. The probing can be done
from a customer point of view or by testing the resources
which are part of the services. It can also be useful to
perform tests of subservices (own subservices or subservices
offered by suppliers).
Different schedules are possible to perform the active
probing. The provider could select to test important services
and resources in regular time intervals. Other tests could
be initiated by a user who traverses the decision tree of the
Intelligent Assistant including active tests. Another
possibility for the use of active probing is a request from the event
correlator, if the current correlation result needs to be
improved. The results of active probing are reported via service
or resource events to the event correlator (or if the test was
demanded by the Intelligent Assistant the result is reported
to it, too). While the events that are received from
management tools and customers denote negative events (something
does not work), the events from active probing should also
contain positive events for a better discrimination.
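The following sketch illustrates how active probing results could be turned into positive or negative events for the correlator, covering both a periodic schedule and probing on demand; the function and test names are assumptions made for this example.

import time

def probe(name, test):
    """Run one active test and return a positive or negative event."""
    ok = test()
    return {"source": "active_probing", "target": name,
            "positive": ok, "timestamp": time.time()}

def run_schedule(tests, report, rounds=1, interval=0.0):
    """Periodic probing of important services and resources (fixed time intervals)."""
    for _ in range(rounds):
        for name, test in tests.items():
            report(probe(name, test))
        time.sleep(interval)

def probe_on_demand(names, tests):
    """Probing requested by the event correlator to sharpen a correlation result."""
    return [probe(name, tests[name]) for name in names]

# Example: one service test (virtual customer) and one resource test
tests = {"virtual_www": lambda: True, "load_balancer": lambda: False}
run_schedule(tests, report=print)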
4.6 Event Correlator
The event correlation should not be performed by a single
event correlator, but by using different steps. The reason
for this lies in the different characteristics of the dependencies
(see Fig. 1).
On the resource level there are only relationships between
resources (network topology, systems configuration). An
example for this could be a switch linking separate LANs. If
the switch is down, events are reported that other network
components which are located behind the switch are also not
reachable. When correlating these events it can be figured
out that the switch is the likely error cause. At this stage,
the integration of service events does not seem to be helpful.
The result of this step is a list of resources which could be
the problem's root cause. The resource event correlator is
used to perform this step.
In the service-oriented scenario there are also service and
resource dependencies. As the next step in the event
correlation process, the service events should be correlated with
each other using the service dependencies, because the
service dependencies have no direct relationship to the resource
level. The result of this step, which is performed by the
service event correlator, is a list of services/subservices which
could contain a failure in a resource. If, e.g., there are
service events from customers that two services do not work
and both services depend on a common subservice, it seems
more likely that the resource failure can be found inside the
subservice. The output of this correlation is a list of
services/subservices which could be affected by a failure in an
associated resource.
In the last step the aggregate event correlator matches
the lists from the resource event correlator and the service event
correlator to find the problem's possible root cause. This is
done by using the resource dependencies.
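A minimal sketch of this three-step correlation, assuming the dependency information is available as plain dictionaries (all service, subservice, and resource names are purely illustrative):

# resource_deps: service/subservice -> set of resources it directly uses
# service_deps:  service -> set of subservices it depends on
def correlate_service_events(service_events, service_deps):
    """Return services/subservices that could contain the faulty resource."""
    suspects = set(service_events)
    for s in service_events:
        suspects |= service_deps.get(s, set())   # a shared subservice is more suspicious
    return suspects

def aggregate(resource_candidates, service_candidates, resource_deps):
    """Match both lists via the resource dependencies."""
    possible = set()
    for s in service_candidates:
        possible |= resource_deps.get(s, set())
    return [r for r in resource_candidates if r in possible] or resource_candidates

service_deps = {"web_hosting": {"storage", "dns"}, "email": {"dns"}}
resource_deps = {"web_hosting": {"load_balancer"}, "storage": {"afs"}, "dns": {"dns_server"}}
srv = correlate_service_events(["web_hosting", "email"], service_deps)
print(aggregate(["dns_server", "switch_17"], srv, resource_deps))  # ['dns_server']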
The event correlation techniques presented in Section 2
could be used to perform the correlation inside the three
event correlators. If the dependencies can be found precisely,
an RBR or codebook approach seems to be appropriate. A
case database (CBR) could be used if there are cases which
could not be covered by RBR or the codebook approach.
These cases could then be used to improve the modeling in
a way that RBR or the codebook approach can deal with
them in future correlations.
5. INFORMATION MODELING
In this section we use a generic model for IT service
management to derive the information necessary for the event
correlation process.
5.1 MNM Service Model
The MNM Service Model [5] is a generic model for IT
service management. A distinction is made between customer
side and provider side. The customer side contains the
basic roles customer and user, while the provider side contains
the role provider. The provider makes the service available
to the customer side. The service as a whole is divided into
usage which is accessed by the role user and management
which is used by the role customer.
The model consists of two main views. The Service View
(see Fig. 5) shows a common perspective of the service for
customer and provider. Everything that is only important
for the service realization is not contained in this view. For
these details another perspective, the Realization View, is
defined (see Fig. 6).
Figure 5: Service View
The Service View contains the service for which the
functionality is defined for usage as well as for management. There
are two access points (service access point and CSM access
point) where user and customer can access the usage and
management functionality, respectively. Associated to each
service is a list of QoS parameters which have to be met by
the service at the service access point. The QoS surveillance
is performed by the management.
Figure 6: Realization View
In the Realization View the service implementation and the
service management implementation are described in detail.
For both there are provider-internal resources and
subservices. For the service implementation a service logic uses
internal resources (devices, knowledge, staff) and external
subservices to provide the service. Analogously, the service
management implementation includes a service management
logic using basic management functionalities [8] and external
management subservices.
The MNM Service Model can be used for a similar
modeling of the used subservices, i.e., the model can be applied
recursively.
As the service-oriented event correlation has to use
dependencies of a service from subservices and resources, the
model is used in the following to derive the needed
information for service events.
5.2 Information Modeling for Service Events
Today's event correlation deals mainly with events which
originate from resources. Besides a resource identifier,
these events contain information about the resource status,
e.g., SNMP variables. To perform a service-oriented event
correlation it is necessary to define events which are related
to services. These events can be generated from the
provider's own service surveillance or from customer reports
at the CSM interface. They contain information about the
problems with the agreed QoS. In our information
modeling we define an event superclass which contains common
attributes (e.g., time stamp). Resource event and service
event inherit from this superclass.
Derived from the MNM Service Model we define the
information necessary for a service event.
Service: As a service event shall represent the problems of
a single service, a unique identification of the affected
service is contained here.
Event description: This field has to contain a description
of the problem. Depending on the interactions at the
service access point (Service View) a classification of
the problem into different categories should be defined.
It should also be possible to add an informal
description of the problem.
QoS parameters: For each service QoS parameters
(Service View) are defined between the provider and the
customer. This field represents a list of these QoS
parameters and agreed service levels. The list can help
the provider to set the priority of a problem with
respect to the service levels agreed.
Resource list: This list contains the resources (Realization
View) which are needed to provide the service. This
list is used by the provider to check if one of these
resources causes the problem.
Subservice service event identification: In the service
hierarchy (Realization View) the service, for which this
service event has been issued, may depend on
subservices. If there is a suspicion that one of these
subservices causes the problem, child service events are
issued from this service event for the subservices. In
such a case this field contains links to the
corresponding events.
Other event identifications: In the event correlation
process the service event can be correlated with other
service events or with resource events. This field then
contains links to other events which have been
correlated to this service event. This is useful to, e.g., send a
common message to all affected customers when their
subscribed services are available again.
Issuer's identification: This field can either contain an
identification of the customer who reported the
problem, an identification of a service provider's employee
(in case the failure has been detected by the provider's
own service active probing) or a link to a parent
service event. The identification is needed, if there are
ambiguities in the service event or the issuer should
be informed (e.g., that the service is available again).
The possible issuers refer to the basic roles (customer,
provider) in the Service Model.
Assignee: To keep track of the processing, the name and
address of the provider's employee who is solving or
has solved the problem are also noted. This is a
specialization of the provider role in the Service Model.
Dates: This field contains key dates in the processing of the
service event such as initial date, problem
identification date, and resolution date. These dates are important
to keep track of how quickly the problems have been solved.
Status: This field represents the service event's current
status (e.g., active, suspended, solved).
Priority: The priority shows the importance the service
event has from the provider's perspective. The
importance is derived from the service agreement, especially
the agreed QoS parameters (Service View).
The fields dates, status, and other event identifications are not
derived directly from the Service Model, but are necessary
for the event correlation process.
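The field list above could be captured, for example, by the following data classes. This is only a sketch of the information modeling; the attribute names and types are our own choices and not a normative schema.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List, Optional

@dataclass
class Event:
    """Superclass with the attributes common to resource and service events."""
    event_id: str
    timestamp: datetime

@dataclass
class ResourceEvent(Event):
    resource_id: str
    status_info: Dict[str, str] = field(default_factory=dict)  # e.g., SNMP variables

@dataclass
class ServiceEvent(Event):
    service_id: str                       # unique identification of the affected service
    description: str                      # classified and/or informal problem description
    qos_parameters: List[str] = field(default_factory=list)    # agreed QoS/SLA levels
    resource_list: List[str] = field(default_factory=list)     # resources providing the service
    subservice_event_ids: List[str] = field(default_factory=list)  # child service events
    related_event_ids: List[str] = field(default_factory=list)     # correlated events
    issuer: Optional[str] = None          # customer, provider employee, or parent event
    assignee: Optional[str] = None        # provider employee working on the problem
    dates: Dict[str, datetime] = field(default_factory=dict)   # initial/identification/resolution
    status: str = "active"                # active, suspended, solved, ...
    priority: int = 0                     # derived from the agreed service levels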
6. APPLICATION OF
SERVICE-ORIENTED EVENT CORRELATION FOR A
WEB HOSTING SCENARIO
The Leibniz Supercomputing Center is the joint
computing center for the Munich universities and research
institutions. It also runs the Munich Scientific Network and offers
related services. One of these services is the Virtual WWW
Server, a web hosting offer for smaller research institutions.
It currently has approximately 200 customers.
A subservice of the Virtual WWW Server is the
Storage Service which stores the static and dynamic web pages
and uses caching techniques for a fast access. Other
subservices are DNS and IP service. When a user accesses a
hosted web site via one of the LRZ's Virtual Private
Networks the VPN service is also used. The resources of the
Virtual WWW Server include a load balancer and 5
redundant servers. The network connections are also part of the
resources as well as the Apache web server application
running on the servers. Figure 7 shows the dependencies of the
Virtual WWW Server.
6.1 Customer Service Management and
Intelligent Assistant
The Intelligent Assistant that is available at the Leibniz
Supercomputing Center can currently be used for
connectivity or performance problems or problems with the LRZ
E-Mail Service. A selection of possible customer problem
reports for the Virtual WWW Server is given in the
following:
• The hosted web site is not reachable.
• The web site access is (too) slow.
• The web site contains outdated content.
Figure 7: Dependencies of the Virtual WWW Server
• The transfer of new content to the LRZ does not
change the provided content.
• The web site looks strange (e.g., caused by problems
with HTML version)
These customer reports have to be mapped onto failures
in resources. For an unreachable web site, e.g., different
root causes are possible, such as a DNS problem, a connectivity
problem, or a wrong configuration of the load balancer.
6.2 Active Probing
In general, active probing can be used for services or
resources. For the service active probing of the Virtual WWW
Server a virtual customer could be installed. This customer
performs typical HTTP requests on web sites and compares the
answers with the known content. To check the up-to-dateness
of a test web site, the content could contain a time stamp.
The service active probing could also include the testing of
subservices, e.g., sending requests to the DNS.
The resource active probing performs tests of the resources.
Examples are connectivity tests, requests to application
processes, and tests of available disk space.
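A virtual customer of this kind could be approximated with a few lines of Python; the URL, the expected content marker, and the embedded time stamp convention are assumptions made for this example, not details of the actual LRZ probing.

import time
import urllib.request

def probe_web_site(url, expected_marker, max_age_seconds=None):
    """Fetch a test page and compare it with the known content.

    If the test page embeds a line "timestamp: <unix time>", max_age_seconds
    can be used to check that the content is up to date."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            body = response.read().decode("utf-8", errors="replace")
    except Exception as exc:
        return {"positive": False, "reason": "request failed: %s" % exc}
    if expected_marker not in body:
        return {"positive": False, "reason": "unexpected content"}
    if max_age_seconds is not None and "timestamp:" in body:
        stamp = float(body.split("timestamp:")[1].split()[0])
        if time.time() - stamp > max_age_seconds:
            return {"positive": False, "reason": "content outdated"}
    return {"positive": True, "reason": "ok"}

# Example call (the test URL is an assumption):
# probe_web_site("http://www.example.org/testpage.html", "LRZ test page", 3600)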
6.3 Event Correlation for the Virtual WWW
Server
Figure 8 shows the example processing. At first, a
customer who takes a look at his hosted web site reports that
the content that he had changed is not displayed correctly.
This report is transferred to the service management via
the CSM interface. An Intelligent Assistant could be used
to structure the customer report. The service management
translates the customer report into a service event.
Independently of the customer report, the service
provider's own service active probing tries to change the content
of a test web site. Because this is not possible, a service
event is issued.
Meanwhile, a resource event has been reported to the
event correlator, because an access of the content caching
server to one of the WWW servers failed. As there are no
other events at the moment the resource event correlation
Figure 8: Example processing of a customer report
cannot correlate this event to other events. At this stage
it would be possible that the event correlator asks the
resource management to perform an active probing of related
resources.
Both service events are now transferred to the service
event correlator and are correlated. From the correlation
of these events it seems likely that either the WWW server
itself or the link to the WWW server is the problem's root
cause. A wrong web site update procedure inside the
content caching server seems to be less likely as this would only
explain the customer report and not the service active
probing result. At this stage a service active probing could be
started, but this does not seem to be useful as this
correlation only deals with the Web Hosting Service and its
resources and not with other services.
After the separate correlation of both resource and service
events, which can be performed in parallel, the aggregate
event correlator is used to correlate both types of events.
The additional resource event makes it seem much more
likely that the problems are caused by a broken link to the
WWW server or by the WWW server itself and not by the
content caching server. In this case the event correlator asks
the resource management to check the link and the WWW
server. The decision between these two likely error causes
cannot be further automated here.
Later, the resource management finds out that a broken
link is the failure's root cause. It informs the event correlator
about this and it can be determined that this explains all
previous events. Therefore, the event correlation can be
stopped at this point.
Depending on the provider's customer relationship
management, the finding of the root cause and an expected repair
time could be reported to the customers. After the link has
been repaired, it is possible to report this event via the CSM
interface.
Even though many details of this event correlation process
could also be performed differently, the example showed an
important advantage of the service-oriented event
correlation. The relationship between the service provisioning and
the provider"s resources is explicitly modeled. This allows a
mapping of the customer report onto the provider-internal
resources.
6.4 Event Correlation for Different Services
If a provider like the LRZ offers several services, the
service-oriented event correlation can be used to reveal relationships
that are not obvious in the first place. If the LRZ E-Mail
Service and its events are viewed in relationship with the
events for the Virtual WWW Server, it is possible to identify
failures in common subservices and resources. Both services
depend on the DNS, which means that customer reports like
"I cannot retrieve new e-mail" and "The web site of my
research institute is not available" can have a common cause,
e.g., the DNS does not work properly.
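The common-subservice argument can be made concrete with a few lines (again with purely illustrative names): intersecting the dependency sets of the services for which customer reports exist surfaces the shared DNS subservice as a candidate.

service_deps = {
    "email":       {"dns", "mail_store"},
    "virtual_www": {"dns", "storage", "ip"},
}

def common_suspects(reported_services, service_deps):
    """Subservices shared by all services for which customer reports exist."""
    sets = [service_deps[s] for s in reported_services]
    return set.intersection(*sets) if sets else set()

print(common_suspects(["email", "virtual_www"], service_deps))  # {'dns'}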
7. CONCLUSION AND FUTURE WORK
In our paper we showed the need for a service-oriented
event correlation. For an IT service provider this new kind
of event correlation makes it possible to automatically map
problems with the current service quality onto resource
failures. This helps to find the failure's root cause earlier and
to reduce costs for SLA violations. In addition, customer
reports can be linked together and therefore the processing
effort can be reduced.
To receive these benefits we presented our approach for
performing the service-oriented event correlation as well as
a modeling of the necessary correlation information. In the
future we are going to apply our workflow and information
modeling for services offered by the Leibniz Supercomputing
Center going further into details.
Several issues have not been treated in detail so far, e.g.,
the consequences for the service-oriented event correlation if
a subservice is offered by another provider. If a service does
not perform properly, it has to be determined whether this
is caused by the provider himself or by the subservice. In
the latter case appropriate information has to be exchanged
between the providers via the CSM interface. Another issue
is the use of active probing in the event correlation process
which can improve the result, but can also lead to a
correlation delay.
Another important point is the precise definition of
dependency which has also been left out by many other
publications. To avoid having too many dependencies in a certain
situation, one could try to check whether the dependencies
currently exist. In case of a download from a web site there
is only a dependency from the DNS subservice at the
beginning, but after the address is resolved a download
failure is unlikely to have been caused by the DNS. Another
possibility to reduce the dependencies is to divide a service
into its possible user interactions (e.g., an e-mail service into
transactions like get mail, send mail, etc.) and to define the
dependencies for each user interaction.
Acknowledgments
The authors wish to thank the members of the Munich
Network Management (MNM) Team for helpful discussions and
valuable comments on previous versions of the paper. The
MNM Team, directed by Prof. Dr. Heinz-Gerd Hegering, is a
group of researchers of the Munich Universities and the
Leibniz Supercomputing Center of the Bavarian Academy of
Sciences. Its web server is located at wwwmnmteam.informatik.uni-muenchen.de.
8. REFERENCES
[1] K. Appleby, G. Goldszmidt, and M. Steinder.
Yemanja - A Layered Event Correlation Engine for
Multi-domain Server Farms. In Proceedings of the
Seventh IFIP/IEEE International Symposium on
Integrated Network Management, pages 329-344.
IFIP/IEEE, May 2001.
[2] Spectrum, Aprisma Corporation.
http://www.aprisma.com.
[3] C. Ensel. New Approach for Automated Generation of
Service Dependency Models. In Network Management
as a Strategy for Evolution and Development; Second
Latin American Network Operation and Management
Symposium (LANOMS 2001). IEEE, August 2001.
[4] C. Ensel and A. Keller. An Approach for Managing
Service Dependencies with XML and the Resource
Description Framework. Journal of Network and
Systems Management, 10(2), June 2002.
[5] M. Garschhammer, R. Hauck, H.-G. Hegering,
B. Kempter, M. Langer, M. Nerb, I. Radisic,
H. Roelle, and H. Schmidt. Towards generic Service
Management Concepts - A Service Model Based
Approach. In Proceedings of the Seventh IFIP/IEEE
International Symposium on Integrated Network
Management, pages 719-732. IFIP/IEEE, May 2001.
[6] B. Gruschke. Integrated Event Management: Event
Correlation using Dependency Graphs. In Proceedings
of the 9th IFIP/IEEE International Workshop on
Distributed Systems: Operations & Management
(DSOM 98). IEEE/IFIP, October 1998.
[7] M. Gupta, A. Neogi, M. Agarwal, and G. Kar.
Discovering Dynamic Dependencies in Enterprise
Environments for Problem Determination. In
Proceedings of the 14th IFIP/IEEE Workshop on
Distributed Sytems: Operations and Management.
IFIP/IEEE, October 2003.
[8] H.-G. Hegering, S. Abeck, and B. Neumair. Integrated
Management of Networked Systems - Concepts,
Architectures and their Operational Application.
Morgan Kaufmann Publishers, 1999.
[9] IT Infrastructure Library, Office of Government
Commerce and IT Service Management Forum.
http://www.itil.co.uk.
[10] G. Jakobson and M. Weissman. Alarm Correlation.
IEEE Network, 7(6), November 1993.
[11] G. Jakobson and M. Weissman. Real-time
Telecommunication Network Management: Extending
Event Correlation with Temporal Constraints. In
Proceedings of the Fourth IEEE/IFIP International
Symposium on Integrated Network Management, pages
290-301. IEEE/IFIP, May 1995.
[12] S. Kliger, S. Yemini, Y. Yemini, D. Ohsie, and
S. Stolfo. A Coding Approach to Event Correlation. In
Proceedings of the Fourth IFIP/IEEE International
Symposium on Integrated Network Management, pages
266-277. IFIP/IEEE, May 1995.
[13] M. Langer, S. Loidl, and M. Nerb. Customer Service
Management: A More Transparent View To Your
Subscribed Services. In Proceedings of the 9th
IFIP/IEEE International Workshop on Distributed
Systems: Operations & Management (DSOM 98),
Newark, DE, USA, October 1998.
[14] L. Lewis. A Case-based Reasoning Approach for the
Resolution of Faults in Communication Networks. In
Proceedings of the Third IFIP/IEEE International
Symposium on Integrated Network Management.
IFIP/IEEE, 1993.
[15] L. Lewis. Service Level Management for Enterprise
Networks. Artech House, Inc., 1999.
[16] NETeXPERT, Agilent Technologies.
http://www.agilent.com/comms/OSS.
[17] InCharge, Smarts Corporation.
http://www.smarts.com.
[18] Enhanced Telecom Operations Map, TeleManagement
Forum. http://www.tmforum.org.
[19] Verizon Communications. http://www.verizon.com.
[20] H. Wietgrefe, K.-D. Tuchs, K. Jobmann, G. Carls,
P. Froelich, W. Nejdl, and S. Steinfeld. Using Neural
Networks for Alarm Correlation in Cellular Phone
Networks. In International Workshop on Applications
of Neural Networks to Telecommunications
(IWANNT), May 1997.
[21] S. Yemini, S. Kliger, E. Mozes, Y. Yemini, and
D. Ohsie. High Speed and Robust Event Correlation.
IEEE Communications Magazine, 34(5), May 1996.
| service level agreement;qos;fault management;process management framework;customer service management;service management;service-oriented management;event correlation;rule-based reasoning;case-based reasoning;service-oriented event correlation |
train_C-77 | Tracking Immediate Predecessors in Distributed Computations | A distributed computation is usually modeled as a partially ordered set of relevant events (the relevant events are a subset of the primitive events produced by the computation). An important causality-related distributed computing problem, that we call the Immediate Predecessors Tracking (IPT) problem, consists in associating with each relevant event, on the fly and without using additional control messages, the set of relevant events that are its immediate predecessors in the partial order. So, IPT is the on-the-fly computation of the transitive reduction (i.e., Hasse diagram) of the causality relation defined by a distributed computation. This paper addresses the IPT problem: it presents a family of protocols that provides each relevant event with a timestamp that exactly identifies its immediate predecessors. The family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes). In that sense, this family defines message size-efficient IPT protocols. According to the way the general condition is implemented, different IPT protocols can be obtained. Two of them are exhibited. | 1. INTRODUCTION
A distributed computation consists of a set of processes
that cooperate to achieve a common goal. A main
characteristic of these computations lies in the fact that the
processes do not share a common global memory, and
communicate only by exchanging messages over a
communication network. Moreover, message transfer delays are finite
but unpredictable. This computation model defines what
is known as the asynchronous distributed system model. It
is particularly important as it includes systems that span
large geographic areas, and systems that are subject to
unpredictable loads. Consequently, the concepts, tools and
mechanisms developed for asynchronous distributed systems
reveal to be both important and general.
Causality is a key concept to understand and master the
behavior of asynchronous distributed systems [18]. More
precisely, given two events e and f of a distributed
computation, a crucial problem that has to be solved in a lot of
distributed applications is to know whether they are causally
related, i.e., if the occurrence of one of them is a consequence
of the occurrence of the other. The causal past of an event
e is the set of events from which e is causally dependent.
Events that are not causally dependent are said to be
concurrent. Vector clocks [5, 16] have been introduced to allow
processes to track causality (and concurrency) between the
events they produce. The timestamp of an event produced
by a process is the current value of the vector clock of the
corresponding process. In that way, by associating vector
timestamps with events it becomes possible to safely decide
whether two events are causally related or not.
Usually, according to the problem he focuses on, a
designer is interested only in a subset of the events produced by
a distributed execution (e.g., only the checkpoint events are
meaningful when one is interested in determining
consistent global checkpoints [12]). It follows that detecting causal
dependencies (or concurrency) on all the events of the
distributed computation is not desirable in all applications [7,
15]. In other words, among all the events that may occur
in a distributed computation, only a subset of them are
relevant. In this paper, we are interested in the restriction of
the causality relation to the subset of events defined as being
the relevant events of the computation.
Being a strict partial order, the causality relation is
transitive. As a consequence, among all the relevant events that
causally precede a given relevant event e, only a subset are
its immediate predecessors: those are the events f such that
there is no relevant event on any causal path from f to e.
Unfortunately, given only the vector timestamp associated
with an event it is not possible to determine which events of
its causal past are its immediate predecessors. This comes
from the fact that the vector timestamp associated with e
determines, for each process, the last relevant event
belong210
ing to the causal past of e, but such an event is not
necessarily an immediate predecessor of e. However, some
applications [4, 6] require to associate with each relevant event only
the set of its immediate predecessors. Those applications are
mainly related to the analysis of distributed computations.
Some of those analyses require the construction of the
lattice of consistent cuts produced by the computation [15, 16].
It is shown in [4] that the tracking of immediate
predecessors allows an efficient on the fly construction of this lattice.
More generally, these applications are interested in the very
structure of the causal past. In this context, the
determination of the immediate predecessors becomes a major issue
[6]. Additionally, in some circumstances, this determination
has to satisfy behavior constraints. If the communication
pattern of the distributed computation cannot be modified,
the determination has to be done without adding control
messages. When the immediate predecessors are used to
monitor the computation, it has to be done on the fly.
We call Immediate Predecessor Tracking (IPT) the
problem that consists in determining on the fly and without
additional messages the immediate predecessors of relevant
events. This problem consists actually in determining the
transitive reduction (Hasse diagram) of the causality graph
generated by the relevant events of the computation.
Solving this problem requires tracking causality, hence using
vector clocks. Previous works have addressed the efficient
implementation of vector clocks to track causal dependence on
relevant events. Their aim was to reduce the size of
timestamps attached to messages. An efficient vector clock
implementation suited to systems with fifo channels is proposed
in [19]. Another efficient implementation that does not
depend on channel ordering property is described in [11]. The
notion of causal barrier is introduced in [2, 17] to reduce
the size of control information required to implement causal
multicast. However, none of these papers considers the
IPT problem. This problem has been addressed for the first
time (to our knowledge) in [4, 6] where an IPT protocol
is described, but without correctness proof. Moreover, in
this protocol, timestamps attached to messages are of size
n. This raises the following question which, to our
knowledge, has never been answered: Are there efficient vector
clock implementation techniques that are suitable for the IPT
problem?
This paper has three main contributions: (1) a positive
answer to the previous open question, (2) the design of a
family of efficient IPT protocols, and (3) a formal
correctness proof of the associated protocols. From a
methodological point of view the paper uses a top-down approach. It
states abstract properties from which more concrete
properties and protocols are derived. The family of IPT
protocols is defined by a general condition that allows
application messages to piggyback control information whose size
can be smaller than the system size (i.e., smaller than the
number of processes composing the system). In that sense,
this family defines low cost IPT protocols when we
consider the message size. In addition to efficiency, the proposed
approach has an interesting design property. Namely, the
family is incrementally built in three steps. The basic
vector clock protocol is first enriched by adding to each process
a boolean vector whose management allows the processes
to track the immediate predecessor events. Then, a general
condition is stated to reduce the size of the control
information carried by messages. Finally, according to the way this
condition is implemented, three IPT protocols are obtained.
The paper is composed of seven sections. Section 2
introduces the computation model, vector clocks and the notion
of relevant events. Section 3 presents the first step of the
construction that results in an IPT protocol in which each
message carries a vector clock and a boolean array, both
of size n (the number of processes). Section 4 improves
this protocol by providing the general condition that allows
a message to carry control information whose size can be
smaller than n. Section 5 provides instantiations of this
condition. Section 6 provides a simulation study comparing
the behaviors of the proposed protocols. Finally, Section 7
concludes the paper. (Due to space limitations, proofs of
lemmas and theorems are omitted. They can be found in
[1].)
2. MODEL AND VECTOR CLOCK
2.1 Distributed Computation
A distributed program is made up of sequential local
programs which communicate and synchronize only by
exchanging messages. A distributed computation describes the
execution of a distributed program. The execution of a local
program gives rise to a sequential process. Let {P1, P2, . . . ,
Pn} be the finite set of sequential processes of the distributed
computation. Each ordered pair of communicating processes
(Pi, Pj ) is connected by a reliable channel cij through which
Pi can send messages to Pj. We assume that each message
is unique and a process does not send messages to itself (1).
Message transmission delays are finite but unpredictable.
Moreover, channels are not necessarily fifo. Process speeds
are positive but arbitrary. In other words, the underlying
computation model is asynchronous.
The local program associated with Pi can include send,
receive and internal statements. The execution of such a
statement produces a corresponding send/receive/internal
event. These events are called primitive events. Let e_i^x
be the x-th event produced by process Pi. The sequence
hi = e_i^1 e_i^2 . . . e_i^x . . . constitutes the history of Pi, denoted
Hi. Let H = ∪_{i=1..n} Hi be the set of events produced by a
distributed computation. This set is structured as a partial
order by Lamport's happened before relation [14] (denoted
→_hb) and defined as follows: e_i^x →_hb e_j^y if and only if
(i = j ∧ x + 1 = y) (local precedence) ∨
(∃m : e_i^x = send(m) ∧ e_j^y = receive(m)) (message precedence) ∨
(∃ e_k^z : e_i^x →_hb e_k^z ∧ e_k^z →_hb e_j^y) (transitive closure).
max(e_i^x, e_j^y) is a partial function defined only when e_i^x and
e_j^y are ordered. It is defined as follows: max(e_i^x, e_j^y) = e_i^x if
e_j^y →_hb e_i^x, and max(e_i^x, e_j^y) = e_j^y if e_i^x →_hb e_j^y.
Clearly the restriction of →_hb to Hi, for a given i, is a total
order. Thus we will use the notation e_i^x < e_i^y iff x < y.
Throughout the paper, we will use the following notation:
if e ∈ Hi is not the first event produced by Pi, then pred(e)
denotes the event immediately preceding e in the sequence
Hi. If e is the first event produced by Pi, then pred(e) is
denoted by ⊥ (meaning that there is no such event), and
∀e ∈ Hi : ⊥ < e. The partial order Ĥ = (H, →_hb)
constitutes a formal model of the distributed computation it is
associated with.
(1) This assumption is only in order to get simple protocols.
Figure 1: Timestamped Relevant Events and Immediate Predecessors Graph (Hasse Diagram)
2.2 Relevant Events
For a given observer of a distributed computation, only
some events are relevant (2) [7, 9, 15]. An interesting example
of what an observation is, is the detection of predicates
on consistent global states of a distributed computation [3,
6, 8, 9, 13, 15]. In that case, a relevant event corresponds
to the modification of a local variable involved in the global
predicate. Another example is the checkpointing problem
where a relevant event is the definition of a local checkpoint
[10, 12, 20].
The left part of Figure 1 depicts a distributed computation
using the classical space-time diagram. In this figure, only
relevant events are represented. The sequence of relevant
events produced by process Pi is denoted by Ri, and R =
∪_{i=1..n} Ri ⊆ H denotes the set of all relevant events. Let →
be the relation on R defined in the following way:
∀ (e, f) ∈ R × R : (e → f) ⇔ (e →_hb f).
The poset (R, →) constitutes an abstraction of the
distributed computation [7]. In the following we consider a
distributed computation at such an abstraction level.
Moreover, without loss of generality we consider that the set of
relevant events is a subset of the internal events (if a
communication event has to be observed, a relevant internal event
can be generated just before a send and just after a receive
communication event occurred). Each relevant event is
identified by a pair (process id, sequence number) (see Figure 1).
Definition 1. The relevant causal past of an event e ∈
H is the (partially ordered) subset of relevant events f such
that f →_hb e. It is denoted ↑(e). We have ↑(e) = {f ∈ R | f →_hb e}.
Note that, if e ∈ R then ↑ (e) = {f ∈ R | f → e}. In
the computation described in Figure 1, we have, for the
event e identified (2, 2): ↑ (e) = {(1, 1), (1, 2), (2, 1), (3, 1)}.
The following properties are immediate consequences of the
previous definitions. Let e ∈ H.
CP1 If e is not a receive event then ↑(e) =
∅ if pred(e) = ⊥,
↑(pred(e)) ∪ {pred(e)} if pred(e) ∈ R,
↑(pred(e)) if pred(e) ∉ R.
CP2 If e is a receive event (of a message m) then ↑(e) =
↑(send(m)) if pred(e) = ⊥,
↑(pred(e)) ∪ ↑(send(m)) ∪ {pred(e)} if pred(e) ∈ R,
↑(pred(e)) ∪ ↑(send(m)) if pred(e) ∉ R.
(2) Those events are sometimes called observable events.
Definition 2. Let e ∈ Hi. For every j such that ↑ (e) ∩
Rj ≠ ∅, the last relevant event of Pj with respect to e is:
lastr(e, j) = max{f | f ∈ ↑(e) ∩ Rj}. When ↑(e) ∩ Rj = ∅,
lastr(e, j) is denoted by ⊥ (meaning that there is no such
event).
Let us consider the event e identified (2,2) in Figure 1. We
have lastr(e, 1) = (1, 2), lastr(e, 2) = (2, 1), lastr(e, 3) =
(3, 1). The following properties relate the events lastr(e, j)
and lastr(f, j) for all the predecessors f of e in the relation
→_hb. These properties follow directly from the definitions.
Let e ∈ Hi.
LR0 ∀e ∈ Hi: lastr(e, i) =
⊥ if pred(e) = ⊥,
pred(e) if pred(e) ∈ R,
lastr(pred(e), i) if pred(e) ∉ R.
LR1 If e is not a receive event: ∀j ≠ i :
lastr(e, j) = lastr(pred(e), j).
LR2 If e is the receive event of m: ∀j ≠ i :
lastr(e, j) = max(lastr(pred(e), j), lastr(send(m), j)).
2.3 Vector Clock System
Definition As a fundamental concept associated with the
causality theory, vector clocks have been introduced in 1988,
simultaneously and independently by Fidge [5] and Mattern
[16]. A vector clock system is a mechanism that associates
timestamps with events in such a way that the
comparison of their timestamps indicates whether the
corresponding events are or are not causally related (and, if they are,
which one is the first). More precisely, each process Pi has a
vector of integers V Ci[1..n] such that V Ci[j] is the number
of relevant events produced by Pj, that belong to the
current relevant causal past of Pi. Note that V Ci[i] counts the
number of relevant events produced so far by Pi. When a
process Pi produces a (relevant) event e, it associates with
e a vector timestamp whose value (denoted e.V C) is equal
to the current value of V Ci.
Vector Clock Implementation The following
implementation of vector clocks [5, 16] is based on the observation
that ∀i, ∀e ∈ Hi, ∀j : e.V Ci[j] = y ⇔ lastr(e, j) = e_j^y,
where e.V Ci is the value of V Ci just after the occurrence
of e (this relation results directly from the properties LR0,
LR1, and LR2). Each process Pi manages its vector clock
V Ci[1..n] according to the following rules:
VC0 V Ci[1..n] is initialized to [0, . . . , 0].
VC1 Each time it produces a relevant event e, Pi increments
its vector clock entry V Ci[i] (V Ci[i] := V Ci[i] + 1) to
212
indicate it has produced one more relevant event, then
Pi associates with e the timestamp e.V C = V Ci.
VC2 When a process Pi sends a message m, it attaches to
m the current value of V Ci. Let m.V C denote this
value.
VC3 When Pi receives a message m, it updates its vector
clock as follows: ∀k : V Ci[k] := max(V Ci[k], m.V C[k]).
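For illustration, rules VC0-VC3 translate almost directly into code. The sketch below (Python) uses a deliberately minimal process class and hand-driven message passing; it only mirrors the four rules and is not a complete implementation.

class Process:
    def __init__(self, pid, n):
        self.pid = pid
        self.vc = [0] * n                 # VC0: vector clock initialized to [0,...,0]

    def relevant_event(self):
        self.vc[self.pid] += 1            # VC1: one more relevant event produced by Pi
        return list(self.vc)              # timestamp e.VC associated with the event

    def send(self):
        return {"vc": list(self.vc)}      # VC2: attach the current value of VCi

    def receive(self, msg):
        # VC3: component-wise maximum with the piggybacked vector
        self.vc = [max(a, b) for a, b in zip(self.vc, msg["vc"])]

# Small hand-driven run with three processes
p1, p2, p3 = (Process(i, 3) for i in range(3))
print(p1.relevant_event())   # [1, 0, 0]
p2.receive(p1.send())
print(p2.relevant_event())   # [1, 1, 0]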
3. IMMEDIATE PREDECESSORS
In this section, the Immediate Predecessor Tracking
(IPT) problem is stated (Section 3.1). Then, some technical
properties of immediate predecessors are stated and proved
(Section 3.2). These properties are used to design the basic
IPT protocol and prove its correctness (Section 3.3). This
IPT protocol, previously presented in [4] without proof, is
built from a vector clock protocol by adding the
management of a local boolean array at each process.
3.1 The IPT Problem
As indicated in the introduction, some applications (e.g.,
analysis of distributed executions [6], detection of
distributed properties [7]) require to determine (on-the-fly and
without additional messages) the transitive reduction of the
relation → (i.e., we must not consider transitive causal
dependency). Given two relevant events f and e, we say that f
is an immediate predecessor of e if f → e and there is no
relevant event g such that f → g → e.
Definition 3. The Immediate Predecessor Tracking
(IPT) problem consists in associating with each relevant event
e the set of relevant events that are its immediate
predecessors. Moreover, this has to be done on the fly and without
additional control message (i.e., without modifying the
communication pattern of the computation).
As noted in the Introduction, the IPT problem is the
computation of the Hasse diagram associated with the partially
ordered set of the relevant events produced by a distributed
computation.
3.2 Formal Properties of IPT
In order to design a protocol solving the IPT problem, it
is useful to consider the notion of immediate relevant
predecessor of any event, whether relevant or not. First, we
observe that, by definition, the immediate predecessor on
Pj of an event e is necessarily the lastr(e, j) event.
Second, for lastr(e, j) to be immediate predecessor of e, there
must not be another lastr(e, k) event on a path between
lastr(e, j) and e. These observations are formalized in the
following definition:
Definition 4. Let e ∈ Hi. The set of immediate
relevant predecessors of e (denoted IP(e)), is the set of the relevant
events lastr(e, j) (j = 1, . . . , n) such that ∀k : lastr(e, j) ∈↑
(lastr(e, k)).
It follows from this definition that IP(e) ⊆ {lastr(e, j)|j =
1, . . . , n} ⊂ ↑(e). When we consider Figure 1, the graph
depicted in its right part describes the immediate predecessors
of the relevant events of the computation defined in its left
part, more precisely, a directed edge (e, f) means that the
relevant event e is an immediate predecessor of the relevant
event f (3).
The following lemmas show how the set of immediate
predecessors of an event is related to those of its predecessors
in the relation
hb
→. They will be used to design and prove
the protocols solving the IPT problem. To ease the reading
of the paper, their proofs are presented in Appendix A.
The intuitive meaning of the first lemma is the following:
if e is not a receive event, all the causal paths arriving at e
have pred(e) as next-to-last event (see CP1). So, if pred(e)
is a relevant event, all the relevant events belonging to its
relevant causal past are separated from e by pred(e), and
pred(e) becomes the only immediate predecessor of e. In
other words, the event pred(e) constitutes a reset w.r.t.
the set of immediate predecessors of e. On the other hand,
if pred(e) is not relevant, it does not separate its relevant
causal past from e.
Lemma 1. If e is not a receive event, IP(e) is equal to:
∅ if pred(e) = ⊥,
{pred(e)} if pred(e) ∈ R,
IP(pred(e)) if pred(e) ∉ R.
The intuitive meaning of the next lemma is as follows: if
e is a receive event receive(m), the causal paths arriving
at e have either pred(e) or send(m) as next-to-last events.
If pred(e) is relevant, as explained in the previous lemma,
this event hides from e all its relevant causal past and
becomes an immediate predecessor of e. Concerning the
last relevant predecessors of send(m), only those that are
not predecessors of pred(e) remain immediate predecessors
of e.
Lemma 2. Let e ∈ Hi be the receive event of a message
m. If pred(e) ∈ Ri, then, ∀j, IP(e) ∩ Rj is equal to:
{pred(e)} if j = i,
∅ if lastr(pred(e),j) ≥ lastr(send(m),j),
IP(send(m)) ∩ Rj if lastr(pred(e),j) < lastr(send(m),j).
The intuitive meaning of the next lemma is the following:
if e is a receive event receive(m), and pred(e) is not
relevant, the last relevant events in the relevant causal past of e are
obtained by merging those of pred(e) and those of send(m)
and by taking the latest on each process. So, the
immediate predecessors of e are either those of pred(e) or those
of send(m). On a process where the last relevant events
of pred(e) and of send(m) are the same event f, none of
the paths from f to e must contain another relevant event,
and thus, f must be immediate predecessor of both events
pred(e) and send(m).
Lemma 3. Let e ∈ Hi be the receive event of a message
m. If pred(e) ∉ Ri, then, ∀j, IP(e) ∩ Rj is equal to:
IP(pred(e)) ∩ Rj if lastr(pred(e),j) > lastr(send(m),j),
IP(send(m)) ∩ Rj if lastr(pred(e),j) < lastr(send(m),j)
IP(pred(e)) ∩ IP(send(m)) ∩ Rj if lastr(pred(e),j) = lastr(send(m),j).
3.3 A Basic IPT Protocol
The basic protocol proposed here associates with each
relevant event e, an attribute encoding the set IP(e) of its
immediate predecessors. From the previous lemmas, the set
(3) Actually, this graph is the Hasse diagram of the partial order associated with the distributed computation.
IP(e) of any event e depends on the sets IP of the events
pred(e) and/or send(m) (when e = receive(m)). Hence the
idea to introduce a data structure allowing to manage the
sets IPs inductively on the poset (H,
hb
→). To take into
account the information from pred(e), each process manages
a boolean array IPi such that, ∀e ∈ Hi the value of IPi
when e occurs (denoted e.IPi) is the boolean array
representation of the set IP(e). More precisely, ∀j : IPi[j] =
1 ⇔ lastr(e, j) ∈ IP(e). As recalled in Section 2.3, the
knowledge of lastr(e,j) (for every e and every j) is based
on the management of vectors V Ci. Thus, the set IP(e) is
determined in the following way:
IP(e) = {e_j^y | e.V Ci[j] = y ∧ e.IPi[j] = 1, j = 1, . . . , n}
Each process Pi updates IPi according to the Lemmas 1,
2, and 3:
1. It results from Lemma 1 that, if e is not a receive event,
the current value of IPi is sufficient to determine e.IPi.
It results from Lemmas 2 and 3 that, if e is a receive
event (e = receive(m)), then determining e.IPi
involves information related to the event send(m). More
precisely, this information involves IP(send(m)) and
the timestamp of send(m) (needed to compare the
events lastr(send(m),j) and lastr(pred(e),j), for
every j). So, both vectors send(m).V Cj and send(m).IPj
(assuming send(m) produced by Pj ) are attached to
message m.
2. Moreover, IPi must be updated upon the occurrence
of each event. In fact, the value of IPi just after an
event e is used to determine the value succ(e).IPi. In
particular, as stated in the Lemmas, the determination
of succ(e).IPi depends on whether e is relevant or not.
Thus, the value of IPi just after the occurrence of event
e must keep track of this event.
The following protocol, previously presented in [4] without
proof, ensures the correct management of arrays V Ci (as in
Section 2.3) and IPi (according to the Lemmas of Section
3.2). The timestamp associated with a relevant event e is
denoted e.TS.
R0 Initialization: Both V Ci[1..n] and IPi[1..n] are
initialized to [0, . . . , 0].
R1 Each time it produces a relevant event e:
- Pi associates with e the timestamp e.TS defined
as follows e.TS = {(k, V Ci[k]) | IPi[k] = 1},
- Pi increments its vector clock entry V Ci[i]
(namely it executes V Ci[i] := V Ci[i] + 1),
- Pi resets IPi: ∀ℓ ≠ i : IPi[ℓ] := 0; IPi[i] := 1.
R2 When Pi sends a message m to Pj, it attaches to m
the current values of V Ci (denoted m.V C) and the
boolean array IPi (denoted m.IP).
R3 When it receives a message m from Pj , Pi executes the
following updates:
∀k ∈ [1..n] : case
V Ci[k] < m.V C[k] then V Ci[k] := m.V C[k];
IPi[k] := m.IP[k]
V Ci[k] = m.V C[k] then IPi[k] := min(IPi[k], m.IP[k])
V Ci[k] > m.V C[k] then skip
endcase
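Rules R0-R3 can be prototyped as follows (a sketch under the same assumptions as the vector clock sketch above: hand-driven message exchange, 0-based process indices, no real transport). The timestamp of a relevant event is the set of (process, sequence number) pairs selected by the boolean array IPi.

class IPTProcess:
    def __init__(self, pid, n):
        self.pid = pid
        self.vc = [0] * n          # R0: vector clock
        self.ip = [0] * n          # R0: boolean array tracking immediate predecessors

    def relevant_event(self):
        # R1: timestamp = identifiers of the immediate predecessors, then increment and reset
        ts = {(k, self.vc[k]) for k in range(len(self.vc)) if self.ip[k] == 1}
        self.vc[self.pid] += 1
        self.ip = [0] * len(self.ip)
        self.ip[self.pid] = 1
        return ts

    def send(self):
        # R2: piggyback the current values of VCi and IPi
        return {"vc": list(self.vc), "ip": list(self.ip)}

    def receive(self, msg):
        # R3: case analysis on each entry k
        for k in range(len(self.vc)):
            if self.vc[k] < msg["vc"][k]:
                self.vc[k] = msg["vc"][k]
                self.ip[k] = msg["ip"][k]
            elif self.vc[k] == msg["vc"][k]:
                self.ip[k] = min(self.ip[k], msg["ip"][k])
            # self.vc[k] > msg["vc"][k]: skip

# Small run: P1 produces a relevant event, sends to P2, P2 produces a relevant event
p1, p2 = IPTProcess(0, 2), IPTProcess(1, 2)
print(p1.relevant_event())   # set(): P1's first event has no immediate predecessor
p2.receive(p1.send())
print(p2.relevant_event())   # {(0, 1)}: P1's first event is the only immediate predecessor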
The proof of the following theorem directly follows from
Lemmas 1, 2 and 3.
Theorem 1. The protocol described in Section 3.3 solves
the IPT problem: for any relevant event e, the timestamp
e.TS contains the identifiers of all its immediate
predecessors and no other event identifier.
4. A GENERAL CONDITION
This section addresses a previously open problem,
namely, How to solve the IPT problem without requiring each
application message to piggyback a whole vector clock and
a whole boolean array?. First, a general condition that
characterizes which entries of vectors V Ci and IPi can be
omitted from the control information attached to a message
sent in the computation, is defined (Section 4.1). It is then
shown (Section 4.2) that this condition is both sufficient and
necessary.
However, this general condition cannot be locally
evaluated by a process that is about to send a message. Thus,
locally evaluable approximations of this general condition
must be defined. To each approximation corresponds a
protocol, implemented with additional local data structures. In
that sense, the general condition defines a family of IPT
protocols, that solve the previously open problem. This issue
is addressed in Section 5.
4.1 To Transmit or Not to Transmit Control
Information
Let us consider the previous IPT protocol (Section 3.3).
Rule R3 shows that a process Pj does not systematically
update each entry V Cj[k] each time it receives a message
m from a process Pi: there is no update of V Cj[k] when
V Cj[k] ≥ m.V C[k]. In such a case, the value m.V C[k] is
useless, and could be omitted from the control information
transmitted with m by Pi to Pj.
Similarly, some entries IPj[k] are not updated when a
message m from Pi is received by Pj. This occurs when
0 < V Cj[k] = m.V C[k] ∧ m.IP[k] = 1, or when V Cj [k] >
m.V C[k], or when m.V C[k] = 0 (in the latest case, as
m.IP[k] = IPi[k] = 0 then no update of IPj[k] is necessary).
Differently, some other entries are systematically reset to 0
(this occurs when 0 < V Cj [k] = m.V C[k] ∧ m.IP[k] = 0).
These observations lead to the definition of the condition
K(m, k) that characterizes which entries of vectors V Ci and
IPi can be omitted from the control information attached
to a message m sent by a process Pi to a process Pj:
Definition 5. K(m, k) ≡
(send(m).V Ci[k] = 0)
∨ (send(m).V Ci[k] < pred(receive(m)).V Cj[k])
∨ ((send(m).V Ci[k] = pred(receive(m)).V Cj[k])
∧ (send(m).IPi[k] = 1)).
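Written as a predicate, and assuming for illustration that all three values it needs are passed as arguments (even though, as discussed below, the sender cannot in general know the receiver-side value), K(m, k) reads as follows.

def K(sender_vc_k, sender_ip_k, receiver_pred_vc_k):
    """True iff the triple (k, VCi[k], IPi[k]) can be omitted from message m.

    sender_vc_k        = send(m).VCi[k]
    sender_ip_k        = send(m).IPi[k]
    receiver_pred_vc_k = pred(receive(m)).VCj[k]  (not locally known by the sender)
    """
    return (sender_vc_k == 0
            or sender_vc_k < receiver_pred_vc_k
            or (sender_vc_k == receiver_pred_vc_k and sender_ip_k == 1))

# e.g., an entry already known (and marked) on the receiver side can be omitted
print(K(sender_vc_k=3, sender_ip_k=1, receiver_pred_vc_k=3))   # True
print(K(sender_vc_k=3, sender_ip_k=0, receiver_pred_vc_k=3))   # False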
4.2 A Necessary and Sufficient Condition
We show here that the condition K(m, k) is both
necessary and sufficient to decide which triples of the form
(k, send(m).V Ci[k], send(m).IPi[k]) can be omitted in an
outgoing message m sent by Pi to Pj. A triple attached to
m will also be denoted (k, m.V C[k], m.IP[k]). Due to space
limitations, the proofs of Lemma 4 and Lemma 5 are given
in [1]. (The proof of Theorem 2 follows directly from these
lemmas.)
Lemma 4. (Sufficiency) If K(m, k) is true, then the triple
(k, m.V C[k], m.IP[k]) is useless with respect to the correct
management of IPj[k] and V Cj [k].
Lemma 5. (Necessity) If K(m, k) is false, then the triple
(k, m.V C[k], m.IP[k]) is necessary to ensure the correct
management of IPj[k] and V Cj [k].
Theorem 2. When a process Pi sends m to a process Pj,
the condition K(m, k) is both necessary and sufficient not to
transmit the triple (k, send(m).V Ci[k], send(m).IPi[k]).
5. A FAMILY OF IPT PROTOCOLS BASED
ON EVALUABLE CONDITIONS
It results from the previous theorem that, if Pi could
evaluate K(m, k) when it sends m to Pj, this would
allow us improve the previous IPT protocol in the following
way: in rule R2, the triple (k, V Ci[k], IPi[k]) is
transmitted with m only if ¬K(m, k). Moreover, rule R3 is
appropriately modified to consider only triples carried by m.
However, as previously mentioned, Pi cannot locally
evaluate K(m, k) when it is about to send m. More
precisely, when Pi sends m to Pj , Pi knows the exact values of
send(m).V Ci[k] and send(m).IPi[k] (they are the current
values of V Ci[k] and IPi[k]). But, as far as the value of
pred(receive(m)).V Cj[k] is concerned, two cases are
possible. Case (i): If pred(receive(m))
hb
→ send(m), then Pi can
know the value of pred(receive(m)).V Cj[k] and
consequently can evaluate K(m, k). Case (ii): If pred(receive(m))
and send(m) are concurrent, Pi cannot know the value of
pred(receive(m)).V Cj[k] and consequently cannot evaluate
K(m, k). Moreover, when it sends m to Pj , whatever the
case (i or ii) that actually occurs, Pi has no way to know
which case does occur. Hence the idea to define evaluable
approximations of the general condition. Let K'(m, k) be an approximation of K(m, k) that can be evaluated by a process Pi when it sends a message m. To be correct, the condition K' must ensure that, every time Pi should transmit a triple (k, VCi[k], IPi[k]) according to Theorem 2 (i.e., each time ¬K(m, k)), then Pi transmits this triple when it uses condition K'. Hence, the definition of a correct evaluable approximation:
Definition 6. A condition K', locally evaluable by a process when it sends a message m to another process, is correct if ∀(m, k) : ¬K(m, k) ⇒ ¬K'(m, k) or, equivalently, ∀(m, k) : K'(m, k) ⇒ K(m, k).
This definition means that a protocol evaluating K' to decide which triples must be attached to messages does not miss triples whose transmission is required by Theorem 2.
Let us consider the constant condition (denoted K1),
that is always false, i.e., ∀(m, k) : K1(m, k) = false. This
trivially correct approximation of K actually corresponds
to the particular IPT protocol described in Section 3 (in
which each message carries a whole vector clock and a
whole boolean vector). The next section presents a better
approximation of K (denoted K2).
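Definition 6 lends itself to a brute-force sanity check over small value ranges; the sketch below is our own (it reuses the K predicate sketched in Section 4.1) and verifies that the constant condition K1 is indeed a correct approximation.

```python
def K1(send_vc_i_k: int, send_ip_i_k: int, pred_recv_vc_j_k: int) -> bool:
    # Constant condition: always false, i.e. every triple is transmitted.
    return False

def is_correct(approx, bound: int = 5) -> bool:
    # Definition 6: approx is correct iff approx(...) implies K(...) for all arguments.
    return all(not approx(v, ip, w) or K(v, ip, w)
               for v in range(bound) for w in range(bound) for ip in (0, 1))

assert is_correct(K1)   # trivially correct: it never omits a triple
```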
5.1 A Boolean Matrix-Based Evaluable
Condition
Condition K2 is based on the observation that condition K is composed of sub-conditions, some of which can be locally evaluated while the others cannot.
[Figure 2: The Evaluable Condition K2. Pi sends m to Pj with VCi[k] = x and IPi[k] = 1 at send(m), while VCj[k] ≥ x holds at receive(m).]
More precisely, K ≡ a ∨ α ∨ (β ∧ b), where a ≡ (send(m).VCi[k] = 0) and b ≡ (send(m).IPi[k] = 1) are locally evaluable, whereas α ≡ (send(m).VCi[k] < pred(receive(m)).VCj[k]) and β ≡ (send(m).VCi[k] = pred(receive(m)).VCj[k]) are not. But, from easy boolean calculus, a ∨ ((α ∨ β) ∧ b) ⇒ a ∨ α ∨ (β ∧ b) ≡ K. This leads to the condition K' ≡ a ∨ (γ ∧ b), where γ = α ∨ β ≡ (send(m).VCi[k] ≤ pred(receive(m)).VCj[k]), i.e., K' ≡ (send(m).VCi[k] ≤ pred(receive(m)).VCj[k] ∧ send(m).IPi[k] = 1) ∨ (send(m).VCi[k] = 0).
So, Pi needs to approximate the predicate send(m).VCi[k] ≤ pred(receive(m)).VCj[k]. To be correct, this approximation has to be a locally evaluable predicate ci(j, k) such that, when Pi is about to send a message m to Pj, ci(j, k) ⇒ (send(m).VCi[k] ≤ pred(receive(m)).VCj[k]). Informally, this means that, when ci(j, k) holds, the local context of Pi allows it to deduce that the receipt of m by Pj will not lead to an update of VCj[k] (Pj knows at least as much as Pi about Pk). Hence, the concrete condition K2 is the following: K2 ≡ (send(m).VCi[k] = 0) ∨ (ci(j, k) ∧ send(m).IPi[k] = 1).
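Unlike K, the condition K2 only involves values that Pi holds locally; a one-line sketch under our own naming, where c_i_j_k stands for the local predicate ci(j, k) whose implementation is discussed next:

```python
def K2(vc_i_k: int, ip_i_k: int, c_i_j_k: bool) -> bool:
    """K2(m, k), evaluable by Pi alone: c_i_j_k is the local predicate ci(j, k)."""
    return vc_i_k == 0 or (c_i_j_k and ip_i_k == 1)
```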
Let us now examine the design of such a predicate (denoted ci). First, the case j = i can be ignored, since it is assumed (Section 2.1) that a process never sends a message to itself. Second, in the case j = k, the relation send(m).VCi[j] ≤ pred(receive(m)).VCj[j] is always true, because the receipt of m by Pj cannot update VCj[j]. Thus, ∀j ≠ i : ci(j, j) must be true. Now, let us consider the case where j ≠ i and j ≠ k (Figure 2). Suppose that there exists an event e = receive(m') with e < send(m), m' sent by Pj and piggybacking the triple (k, m'.VC[k], m'.IP[k]), with m'.VC[k] ≥ VCi[k] (hence m'.VC[k] = receive(m').VCi[k]). As VCj[k] cannot decrease, this means that, as long as VCi[k] does not increase, for every message m sent by Pi to Pj we have the following: send(m).VCi[k] = receive(m').VCi[k] = send(m').VCj[k] ≤ receive(m).VCj[k], i.e., ci(j, k) must remain true. In other words, once ci(j, k) is true, the only event of Pi that could reset it to false is either the receipt of a message that increases VCi[k] or, if k = i, the occurrence of a relevant event (that increases VCi[i]). Similarly, once ci(j, k) is false, the only event that can set it to true is the receipt of a message m' from Pj, piggybacking the triple (k, m'.VC[k], m'.IP[k]) with m'.VC[k] ≥ VCi[k].
In order to implement the local predicates ci(j, k), each process Pi is equipped with a boolean matrix Mi (as in [11]) such that Mi[j, k] = 1 ⇔ ci(j, k). It follows from the previous discussion that this matrix is managed according to the following rules (note that its i-th line is not significant (case j = i), and that its diagonal is always equal to 1):
M0 Initialization: ∀ (j, k) : Mi[j, k] is initialized to 1.
M1 Each time it produces a relevant event e: Pi resets the i-th column of its matrix (see footnote 4): ∀j ≠ i : Mi[j, i] := 0.
M2 When Pi sends a message: no update of Mi occurs.
M3 When it receives a message m from Pj , Pi executes the
following updates:
∀ k ∈ [1..n] : case
VCi[k] < m.VC[k] then ∀ℓ ≠ i, j, k : Mi[ℓ, k] := 0; Mi[j, k] := 1
VCi[k] = m.VC[k] then Mi[j, k] := 1
VCi[k] > m.VC[k] then skip
endcase
The following lemma results from rules M0-M3. The
theorem that follows shows that condition K2(m, k) is correct.
(Both are proved in [1].)
Lemma 6. ∀i, ∀m sent by Pi to Pj, ∀k, we have: send(m).Mi[j, k] = 1 ⇒ send(m).VCi[k] ≤ pred(receive(m)).VCj[k].
Theorem 3. Let m be a message sent by Pi to Pj. Let K2(m, k) ≡ ((send(m).Mi[j, k] = 1) ∧ (send(m).IPi[k] = 1)) ∨ (send(m).VCi[k] = 0). We have: K2(m, k) ⇒ K(m, k).
5.2 Resulting IPT Protocol
The complete text of the IPT protocol based on the
previous discussion follows.
RM0 Initialization:
- Both V Ci[1..n] and IPi[1..n] are set to [0, . . . , 0],
and ∀ (j, k) : Mi[j, k] is set to 1.
RM1 Each time it produces a relevant event e:
- Pi associates with e the timestamp e.TS defined as follows: e.TS = {(k, VCi[k]) | IPi[k] = 1},
- Pi increments its vector clock entry VCi[i] (namely, it executes VCi[i] := VCi[i] + 1),
- Pi resets IPi: ∀ℓ ≠ i : IPi[ℓ] := 0; IPi[i] := 1,
- Pi resets the i-th column of its boolean matrix: ∀j ≠ i : Mi[j, i] := 0.
RM2 When Pi sends a message m to Pj, it attaches to m the
set of triples (each made up of a process id, an integer
and a boolean): {(k, V Ci[k], IPi[k]) | (Mi[j, k] = 0 ∨
IPi[k] = 0) ∧ (V Ci[k] > 0)}.
RM3 When Pi receives a message m from Pj, it executes the following updates:
∀ (k, m.VC[k], m.IP[k]) carried by m:
case
VCi[k] < m.VC[k] then VCi[k] := m.VC[k]; IPi[k] := m.IP[k];
∀ℓ ≠ i, j, k : Mi[ℓ, k] := 0; Mi[j, k] := 1
VCi[k] = m.VC[k] then IPi[k] := min(IPi[k], m.IP[k]); Mi[j, k] := 1
VCi[k] > m.VC[k] then skip
endcase
Footnote 4: Actually, the value of this column remains constant after its first update. In fact, ∀j, Mi[j, i] can be set to 1 only upon the receipt of a message from Pj carrying the value VCj[i] (see R3). But, as Mj[i, i] = 1, Pj does not send VCj[i] to Pi. So, it is possible to improve the protocol by executing this reset of the column Mi[∗, i] only when Pi produces its first relevant event.
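The rules RM0-RM3 translate almost directly into code. The following single-address-space Python sketch is ours (class and method names are not from the paper); messages are plain dictionaries and no real network or transmission delay is modelled.

```python
class IPTProcess:
    """Sketch of rules RM0-RM3 for process Pi among n processes (ids 0..n-1)."""

    def __init__(self, i: int, n: int):                       # RM0: initialization
        self.i, self.n = i, n
        self.vc = [0] * n                                      # vector clock VCi
        self.ip = [0] * n                                      # boolean vector IPi
        self.M = [[1] * n for _ in range(n)]                   # boolean matrix Mi

    def relevant_event(self):                                  # RM1
        ts = {(k, self.vc[k]) for k in range(self.n) if self.ip[k] == 1}
        self.vc[self.i] += 1
        self.ip = [0] * self.n
        self.ip[self.i] = 1
        for j in range(self.n):
            if j != self.i:
                self.M[j][self.i] = 0
        return ts                                              # timestamp = immediate predecessors

    def send(self, j: int) -> dict:                            # RM2
        triples = [(k, self.vc[k], self.ip[k])
                   for k in range(self.n)
                   if (self.M[j][k] == 0 or self.ip[k] == 0) and self.vc[k] > 0]
        return {"from": self.i, "triples": triples}

    def receive(self, m: dict) -> None:                        # RM3
        j = m["from"]
        for k, vc_k, ip_k in m["triples"]:
            if self.vc[k] < vc_k:
                self.vc[k], self.ip[k] = vc_k, ip_k
                for l in range(self.n):
                    if l not in (self.i, j, k):
                        self.M[l][k] = 0
                self.M[j][k] = 1
            elif self.vc[k] == vc_k:
                self.ip[k] = min(self.ip[k], ip_k)
                self.M[j][k] = 1
            # else: skip (the triple carries older information)
```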
5.3 A Tradeoff
The condition K2(m, k) shows that a triple need not be transmitted when (Mi[j, k] = 1 ∧ IPi[k] = 1) ∨ (VCi[k] = 0). Let us first observe that the management of IPi[k] is governed by the application program. More precisely, the IPT protocol does not define which events are relevant; it only has to guarantee a correct management of IPi[k]. In contrast, the matrix Mi does not belong to the problem specification: it is an auxiliary variable of the IPT protocol, which manages it so as to satisfy the following implication when Pi sends m to Pj: (Mi[j, k] = 1) ⇒ (pred(receive(m)).VCj[k] ≥ send(m).VCi[k]). The fact that the management of Mi is governed by the protocol and not by the application program leaves open the possibility to design a protocol where more entries of Mi are equal to 1. This can make the condition K2(m, k) more often satisfied (see footnote 5) and can consequently allow the protocol to transmit fewer triples.
We show here that it is possible to transmit fewer triples at the price of transmitting a few additional boolean vectors. The previous IPT matrix-based protocol (Section 5.2) is modified in the following way. The rules RM2 and RM3 are replaced with the modified rules RM2' and RM3' (Mi[∗, k] denotes the k-th column of Mi).
RM2' When Pi sends a message m to Pj, it attaches to m the following set of 4-uples (each made up of a process id, an integer, a boolean and a boolean vector): {(k, VCi[k], IPi[k], Mi[∗, k]) | (Mi[j, k] = 0 ∨ IPi[k] = 0) ∧ VCi[k] > 0}.
RM3' When Pi receives a message m from Pj, it executes the following updates:
∀ (k, m.VC[k], m.IP[k], m.M[1..n, k]) carried by m:
case
VCi[k] < m.VC[k] then VCi[k] := m.VC[k]; IPi[k] := m.IP[k];
∀ℓ ≠ i : Mi[ℓ, k] := m.M[ℓ, k]
VCi[k] = m.VC[k] then IPi[k] := min(IPi[k], m.IP[k]);
∀ℓ ≠ i : Mi[ℓ, k] := max(Mi[ℓ, k], m.M[ℓ, k])
VCi[k] > m.VC[k] then skip
endcase
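Relative to the IPTProcess sketch of Section 5.2 above, only the send and receive steps change; a possible rendering (again our own), subclassing that sketch and piggybacking the k-th column of Mi as a plain list:

```python
class IPTProcessPrime(IPTProcess):
    def send(self, j: int) -> dict:                            # RM2'
        quads = [(k, self.vc[k], self.ip[k], [self.M[l][k] for l in range(self.n)])
                 for k in range(self.n)
                 if (self.M[j][k] == 0 or self.ip[k] == 0) and self.vc[k] > 0]
        return {"from": self.i, "quads": quads}

    def receive(self, m: dict) -> None:                        # RM3'
        for k, vc_k, ip_k, col_k in m["quads"]:
            if self.vc[k] < vc_k:
                self.vc[k], self.ip[k] = vc_k, ip_k
                for l in range(self.n):
                    if l != self.i:
                        self.M[l][k] = col_k[l]
            elif self.vc[k] == vc_k:
                self.ip[k] = min(self.ip[k], ip_k)
                for l in range(self.n):
                    if l != self.i:
                        self.M[l][k] = max(self.M[l][k], col_k[l])
            # else: skip
```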
Similarly to the proofs described in [1], it is possible to
prove that the previous protocol still satisfies the
property proved in Lemma 6, namely, ∀i, ∀m sent by Pi to Pj,
∀k we have (send(m).Mi[j, k] = 1) ⇒ (send(m).V Ci[k] ≤
pred(receive(m)).V Cj[k]).
Footnote 5: Let us consider the previously described protocol (Section 5.2) where the value of each matrix entry Mi[j, k] is always equal to 0. The reader can easily verify that this setting correctly implements the matrix. Moreover, K2(m, k) is then always false: it actually coincides with K1(m, k) (which corresponds to the case where whole vectors have to be transmitted with each message).
Intuitively, the fact that some columns of the matrices M are attached to application messages allows a transitive transmission of information. More precisely, the relevant history of Pk known by Pj is transmitted to a process Pi via a causal sequence of messages from Pj to Pi. In contrast, the protocol described in Section 5.2 uses only a direct transmission of this information. In fact, as explained in Section 5.1, the predicate c (locally implemented by the matrix M) was based on the existence of a message m' sent by Pj to Pi, piggybacking the triple (k, m'.VC[k], m'.IP[k]), with m'.VC[k] ≥ VCi[k], i.e., on the existence of a direct transmission of information (by the message m').
The resulting IPT protocol (defined by the rules RM0, RM1, RM2' and RM3') uses the same condition K2(m, k) as the previous one. It exhibits an interesting tradeoff between the number of triples (k, VCi[k], IPi[k]) whose transmission is saved and the number of boolean vectors that have to be additionally piggybacked. It is interesting to notice that the size of this additional information is bounded, while each triple includes a non-bounded integer (namely, a vector clock value).
6. EXPERIMENTAL STUDY
This section compares the behaviors of the previous
protocols. This comparison is done with a simulation study.
IPT1 denotes the protocol presented in Section 3.3 that
uses the condition K1(m, k) (which is always equal to false).
IPT2 denotes the protocol presented in Section 5.2 that uses
the condition K2(m, k) where messages carry triples.
Finally, IPT3 denotes the protocol presented in Section 5.3 that
also uses the condition K2(m, k) but where messages carry
additional boolean vectors.
This section does not aim to provide an in-depth
simulation study of the protocols, but rather presents a general
view on the protocol behaviors. To this end, it compares
IPT2 and IPT3 with regard to IPT1. More precisely, for
IPT2 the aim was to evaluate the gain in terms of triples
(k, V Ci[k], IPi[k]) not transmitted with respect to the
systematic transmission of whole vectors as done in IPT1. For
IPT3, the aim was to evaluate the tradeoff between the
additional boolean vectors transmitted and the number of saved
triples. The behavior of each protocol was analyzed on a set
of programs.
6.1 Simulation Parameters
The simulator provides several parameters for tuning both the communication and the process features. These parameters allow one to set the number of processes of the simulated computation, to vary the rate of communication (send/receive) events, and to alter the time duration between two consecutive relevant events. Moreover, to be independent of a particular topology of the underlying network, a fully connected network is assumed. Internal events have not been considered.
Since the presence of the triples (k, V Ci[k], IPi[k])
piggybacked by a message strongly depends on the frequency at
which relevant events are produced by a process, different
time distributions between two consecutive relevant events
have been implemented (e.g., normal, uniform, and Poisson
distributions). The senders of messages are chosen at random. To exhibit particular configurations of a distributed computation, a given scenario can be provided to the simulator. Message transmission delays follow a standard normal distribution. Finally, the last parameter of the simulator is the number of send events occurring during a simulation.
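A toy counterpart of such a simulation can be built directly on the IPTProcess sketch of Section 5.2; everything below (parameter names, the uniform relevant-event ratio, the way the gain is counted against IPT1) is our own illustration, with immediate message delivery instead of simulated transmission delays.

```python
import random

def simulate(n: int = 10, sends: int = 10_000, relevant_ratio: float = 0.1, seed: int = 0) -> int:
    random.seed(seed)
    procs = [IPTProcess(i, n) for i in range(n)]
    piggybacked = 0
    for _ in range(sends):
        i = random.randrange(n)
        j = random.choice([x for x in range(n) if x != i])
        if random.random() < relevant_ratio:      # roughly one relevant event per 10 sends
            procs[i].relevant_event()
        m = procs[i].send(j)
        piggybacked += len(m["triples"])
        procs[j].receive(m)
    return sends * n - piggybacked                 # gain in triples w.r.t. IPT1 (n per message)

print(simulate())
```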
6.2 Parameter Settings
To compare the behavior of the three IPT protocols, we performed a large number of simulations using different parameter settings. We set the number of processes participating in a distributed computation to 10. The number of communication events during the simulation has been set to 10 000. The parameter λ of the Poisson time distribution (λ is the average number of relevant events in a given time interval) has been set so that the relevant events are generated at the beginning of the simulation. With the uniform time distribution, a relevant event is generated (on average) every 10 communication events. The location parameter of the standard normal time distribution has been set so that the occurrence of relevant events is shifted to around the third part of the simulation experiment.
As noted previously, the simulator can be fed with a given scenario. This allows analyzing the worst-case scenarios for IPT2 and IPT3. These scenarios correspond to the case where the relevant events are generated at the maximal frequency (i.e., each time a process sends or receives a message, it produces a relevant event).
Finally, the three IPT protocols are analyzed with the same simulation parameters.
6.3 Simulation Results
The results are displayed on the Figures 3.a-3.d. These
figures plot the gain of the protocols in terms of the number
of triples that are not transmitted (y axis) with respect to
the number of communication events (x axis). From these
figures, we observe that, whatever the time distribution
followed by the relevant events, both IPT2 and IPT3 exhibit
a behavior better than IPT1 (i.e., the total number of
piggybacked triples is lower in IPT2 and IPT3 than in IPT1),
even in the worst case (see Figure 3.d).
Let us consider the worst scenario. In that case, the gain is obtained at the very beginning of the simulation and lasts as long as there exists a process Pj for which ∀k : VCj[k] = 0. In that case, the condition ∀k : K(m, k) is satisfied. As soon as ∃k : VCj[k] ≠ 0, both IPT2 and IPT3 behave as IPT1 (the shape of the curve becomes flat) since the condition K(m, k) is no longer satisfied.
Figure 3.a shows that during the first events of the simulation, the slopes of the IPT2 and IPT3 curves are steep. The same occurs in Figure 3.d (which depicts the worst case scenario). Then the slope of these curves decreases and remains constant until the end of the simulation. In fact, as soon as VCj[k] becomes greater than 0, the condition ¬K(m, k) reduces to (Mi[j, k] = 0 ∨ IPi[k] = 0).
Figure 3.b displays an interesting feature. It considers λ = 100. As the relevant events are taken only during the very beginning of the simulation, this figure exhibits a very steep initial slope, like the other figures. The figure shows that, as soon as no more relevant events are taken, on average 45% of the triples are not piggybacked by the messages. This shows the importance of matrix Mi. Furthermore, IPT3 benefits from transmitting additional boolean vectors to save triple transmissions. Figures 3.a-3.c show that the average gain of IPT3 with respect to IPT2 is close to 10%.
Finally, Figure 3.c underlines even more the importance
of matrix Mi. When very few relevant events are taken,
IPT2 and IPT3 turn out to be very efficient. Indeed, this
figure shows that, very quickly, the gain in number of triples
that are saved is very high (actually, 92% of the triples are
saved).
6.4 Lessons Learned from the Simulation
Of course, all simulation results are consistent with the
theoretical results. IPT3 is always better than or equal to
IPT2, and IPT2 is always better than IPT1. The simulation
results teach us more:
• The first lesson we have learnt concerns the matrix Mi.
Its use is quite significant but mainly depends on the time
distribution followed by the relevant events. On the one
hand, when observing Figure 3.b, where a large number of relevant events are taken in a very short time, IPT2 can save up to 45% of the triples. However, we could have expected an even larger gain for IPT2, since the boolean vector IP tends to stabilize to [1, ..., 1] when no relevant events are taken. In fact, as discussed in Section 5.3, the management of matrix Mi within IPT2 does not allow a transitive transmission of information but only a direct transmission. This explains why some columns of Mi may remain equal to 0 while they could potentially be equal to 1. In contrast, as IPT3 benefits from transmitting additional boolean vectors (providing a transitive transmission of information), it reaches a gain of 50%.
On the other hand, when very few relevant events are taken over a large period of time (see Figure 3.c), IPT2 and IPT3 turn out to be very efficient, since the transmission of up to 92% of the triples is saved. This comes from the fact that the boolean vector IPi very quickly tends to stabilize to [1, ..., 1] and that matrix Mi contains very few 0 entries, since very few relevant events have been taken. Thus, a direct transmission of the information is sufficient to quickly obtain matrices Mi whose entries are all equal to 1.
• The second lesson concerns IPT3, more precisely, the tradeoff between the additional piggybacking of boolean vectors and the number of triples whose transmission is saved. With n = 10, adding 10 booleans to a triple does not substantially increase its size. Figures 3.a-3.c exhibit the number of triples whose transmission is saved: the average gain (in number of triples) of IPT3 with respect to IPT2 is about 10%.
7. CONCLUSION
This paper has addressed an important causality-related
distributed computing problem, namely, the Immediate
Predecessors Tracking problem. It has presented a family of
protocols that provide each relevant event with a timestamp
that exactly identifies its immediate predecessors. The
family is defined by a general condition that allows application
messages to piggyback control information whose size can
be smaller than n (the number of processes). In that sense,
this family defines message size-efficient IPT protocols.
According to the way the general condition is implemented,
different IPT protocols can be obtained. Three of them have
been described and analyzed with simulation experiments.
Interestingly, it has also been shown that the efficiency of
the protocols (measured in terms of the size of the control
information that is not piggybacked by an application
message) depends on the pattern defined by the communication
events and the relevant events.
Last but not least, it is interesting to note that if one is not
interested in tracking the immediate predecessor events, the
protocols presented in the paper can be simplified by
suppressing the boolean vectors IPi (but keeping the boolean
matrices Mi). The resulting protocols, that implement a
vector clock system, are particularly efficient as far as the
size of the timestamp carried by each message is concerned.
Interestingly, this efficiency is not obtained at the price of
additional assumptions (such as fifo channels).
8. REFERENCES
[1] Anceaume E., Hélary J.-M. and Raynal M., Tracking Immediate Predecessors in Distributed Computations. Res. Report #1344, IRISA, Univ. Rennes (France), 2001.
[2] Baldoni R., Prakash R., Raynal M. and Singhal M.,
Efficient ∆-Causal Broadcasting. Journal of Computer
Systems Science and Engineering, 13(5):263-270, 1998.
[3] Chandy K.M. and Lamport L., Distributed Snapshots:
Determining Global States of Distributed Systems, ACM
Transactions on Computer Systems, 3(1):63-75, 1985.
[4] Diehl C., Jard C. and Rampon J.-X., Reachability Analysis
of Distributed Executions, Proc. TAPSOFT'93, Springer-Verlag LNCS 668, pp. 629-643, 1993.
[5] Fidge C.J., Timestamps in Message-Passing Systems that
Preserve Partial Ordering, Proc. 11th Australian
Computing Conference, pp. 56-66, 1988.
[6] Fromentin E., Jard C., Jourdan G.-V. and Raynal M.,
On-the-fly Analysis of Distributed Computations, IPL,
54:267-274, 1995.
[7] Fromentin E. and Raynal M., Shared Global States in
Distributed Computations, JCSS, 55(3):522-528, 1997.
[8] Fromentin E., Raynal M., Garg V.K. and Tomlinson A.,
On-the-Fly Testing of Regular Patterns in Distributed
Computations. Proc. ICPP'94, Vol. 2:73-76, 1994.
[9] Garg V.K., Principles of Distributed Systems, Kluwer
Academic Press, 274 pages, 1996.
[10] Hélary J.-M., Mostéfaoui A., Netzer R.H.B. and Raynal M., Communication-Based Prevention of Useless Checkpoints in Distributed Computations. Distributed Computing, 13(1):29-43, 2000.
[11] Hélary J.-M., Melideo G. and Raynal M., Tracking Causality in Distributed Systems: a Suite of Efficient Protocols. Proc. SIROCCO'00, Carleton University Press, pp. 181-195, L'Aquila (Italy), June 2000.
[12] Hélary J.-M., Netzer R. and Raynal M., Consistency Issues in Distributed Checkpoints. IEEE TSE, 25(4):274-281, 1999.
[13] Hurfin M., Mizuno M., Raynal M. and Singhal M., Efficient
Distributed Detection of Conjunction of Local Predicates
in Asynch Computations. IEEE TSE, 24(8):664-677, 1998.
[14] Lamport L., Time, Clocks and the Ordering of Events in a
Distributed System. Comm. ACM, 21(7):558-565, 1978.
[15] Marzullo K. and Sabel L., Efficient Detection of a Class of
Stable Properties. Distributed Computing, 8(2):81-91, 1994.
[16] Mattern F., Virtual Time and Global States of Distributed
Systems. Proc. Int. Conf. Parallel and Distributed
Algorithms, (Cosnard, Quinton, Raynal, Robert Eds),
North-Holland, pp. 215-226, 1988.
[17] Prakash R., Raynal M. and Singhal M., An Adaptive
Causal Ordering Algorithm Suited to Mobile Computing
Environment. JPDC, 41:190-204, 1997.
[18] Raynal M. and Singhal S., Logical Time: Capturing
Causality in Distributed Systems. IEEE Computer,
29(2):49-57, 1996.
[19] Singhal M. and Kshemkalyani A., An Efficient
Implementation of Vector Clocks. IPL, 43:47-52, 1992.
[20] Wang Y.M., Consistent Global Checkpoints That Contain
a Given Set of Local Checkpoints. IEEE TOC,
46(4):456-468, 1997.
[Figure 3: Experimental results. Each plot shows the gain in number of triples (y-axis) versus the number of communication events (x-axis) for IPT1, IPT2 and IPT3: (a) relevant events follow a uniform distribution (ratio = 1/10); (b) relevant events follow a Poisson distribution (λ = 100); (c) relevant events follow a normal distribution; (d) worst case, where each process takes a relevant event and broadcasts to all processes.]
219 | immediate predecessor tracking;relevant event;causality track;transitive reduction;ipt protocol;timestamp;message-pass;hasse diagram;piggybacking;immediate predecessor;tracking causality;common global memory;message transfer delay;checkpointing problem;control information;vector timestamp;distributed computation;vector clock;channel ordering property |
train_C-78 | An Architectural Framework and a Middleware for Cooperating Smart Components | In a future networked physical world, a myriad of smart sensors and actuators assess and control aspects of their environments and autonomously act in response to it. Examples range in telematics, traffic management, team robotics or home automation to name a few. To a large extent, such systems operate proactively and independently of direct human control driven by the perception of the environment and the ability to organize respective computations dynamically. The challenging characteristics of these applications include sentience and autonomy of components, issues of responsiveness and safety criticality, geographical dispersion, mobility and evolution. A crucial design decision is the choice of the appropriate abstractions and interaction mechanisms. Looking to the basic building blocks of such systems we may find components which comprise mechanical components, hardware and software and a network interface, thus these components have different characteristics compared to pure software components. They are able to spontaneously disseminate information in response to events observed in the physical environment or to events received from other component via the network interface. Larger autonomous components may be composed recursively from these building blocks. The paper describes an architectural framework and a middleware supporting a component-based system and an integrated view on events-based communication comprising the real world events and the events generated in the system. It starts by an outline of the component-based system construction. The generic event architecture GEAR is introduced which describes the event-based interaction between the components via a generic event layer. The generic event layer hides the different communication channels including the interactions through the environment. An appropriate middleware is presented which reflects these needs and allows to specify events which have quality attributes to express temporal constraints. This is complemented by the notion of event channels which are abstractions of the underlying network and allow to enforce quality attributes. They are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination. | 1. INTRODUCTION
In recent years we have seen the continuous improvement
of technologies that are relevant for the construction of
distributed embedded systems, including trustworthy visual,
auditory, and location sensing [11], communication and
processing. We believe that in a future networked physical
world a new class of applications will emerge, composed of
a myriad of smart sensors and actuators to assess and
control aspects of their environments and autonomously act in
response to it. The anticipated challenging characteristics
of these applications include autonomy, responsiveness and
safety criticality, large scale, geographical dispersion,
mobility and evolution.
In order to deal with these challenges, it is of
fundamental importance to use adequate high-level models,
abstractions and interaction paradigms. Unfortunately, when
facing the specific characteristics of the target systems, the
shortcomings of current architectures and middleware
interaction paradigms become apparent. Looking to the basic
building blocks of such systems we may find components
which comprise mechanical parts, hardware, software and
a network interface. However, classical event/object
models are usually software oriented and, as such, when
transported to a real-time, embedded systems setting, their
harmony is cluttered by the conflict between, on the one side,
send/receive of software events (message-based), and on
the other side, input/output of hardware or real-world
events, register-based. In terms of interaction paradigms,
and although the use of event-based models appears to be
a convenient solution [10, 22], these often lack the
appropriate support for non-functional requirements like reliability,
timeliness or security.
This paper describes an architectural framework and a
middleware, supporting a component-based system and an
integrated view on event-based communication comprising
the real world events and the events generated in the system.
When choosing the appropriate interaction paradigm it
is of fundamental importance to address the challenging
issues of the envisaged sentient applications. Unlike classical
approaches that confine the possible interactions to the
application boundaries, i.e. to its components, we consider
that the environment surrounding the application also plays
a relevant role in this respect. Therefore, the paper starts by
clarifying several issues concerning our view of the system,
about the interactions that may take place and about the
information flows. This view is complemented by
providing an outline of the component-based system construction
and, in particular, by showing that it is possible to
compose larger applications from basic components, following
an hierarchical composition approach.
This provides the necessary background to introduce the
Generic-Events Architecture (GEAR), which describes
the event-based interaction between the components via a
generic event layer while allowing the seamless integration
of physical and computer information flows. In fact, the
generic event layer hides the different communication
channels, including the interactions through the environment.
Additionally, the event layer abstraction is also adequate
for the proper handling of the non-functional requirements,
namely reliability and timeliness, which are particularly
stringent in real-time settings. The paper devotes particular
attention to this issue by discussing the temporal aspects of
interactions and the needs for predictability.
An appropriate middleware is presented which reflects
these needs and allows to specify events which have quality
attributes to express temporal constraints. This is
complemented by the notion of Event Channels (EC), which are
abstractions of the underlying network while being abstracted
by the event layer. In fact, event channels play a
fundamental role in securing the functional and non-functional
(e.g. reliability and timeliness) properties of the envisaged
applications, that is, in allowing the enforcement of quality
attributes. They are established prior to interaction to
reserve the needed computational and network resources for
highly predictable event dissemination.
The paper is organized as follows. In Section 3 we
introduce the fundamental notions and abstractions that we
adopt in this work to describe the interactions taking place
in the system. Then, in Section 4, we describe the
componentbased approach that allows composition of objects. GEAR
is then described in Section 5 and Section 6 focuses on
temporal aspects of the interactions. Section 7 describes the
COSMIC middleware, which may be used to specify the
interaction between sentient objects. A simple example to
highlight the ideas presented in the paper appears in
Section 8 and Section 9 concludes the paper.
2. RELATED WORK
Our work considers a wired physical world in which a
very large number of autonomous components cooperate.
It is inspired by many research efforts in very different
areas. Event-based systems in general have been introduced to
meet the requirements of applications in which entities
spontaneously generate information and disseminate it [1, 25,
22]. Intended for large systems and requiring quite complex
infrastructures, these event systems do not consider
stringent quality aspects like timeliness and dependability issues.
Secondly, they are not created to support inter-operability
between tiny smart devices with substantial resource
constraints.
In [10] a real-time event system for CORBA has been
introduced. The events are routed via a central event server
which provides scheduling functions to support the real-time
requirements. Such a central component is not available
in an infrastructure envisaged in our system architecture
and the developed middleware TAO (The Ace Orb) is quite
complex and unsuitable to be directly integrated in smart
devices.
There are efforts to implement CORBA for control
networks, tailored to connect sensor and actuator components [15,
19]. They are targeted for the CAN-Bus [9], a popular
network developed for the automotive industry. However, in
these approaches the support for timeliness or
dependability issues does not exist or is only very limited.
A new scheme to integrate smart devices in a CORBA
environment is proposed in [17] and has lead to the proposal of
a standard by the Object Management Group (OMG) [26].
Smart transducers are organized in clusters that are
connected to a CORBA system by a gateway.
The clusters form isolated subnetworks. A special master
node enforces the temporal properties in the cluster subnet.
A CORBA gateway allows to access sensor data and write
actuator data by means of an interface file system (IFS).
The basic structure is similar to the WAN-of-CANs
structure which has been introduced in the CORTEX project [4].
Islands of tight control may be realized by a control network
and cooperate via wired or wireless networks covering a large
number of these subnetworks. However, in contrast to the
event channel model introduced in this paper, all
communication inside a cluster relies on a single technical solution of
a synchronous communication channel. Secondly, although
the temporal behaviour of a single cluster is rigorously
defined, no model to specify temporal properties for
cluster-to-CORBA or cluster-to-cluster interactions is provided.
3. INFORMATION FLOW AND
INTERACTION MODEL
In this paper we consider a component-based system model
that incorporates previous work developed in the context of
the IST CORTEX project [5]. As mentioned above, a
fundamental idea underlying the approach is that applications can
be composed of a large number of smart components that
are able to sense their surrounding environment and
interact with it. These components are referred to as sentient
objects, a metaphor elaborated in CORTEX and inspired
on the generic concept of sentient computing introduced in
[12]. Sentient objects accept input events from a variety of
different sources (including sensors, but not constrained to
that), process them, and produce output events, whereby
they actuate on the environment and/or interact with other
objects. Therefore, the following kinds of interactions can
take place in the system:
Environment-to-object interactions: correspond to a
flow of information from the environment to
application objects, reporting about the state of the former,
and/or notifying about events taking place therein.
Object-to-object interactions: correspond to a flow of
information among sentient objects, serving two
purposes. The first is related with complementing the
assessment of each individual object about the state
of the surrounding space. The second is related to
collaboration, in which the object tries to influence other
objects into contributing to a common goal, or into
reacting to an unexpected situation.
Object-to-environment interactions: correspond to a
flow of information from an object to the environment,
with the purpose of forcing a change in the state of the
latter.
Before continuing, we need to clarify a few issues with
respect to these possible forms of interaction. We consider
that the environment can be a producer or consumer of
information while interacting with sentient objects. The
environment is the real (physical) world surrounding an
object, not necessarily close to the object or limited to certain
boundaries. Quite clearly, the information produced by the
environment corresponds to the physical representation of
real-time entities, of which typical examples include
temperature, distance or the state of a door. On the other hand,
actuation on the environment implies the manipulation of
these real-time entities, like increasing the temperature
(applying more heat), changing the distance (applying some
movement) or changing the state of the door (closing or
opening it). The required transformations between system
representations of these real-time entities and their physical
representations is accomplished, generically, by sensors and
actuators. We further consider that there may exist dumb
sensors and actuators, which interact with the objects by
disseminating or capturing raw transducer information, and
smart sensors and actuators, with enhanced processing
capabilities, capable of speaking some more elaborate event
dialect (see Sections 5 and 6.1). Interaction with the
environment is therefore done through sensors and actuators,
which may, or may not be part of sentient objects, as
discussed in Section 4.2.
State or state changes in the environment are considered
as events, captured by sensors (in the environment or within
sentient objects) and further disseminated to other
potentially interested sentient objects in the system. In
consequence, it is quite natural to base the communication and
interaction among sentient objects and with the environment
on an event-based communication model. Moreover, typical
properties of event-based models, such as anonymous and
non-blocking communication, are highly desirable in systems
where sentient objects can be mobile and where interactions
are naturally very dynamic.
A distinguishing aspect of our work from many of the
existing approaches, is that we consider that sentient objects
may indirectly communicate with each other through the
environment, when they act on it. Thus the environment
constitutes an interaction and communication channel and
is in the control and awareness loop of the objects. In other
words, when a sentient object actuates on the environment it
will be able to observe the state changes in the environment
by means of events captured by the sensors. Clearly, other
objects might as well capture the same events, thus
establishing the above-mentioned indirect communication path.
In systems that involve interactions with the environment
it is very important to consider the possibility of
communication through the environment. It has been shown that
the hidden channels developing through the latter (e.g.,
feedback loops) may hinder software-based algorithms ignoring
them [30]. Therefore, any solution to the problem requires
the definition of convenient abstractions and appropriate
architectural constructs.
On the other hand, in order to deal with the information
flow through the whole computer system and environment in
a seamless way, handling software and hardware events
uniformly, it is also necessary to find adequate abstractions.
As discussed in Section 5, the Generic-Events Architecture
introduces the concept of Generic Event and an Event Layer
abstraction which aim at dealing, among others, with these
issues.
4. SENTIENT OBJECT COMPOSITION
In this section we analyze the most relevant issues related
with the sentient object paradigm and the construction of
systems composed of sentient objects.
4.1 Component-based System Construction
Sentient objects can take several different forms: they
can simply be software-based components, but they can also
comprise mechanical and/or hardware parts, amongst which
the very sensorial apparatus that substantiates sentience,
mixed with software components to accomplish their task.
We refine this notion by considering a sentient object as an
encapsulating entity, a component with internal logic and
active processing elements, able to receive, transform and
produce new events. This interface hides the internal
hardware/software structure of the object, which may be
complex, and shields the system from the low-level functional
and temporal details of controlling a specific sensor or
actuator.
Furthermore, given the inherent complexity of the
envisaged applications, the number of simultaneous input events
and the internal size of sentient objects may become too
large and difficult to handle. Therefore, it should be
possible to consider the hierarchical composition of sentient
objects so that the application logic can be separated across as
few or as many of these objects as necessary. On the other
hand, composition of sentient objects should normally be
constrained by the actual hardware component's structure,
preventing the possibility of arbitrarily composing sentient
objects. This is illustrated in Figure 1, where a sentient
object is internally composed of a few other sentient
objects, each of them consuming and producing events, some
of which only internally propagated.
Observing the figure, and recalling our previous discussion
about the possible interactions, we identify all of them here:
an object-to-environment interaction occurs between the
object controlling a WLAN transmitter and some WLAN
receiver in the environment; an environment-to-object
interaction takes place when the object responsible for the GPS
[Figure 1: Component-aware sentient object composition. A sentient object whose body contains components for GPS reception, wireless transmission and a Doppler radar, connected by an internal network, with physical feedback through the object's body.]
signal reception uses the information transmitted by the
satellites; finally, explicit object-to-object interactions occur
internally to the container object, through an internal
communication network. Additionally, it is interesting to
observe that implicit communication can also occur, whether
the physical feedback develops through the environment
internal to the container object (as depicted) or through the
environment external to this object. However, there is a
subtle difference between both cases. While in the former the
feedback can only be perceived by objects internal to the
container, bounding the extent to which consistency must
be ensured, such bounds do not exist in the latter. In fact,
the notion of sentient object as an encapsulating entity may
serve other purposes (e.g., the confinement of feedback and
of the propagation of events), beyond the mere hierarchical
composition of objects.
To give a more concrete example of such component-aware
object composition we consider a scenario of cooperating
robots. Each robot is made of several components,
corresponding, for instance, to axis and manipulator controllers.
Together with the control software, each of these controllers
may be a sentient object. On the other hand, a robot itself
is a sentient object, composed of the objects materialized
by the controllers, and the environment internal to its own
structure, or body.
This means that it should be possible to define
cooperation activities using the events produced by robot sentient
objects, without the need to know the internal structure of
robots, or the events produced by body objects or by smart
sensors within the body. From an engineering point of view,
however, this also means that a robot sentient object may have to generate new events that reflect its internal state, which requires the definition of a gateway to make the bridge
between the internal and external environments.
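As a purely illustrative sketch of this hierarchical composition (all names below are ours, not CORTEX's), a container object can consume external events, propagate events internally among its children, and expose only gateway-level events to the outside:

```python
class SentientObject:
    """Base behaviour: consume input events, produce output events."""
    def consume(self, event: dict) -> None:
        pass

    def produce(self) -> list:
        return []

class CompositeSentientObject(SentientObject):
    """A container whose body is a set of child objects plus an internal event bus."""
    def __init__(self, children: list):
        self.children = children

    def consume(self, event: dict) -> None:
        for child in self.children:            # an external event enters the body
            child.consume(event)

    def produce(self) -> list:
        internal = [e for child in self.children for e in child.produce()]
        for e in internal:                     # internal propagation (body-level network)
            for child in self.children:
                child.consume(e)
        # gateway: only events marked as externally relevant leave the container
        return [e for e in internal if e.get("external", False)]
```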
4.2 Encapsulation and Scoping
Now an important question is about how to represent and
disseminate events in a large scale networked world. As we
have seen above, any event generated by a sentient object
could, in principle, be visible anywhere in the system and
thus received by any other sentient object. However, there
are substantial obstacles to such universal interactions,
originating from the components' heterogeneity in such a large-scale setting.
Firstly, the components may have severe performance
constraints, particularly because we want to integrate smart
sensors and actuators in such an architecture. Secondly, the
bandwidth of the participating networks may vary largely.
Such networks may be low power, low bandwidth fieldbuses,
or more powerful wireless networks as well as high speed
backbones. Thirdly, the networks may have widely different
reliability and timeliness characteristics. Consider a
platoon of cooperating vehicles. Inside a vehicle there may be
a field-bus like CAN [8, 9], TTP/A [17] or LIN [20], with a
comparatively low bandwidth. On the other hand, the
vehicles are communicating with others in the platoon via a
direct wireless link. Finally, there may be multiple platoons
of vehicles which are coordinated by an additional wireless
network layer.
At the abstraction level of sentient objects, such
heterogeneity is reflected by the notion of body-vs-environment.
At the network level, we assume the WAN-of-CANs
structure [27] to model the different networks. The notion of
body and environment is derived from the recursively
defined component-based object model. A body is similar to
a cell membrane and represents a quality of service
container for the sentient objects inside. On the network level,
it may be associated with the components coupled by a
certain CAN. A CAN defines the dissemination quality which
can be expected by the cooperating objects.
In the above example, a vehicle may be a sentient object,
whose body is composed of the respective lower level objects
(sensors and actuators) which are connected by the internal
network (see Figure 1). Correspondingly, the platoon can be
seen itself as an object composed of a collection of
cooperating vehicles, its body being the environment encapsulated by
the platoon zone. At the network level, the wireless network
represents the respective CAN. However, several platoons
united by their CANs may interact with each other and
objects further away, through some wider-range, possible fixed
networking substrate, hence the concept of WAN-of-CANs.
The notions of body-environment and WAN-of-CANs are
very useful when defining interaction properties across such
boundaries. Their introduction followed from our belief that
a single mechanism to provide quality measures for
interactions is not appropriate. Instead, a high level construct
for interaction across boundaries is needed which allows to
specify the quality of dissemination and exploits the
knowledge about body and environment to assess the feasibility of
quality constraints. As we will see in the following section,
the notion of an event channel represents this construct in
our architecture. It disseminates events and allows the
network independent specification of quality attributes. These
attributes must be mapped to the respective properties of
the underlying network structure.
5. A GENERIC EVENTS ARCHITECTURE
In order to successfully apply event-based object-oriented
models, addressing the challenges enumerated in the
introduction of this paper, it is necessary to use adequate
architectural constructs, which allow the enforcement of
fundamental properties such as timeliness or reliability.
We propose the Generic-Events Architecture (GEAR),
depicted in Figure 2, which we briefly describe in what
follows (for a more detailed description please refer to [29]).
The L-shaped structure is crucial to ensure some of the
properties described.
Environment: The physical surroundings, remote and close,
solid and ethereal, of sentient objects.
[Figure 2: Generic-Events architecture. Sentient objects produce and consume events through an event layer; translation layers mediate between each body/environment (including operational networks) and the event layer; a communication layer connects the event layers of objects or object compounds across the regular network.]
Body: The physical embodiment of a sentient object (e.g.,
the hardware where a mechatronic controller resides,
the physical structure of a car). Note that due to the
compositional approach taken in our model, part of
what is environment to a smaller object seen
individually, becomes body for a larger, containing object.
In fact, the body is the internal environment of the
object. This architecture layering allows composition
to take place seamlessly, in what concerns information
flow.
Inside a body there may also be implicit knowledge,
which can be exploited to make interaction more
efficient, like the knowledge about the number of
cooperating entities, the existence of a specific
communication network or the simple fact that all components are
co-located and thus the respective events do not need
to specify location in their context attributes. Such
intrinsic information is not available outside a body and,
therefore, more explicit information has to be carried
by an event.
Translation Layer: The layer responsible for physical event
transformation from/to their native form to event
channel dialect, between environment/body and an event
channel. Essentially one doing observation and
actuation operations on the lower side, and doing
transactions of event descriptions on the other. On the lower
side this layer may also interact with dumb sensors or
actuators, therefore talking the language of the
specific device. These interactions are done through
operational networks (hence the antenna symbol in the
figure).
Event Layer: The layer responsible for event propagation in the whole system, through several Event Channels (ECs). In concrete terms, this layer is a kind of middleware that provides important event-processing services which are crucial for any realistic event-based system. For example, some of the services that imply the processing of events may include publishing, subscribing, discrimination (zoning, filtering, fusion, tracing), and queuing (a minimal publish/subscribe sketch is given after this list).
Communication Layer: The layer responsible for
wrapping events (as a matter of fact, event descriptions
in EC dialect) into carrier event-messages, to be
transported to remote places. For example, a
sensing event generated by a smart sensor is wrapped in
an event-message and disseminated, to be caught by
whoever is concerned. The same holds for an
actuation event produced by a sentient object, to be
delivered to a remote smart actuator. Likewise, this may
apply to an event-message from one sentient object
to another. Dumb sensors and actuators do not send
event-messages, since they are unable to understand the EC dialect (they have neither an event layer nor a communication layer; they communicate, if needed, through operational networks).
Regular Network: This is represented in the horizontal
axis of the block diagram by the communication layer,
which encompasses the usual LAN, TCP/IP, and
realtime protocols, desirably augmented with reliable and/or
ordered broadcast and other protocols.
The GEAR introduces some innovative ideas in distributed
systems architecture. While serving an object model based
on production and consumption of generic events, it treats
events produced by several sources (environment, body,
objects) in a homogeneous way. This is possible due to the use
of a common basic dialect for talking about events and due
to the existence of the translation layer, which performs the
necessary translation between the physical representation of
a real-time entity and the EC compliant format. Crucial to
the architecture is the event layer, which uses event channels
to propagate events through regular network infrastructures.
The event layer is realized by the COSMIC middleware, as
described in Section 7.
5.1 Information Flow in GEAR
The flow of information (external environment and
computational part) is seamlessly supported by the L-shaped
architecture. It occurs in a number of different ways, which
demonstrates the expressiveness of the model with regard to
the necessary forms of information encountered in real-time
cooperative and embedded systems.
Smart sensors produce events which report on the
environment. Body sensors produce events which report on
the body. They are disseminated by the local event layer
module, on an event channel (EC) propagated through the
regular network, to any relevant remote event layer
modules where entities showed an interest on them, normally,
sentient objects attached to the respective local event layer
modules.
Sentient objects consume events they are interested in,
process them, and produce other events. Some of these
events are destined to other sentient objects. They are
published on an EC using the same EC dialect that serves, e.g.,
sensor originated events. However, these events are
semantically of a kind such that they are to be subscribed by
the relevant sentient objects, for example, the sentient
objects composing a robot controller system, or, at a higher
level, the sentient objects composing the actual robots in
a cooperative application. Smart actuators, on the other
hand, merely consume events produced by sentient objects,
whereby they accept and execute actuation commands.
Alternatively to talking to other sentient objects, sentient
objects can produce events of a lower level, for example,
actuation commands on the body or environment. They
publish these exactly the same way: on an event channel
through the local event layer representative. Now, if these
commands are of concern to local actuator units (e.g., body,
including internal operational networks), they are passed on
to the local translation layer. If they are of concern to a
remote smart actuator, they are disseminated through the
distributed event layer, to reach the former. In any case,
if they are also of interest to other entities, such as other
sentient objects that wish to be informed of the actuation
command, then they are also disseminated through the EC
to these sentient objects.
A key advantage of this architecture is that event-messages
and physical events can be globally ordered, if necessary,
since they all pass through the event layer. The model also
offers opportunities to solve a long lasting problem in
realtime, computer control, and embedded systems: the
inconsistency between message passing and the feedback loop
information flow subsystems.
6. TEMPORAL ASPECTS OF THE
INTERACTIONS
Any interaction needs some form of predictability. If safety
critical scenarios are considered as it is done in CORTEX,
temporal aspects become crucial and have to be made
explicit. The problem is how to define temporal constraints
and how to enforce them by appropriate resource usage in a
dynamic ad-hoc environment. In a system where interactions are spontaneous, it may also be necessary to determine
temporal properties dynamically. To do this, the respective
temporal information must be stated explicitly and available
during run-time. Secondly, it is not always ensured that
temporal properties can be fulfilled. In these cases,
adaptations and timing failure notification must be provided [2,
28]. In most real-time systems, the notion of a deadline
is the prevailing scheme to express and enforce timeliness.
However, a deadline only weakly reflects the temporal
characteristics of the information which is handled. Secondly, a
deadline often includes implicit knowledge about the system
and the relations between activities. In a rather well defined,
closed environment, it is possible to make such implicit
assumptions and map these to execution times and deadlines.
E.g. the engineer knows how long a vehicle position can be
used before the vehicle movement outdates this information.
Thus he maps this dependency between speed and position
on a deadline which then assures that the position error
can be assumed to be bounded. In an open environment, this implicit mapping is no longer possible because, for an obvious reason, the relation between speed and position, and thus the error bound, cannot easily be reverse engineered
from a deadline. Therefore, our event model includes
explicit quality attributes which allow to specify the temporal
attributes for every individual event. This is of course an
overhead compared to the use of implicit knowledge, but in
a dynamic environment such information is needed.
To illustrate the problem, consider the example of the
position of a vehicle. A position is a typical example of a <time, value> entity [30]. Thus, the position is useful if we can determine an error bound which is related to time; e.g., if we want a position error below 10 meters to establish a safety property between cooperating cars moving at 5 m/sec, the position has a validity time of 2 seconds. In a <time, value> entity we can trade time against the precision
of the value. This is known as value over time and time over
value [18]. Once having established the time-value relation
and captured in event attributes, subscribers of this event
can locally decide about the usefulness of an information. In
the GEAR architecture temporal validity is used to reason
about safety properties in a event-based system [29]. We
will briefly review the respective notions and see how they
are exploited in our COSMIC event middleware.
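To make the time-value trade-off of the car example concrete, the following short Python sketch (illustrative only; the function name and parameters are our own and are not part of GEAR or COSMIC) derives a validity interval from an application-level error bound and the relative speed of the cooperating vehicles.

def position_validity_interval(error_bound_m, relative_speed_mps):
    """Time for which a position observation stays within the error bound.

    The position error grows at the rate the vehicles move relative to each
    other, so dividing the tolerated error by the relative speed gives the
    time after which the <time, value> entity must be considered invalid.
    """
    if relative_speed_mps <= 0:
        return float("inf")  # idealized: a relatively static observation never outdates
    return error_bound_m / relative_speed_mps

# Example from the text: a 10 m error bound at 5 m/s yields a 2 s validity time.
assert position_validity_interval(10.0, 5.0) == 2.0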
Consider the timeline of generating an event representing some real-time entity [18] from its occurrence to the notification of a certain sentient object (Figure 3). The real-time entity is captured at the sensor interface of the system and has to be transformed into a form which can be processed by a computer. During the time interval t_o the sensor reads the real-time entity and a time stamp is associated with the respective value. The derived ⟨time, value⟩ entity represents an observation. It may be necessary to perform substantial local computations to derive application-relevant information from the raw sensor data. However, it should be noted that the time stamp of the observation is associated with the capture time and is thus independent of further signal processing and event generation. This close relationship between capture time and the associated value is supported by the smart sensors described above.
The processed sensor information is assembled into an event data structure after t_s and published to an event channel. As described later, the event includes the time stamp of generation and the temporal validity as attributes.
The temporal validity is an application-defined measure for the expiration of a ⟨time, value⟩ entity. As we explained in the example of a position above, it may vary depending on application parameters. Temporal validity is a more general concept than that of a deadline. It is independent of a certain technical implementation of a system. While deadlines may be used to schedule the respective steps in event generation and dissemination, a temporal validity is an intrinsic property of the ⟨time, value⟩ entity carried in an event. A temporal validity allows reasoning about the usefulness of information and is beneficial even in systems in which timely dissemination of events cannot be enforced, because it enables timing failure detection at the event consumer. It is obvious that deadlines or periods can be derived from the temporal validity of an event. To set a deadline, knowledge of an implementation, worst-case execution times or message dissemination latencies is necessary. Thus, in the timeline of Figure 3 every interval may have a deadline. Event dissemination through soft real-time channels in COSMIC exploits the temporal validity to define dissemination deadlines. Quality attributes can be defined, for instance, in terms of ⟨validity interval, omission degree⟩ pairs. These allow the usefulness of the event to be characterized for a certain application, in a certain context. Because of that, the quality attributes of an event clearly depend on higher-level issues, such as the nature of the sentient object or of the smart sensor that produced the event. For instance, an event containing an indication of some vehicle speed must have different quality attributes depending on the kind of vehicle
Figure 3: Event processing and dissemination. (The figure shows the timeline from the occurrence of a real-world event, through the observation ⟨time stamp, value⟩ at the event producer, across the event channel and the communication network, to the notification at the event consumer. The intervals are t_o: time to obtain an observation; t_s: time to process the sensor reading; t_m: time to assemble an event message; t_t: time to transfer the event on the regular network; and t_n: time for notification on the consumer side.)
from which it originated, or depending on its current speed.
The same happens with the position event of the car
example above, whose validity depends on the current speed
and on a predefined required precision. However, since quality attributes are closely related to the semantics of the application or, at least, to some high-level knowledge of the purpose of the system (from which the validity of the information can be derived), the definition of these quality attributes may be done by exploiting the information provided at the programming interface. Therefore, it is important to understand how the system programmer can specify non-functional requirements at the API, and how these requirements translate into quality attributes assigned to events. While temporal validity is identified as an intrinsic event property, which is exploited to decide on the usefulness of data at a certain point in time, it is still necessary to provide a communication facility which can disseminate the event before its validity expires.
In a WAN-of-CANs network structure we have to cope with very different network characteristics and quality of service properties. Therefore, when crossing network boundaries, the quality of service guarantees available in a certain network will be lost, and it will be very hard, costly and perhaps impossible to achieve these properties in the next larger area of the WAN-of-CANs structure. CORTEX has a couple of abstractions to cope with this situation (network zones, body/environment) which have been discussed above. From the temporal point of view we now need a high-level abstraction, like the temporal validity for the individual event, to express our quality requirements for dissemination over the network. The ⟨bound, coverage⟩ pair, introduced in relation with the TCB [28], seems to be an appropriate approach. It considers the inherent uncertainty of networks and allows the quality of dissemination to be traded against the resources which are needed. In relation with the event channel model discussed later, the ⟨bound, coverage⟩ pair allows the quality properties of an event channel to be specified independently of specific technical issues. Given the typical environments in which sentient applications will operate, where it is difficult or even impossible to provide timeliness or reliability guarantees, we proposed an alternative way to handle non-functional application requirements, in relation with the TCB approach [28]. The proposed approach exploits intrinsic characteristics of applications, such as fail-safety or time-elasticity, in order to secure QoS specifications of the form ⟨bound, coverage⟩. Instead of
constructing systems that rely on guaranteed bounds, the idea
is to use (possibly changing) bounds that are secured with a
constant probability all over the execution. This obviously
requires an application to be able to adapt to changing
conditions (and/or changing bounds) or, if this is not possible,
to be able to perform some safety procedures when the
operational conditions degrade to an unbearable level. The
bounds we mentioned above refer essentially to timeliness bounds associated with the execution of local or distributed
activities, or combinations thereof. From these bounds it is
then possible to derive the quality attributes, in particular
validity intervals, that characterize the events published in
the event channel.
6.1 The Role of Smart Sensors and Actuators
Smart devices encapsulate hardware, software and mechanical components and provide information and a set of well-specified functions which are closely related to the interaction with the environment. The built-in computational components and the network interface enable the implementation of a well-defined, high-level interface that does not just provide raw transducer data, but a processed, application-related set of events. Moreover, they exhibit an autonomous, spontaneous behaviour. They differ from general-purpose nodes because they are dedicated to a certain functionality which complies with their sensing and actuating capabilities, while a general-purpose node may execute any program.
Concerning the sentient object model, smart sensors and actuators may be basic sentient objects themselves, consuming events from the real-world environment and producing the respective generic events for the system's event layer or, vice versa, consuming a generic event and converting it to a real-world event by an actuation. Smart components therefore constitute the periphery, i.e. the real-world interface, of a more complex sentient object. The model of sentient objects also constitutes the framework to build more complex virtual sensors by relating multiple primary sensors (i.e. sensors which directly sense a physical entity).
Smart components translate events of the environment
to an appropriate form available at the event layer or, vice
versa, transform a system event into an actuation. For smart
components we can assume that:
• Smart components have dedicated resources to
perform a specific function.
• These resources are not used for other purposes during
normal real-time operation.
• No local temporal conflicts occur that will change the
observable temporal behaviour.
• The functions of a component can usually only be
changed during a configuration procedure which is not
performed when the component is involved in critical
operations.
• An observation of the environment as a ⟨time, value⟩ pair can be obtained with a bounded jitter in time.
Many predictability and scheduling problems arise from the fact that very low-level timing behaviours have to be handled on a single processor. Here, temporal encapsulation of activities is difficult because of the possible side effects when sharing a single processor resource. Consider the control of a simple IR-range detector which is used for obstacle avoidance. Depending on its range and the speed of a vehicle, it has to be polled to prevent the vehicle from crashing into an obstacle. On a single central processor, this critical activity has to be coordinated with many similar, possibly less critical functions. It means that a very fine-grained schedule has to be derived based purely on the artifacts of the low-level device control. In a smart sensor component, all this low-level timing behaviour can be optimized and encapsulated. Thus we can assume temporal encapsulation similar to information hiding in the functional domain. Of course, there is still the problem of guaranteeing that an event will be disseminated and recognized in due time by the respective system components, but this relates to application-related events rather than to the low-level artifacts of device timing. The main responsibility for providing timeliness guarantees is shifted to the event layer where these events are disseminated. Smart sensors thus lead to a network-centric system model. The network constitutes the shared resource which has to be scheduled in a predictable way. The
COSMIC middleware introduced in the next section is an
approach to provide predictable event dissemination for a
network of smart sensors and actuators.
7. AN EVENT MODEL ANDMIDDLEWARE
FOR COOPERATING SMART DEVICES
An event model and a middleware suitable for smart
components must support timely and reliable communication
and also must be resource efficient. COSMIC
(COoperating Smart devices) is aimed at supporting the
interaction between those components according to the concepts
introduced so far. Based on the model of a WAN-of-CANs, we assume that the components are connected to some form of CAN, such as a fieldbus or a special wireless sensor network, which provides specific network properties. For example, a fieldbus developed for control applications usually includes mechanisms for predictable communication, while other networks only support best-effort dissemination. A gateway
connects these CANs to the next level in the network hierarchy.
The event system should allow the dynamic interaction over
a hierarchy of such networks and comply with the overall
CORTEX generic event model. Events are typed
information carriers and are disseminated in a publisher/subscriber style [24, 7], which is particularly suitable because it supports generative, anonymous communication [3] and does not create any artificial control dependencies between producers of information and the consumers. This decoupling in space (no references or names of senders or receivers are needed for communication) and the flow decoupling (no control transfer occurs with a data transfer) are well known [24, 7, 14] and are crucial properties for maintaining the autonomy of components and dynamic interaction.
It is obvious that not all networks can provide the same QoS guarantees and, secondly, that applications may have widely differing requirements for event dissemination. Additionally, when striving for predictability, resources have to be reserved and data structures must be set up before communication takes place. Thus, these things cannot be done predictably on the fly while disseminating an event. Therefore, we introduced the notion of an event channel to cope with differing properties and requirements and to have an object to which we can assign resources and reservations. The concept of an event channel is not new [10, 25]; however, it has not yet been used to reflect the properties of the underlying heterogeneous communication networks and mechanisms as described by the GEAR architecture. Rather, existing event middleware allows the priorities or deadlines of events handled in an event server to be specified. Event channels allow the communication properties to be specified at the level of the event system in a fine-grained way. An event channel is
defined by:
event channel := ⟨subject, quality attributeList, handlers⟩
The subject determines the types of events which may be issued to the channel. The quality attributes model the properties of the underlying communication network and dissemination scheme. These attributes include latency specifications, dissemination constraints and reliability parameters. The notion of zones, which represent a guaranteed quality of service in a subnetwork, supports this approach. Our goal is to handle the temporal specifications as ⟨bound, coverage⟩ pairs [28], orthogonal to the more technical questions of how to achieve a certain synchrony property of the dissemination infrastructure. Currently, we support quality attributes of event channels in a CAN-Bus environment represented by explicit synchrony classes.
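A minimal sketch of how such an event channel description could be represented is given below in Python; the field names and the SynchronyClass enumeration are illustrative assumptions and do not reproduce the actual COSMIC data structures.

from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, List

class SynchronyClass(Enum):
    HARD_REAL_TIME = "HRTC"   # guaranteed propagation within defined time constraints
    SOFT_REAL_TIME = "SRTC"   # deadlines derived from the temporal validity of events
    NON_REAL_TIME = "NRTC"    # best-effort dissemination

@dataclass
class QualityAttributes:
    validity_interval_s: float    # part of the <validity interval, omission degree> pair
    omission_degree: int
    synchrony: SynchronyClass     # models what the underlying network zone can guarantee

@dataclass
class EventChannel:
    subject: str                  # the type of events which may be issued to the channel
    quality: QualityAttributes
    handlers: List[Callable] = field(default_factory=list)  # local notification handlers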
The COSMIC middleware maps the channel properties to lower-level protocols of the regular network. Based on our previous work on predictable protocols for the CAN-Bus, COSMIC defines an abstract network which provides hard, soft and non-real-time message classes [21]. Correspondingly, we distinguish three event channel classes according to their synchrony properties: hard real-time channels, soft real-time channels and non-real-time channels. Hard real-time channels (HRTC) guarantee event propagation within the defined time constraints in the presence of a specified number of omission faults. HRTCs are supported by a reservation scheme which is similar to the scheme used in time-triggered protocols like TTP [16][31], TTP/A [17], and TTCAN [8]. However, a substantial advantage over a TDMA scheme is that, due to CAN-Bus properties, bandwidth which was reserved but is not needed by an HRTC can be used by less critical traffic [21].
Soft real-time channels (SRTC) exploit the temporal validity interval of events to derive deadlines for scheduling. The validity interval defines the point in time after which an event becomes temporally inconsistent. Therefore, in a real-time system an event is useless after this point and may be discarded. The transmission deadline (DL) is defined as the latest point in time at which a message has to be transmitted, and is specified within a time interval which is derived from the expiration time:

t_event_ready < DL < t_expiration − Δ_notification

t_expiration defines the point in time when the temporal validity expires. Δ_notification is the expected end-to-end latency, which includes the transfer time over the network and the time the event may be delayed by the local event handling in the nodes. As said before, event deadlines are used to schedule the dissemination by SRTCs. However, deadlines may be missed in transient overload situations or due to arbitrary arrival times of events. On the publisher side, the application's exception handler is called whenever the event deadline expires before event transmission. At this point in time the event can also no longer be expected to arrive at the subscriber side before the validity expires. Therefore, the event is removed from the sending queue. On the subscriber side, the expiration time is used to schedule the delivery of the event. If the event cannot be delivered before its expiration time, it is removed from the respective queues allocated by the COSMIC middleware. This prevents the communication system from being loaded with outdated messages.
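The deadline window above can be expressed directly in code; the sketch below is illustrative only, and it assumes that the end-to-end latency estimate Δ_notification is supplied by the abstract network layer.

from typing import Optional

def transmission_deadline(t_event_ready: float,
                          t_expiration: float,
                          delta_notification: float) -> Optional[float]:
    """Latest transmission time for an event sent over a soft real-time channel.

    Implements t_event_ready < DL < t_expiration - delta_notification.
    Returns None when the window is already empty, i.e. the event cannot
    reach the subscriber while it is still temporally valid and should be
    purged from the sending queue (triggering the exception notification).
    """
    latest = t_expiration - delta_notification
    if latest <= t_event_ready:
        return None
    return latest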
Non-real-time channels do not assume any temporal specification and disseminate events in a best-effort manner. An
instance of an event channel is created locally, whenever a
publisher makes an announcement for publication or a
subscriber subscribes for an event notification. When a
publisher announces publication, the respective data structures
of an event channel are created by the middleware. When
a subscriber subscribes to an event channel, it may specify
context attributes of an event which are used to filter events
locally. For example, a subscriber may only be interested in events generated at a certain location. Additionally, the subscriber specifies the quality properties of the event channel. A more
detailed description of the event channels can be found in [13].
Currently, COSMIC handles all event channels which
disseminate events beyond the CAN network boundary as non
real-time event channels. This is mainly because we use the
TCP/IP protocol to disseminate events over wireless links
or to the standard Ethernet. However, there are a
number of possible improvements which can easily be integrated
in the event channel model. The Timely Computing Base
(TCB) [28] can be exploited for timing failure detection and
thus would provide awareness for event dissemination in
environments where timely delivery of events cannot be
enforced. Additionally, there are wireless protocols which can
provide timely and reliable message delivery [6, 23] which
may be exploited for the respective event channel classes.
Events are the information carriers which are exchanged
between sentient objects through event channels. To cope
with the requirements of an ad-hoc environment, an event
includes the description of the context in which it has been
generated and quality attributes defining requirements for
dissemination. This is particularly important in an open,
dynamic environment where an event may travel over
multiple networks. An event instance is specified as:
event := ⟨subject, context attributeList, quality attributeList, contents⟩
A subject defines the type of the event and is related
to the event contents. It supports anonymous
communication and is used to route an event. The subject has to match the subject of the event channel through which the event is disseminated. Attributes are complementary to the event contents. They describe individual functional and non-functional properties of the event. The context attributes describe the environment in which the event has been generated, e.g. a location, an operational mode or a time of occurrence. The quality attributes specify timeliness and dependability aspects in terms of ⟨validity interval, omission degree⟩ pairs. The validity interval defines the point in time after which an event becomes temporally inconsistent [18]. As described above, the temporal validity can be mapped to a deadline. However, a deadline is usually an engineering artefact which is used for scheduling, while the temporal validity is a general property of a ⟨time, value⟩ entity. In an environment where a deadline cannot be enforced, a consumer of an event eventually must decide whether the event is still temporally consistent, i.e. whether it represents a valid ⟨time, value⟩ entity.
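The event structure and the consumer-side consistency check can be sketched as follows; again, this is an illustrative Python fragment with assumed field names rather than the COSMIC wire format.

import time
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

@dataclass
class Event:
    subject: str                 # event type, matched against the channel subject
    context: Dict[str, Any]      # e.g. location, operational mode, time of occurrence
    quality: Dict[str, Any]      # e.g. {"validity_interval": 2.0, "omission_degree": 1}
    contents: Any
    timestamp: float = field(default_factory=time.time)  # capture time of the observation

    def is_temporally_consistent(self, now: Optional[float] = None) -> bool:
        """Consumer-side check: does the event still represent a valid <time, value> entity?"""
        now = time.time() if now is None else now
        return now <= self.timestamp + float(self.quality["validity_interval"])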
7.1 The Architecture of the COSMIC
Middleware
On the architectural level, COSMIC distinguishes three layers, roughly depicted in Figure 4. Two of them, the event layer and the abstract network layer, are implemented by the COSMIC middleware. The event layer provides the API for the application and realizes the abstractions of events and event channels.
The abstract network implements real-time message classes
and adapts the quality requirements to the underlying real
network. An event channel handler resides in every node. It
supports the programming interface and provides the
necessary data structures for event-based communication.
Whenever an object subscribes to a channel or a publisher
announces a channel, the event channel handler is involved. It initiates the binding of the channel's subject, which is represented by a network-independent unique identifier, to an address of the underlying abstract network to enable communication [14]. The event channel handler then tightly cooperates with the respective handlers of the abstract network layer to disseminate events or receive event notifications. It should be noted that the QoS properties of the event layer in general depend on what the abstract network layer can provide. Thus, it may not always be possible to, for example, support hard real-time event channels, because the abstract network layer cannot provide the respective guarantees. In [13], we
describe the protocols and services of the abstract network
layer particularly for the CAN-Bus.
As can be seen in Figure 4, the hard real-time (HRT)
message class is supported by a dedicated handler which is
able to provide the time triggered message dissemination.
Figure 4: Architecture layers of COSMIC. (Per node, the figure shows the Event Layer with the Event Channel Handler (ECH) and the event channel specifications, offering the publish, announce and subscribe interfaces as well as exception and notification interfaces towards the application; the Abstract Network Layer with the HRTC handler (HRT message list and HRT message calendar), the S/NRTC handler (SRT and NRT message queues), the binding and configuration protocols and a global time service; and the CAN Layer with RX/TX buffers and RX, TX and error interrupts.)
The HRT handler maintains the HRT message list, which
contains an entry for each local HRT message to be sent.
The entry holds the parameters for the message, the
activation status and the binding information. Messages are
scheduled on the bus according to the HRT message
calendar which comprises the precise start time for each time slot
allocated for a message. Soft real-time message queues order
outgoing messages according to their transmission deadlines
derived from the temporal validity interval. If the
transmission deadline is exceeded, the event message is purged out of
the queue. The respective application is notified via the
exception notification interface and can take actions like trying
to publish the event again or publish it to a channel of
another class. Incoming event messages are ordered according to their temporal validity. If an event message arrives, the respective applications are notified. At the moment, an outdated message is deleted from the queue, and if the queue runs out of space, the oldest message is discarded. However, other policies are possible depending on event attributes and available memory space. Non-real-time messages are FIFO-ordered in a fixed-size circular buffer.
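The queueing policies described above can be summarized in the following sketch; the queue size, the exception callback and the purge policy shown here are assumptions for illustration and are not the concrete COSMIC implementation, which runs on small micro-controllers.

import heapq
import itertools
from collections import deque

_seq = itertools.count()  # tie-breaker so messages never need to be compared directly

class SoftRealTimeQueue:
    """Outgoing SRT messages ordered by transmission deadline; expired ones are purged."""

    def __init__(self, on_deadline_expired):
        self._heap = []                            # entries: (deadline, seq, message)
        self._on_deadline_expired = on_deadline_expired

    def push(self, deadline, message):
        heapq.heappush(self._heap, (deadline, next(_seq), message))

    def pop_ready(self, now):
        """Return the next message whose deadline has not yet passed."""
        while self._heap:
            deadline, _, message = heapq.heappop(self._heap)
            if deadline < now:
                self._on_deadline_expired(message)  # exception notification to the publisher
                continue
            return message
        return None

# Non-real-time messages: FIFO in a fixed-size circular buffer (oldest dropped when full).
non_real_time_queue = deque(maxlen=64)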
7.2 Status of COSMIC
The goal for developing COSMIC was to provide a
platform to seamlessly integrate smart tiny components in a
large system. Therefore, COSMIC should also run on small, resource-constrained devices which are built around 16-bit or even 8-bit micro-controllers. The distributed COSMIC middleware has been implemented and tested on various platforms. Under RT-Linux, we support the real-time channels over the CAN Bus as described above. The RT-Linux version runs on Pentium processors and is currently being evaluated before we intend to port it to a smart sensor or
actuator. For the interoperability in a WAN-of-CANs
environment, we currently only provide non-real-time channels. This version includes a gateway between the CAN bus and a TCP/IP network. It allows us to use a standard wireless 802.11 network. The non-real-time version of
COSMIC is available on Linux, RT-Linux and on the
microcontroller families C167 (Infineon) and 68HC908 (Motorola).
Both micro-controllers have an on-board CAN controller
and thus do not require additional hardware components for
the network. The memory footprint of COSMIC is about 13
Kbyte on a C167 and slightly more on the 68HC908 where it
fits into the on-board flash memory without problems.
Because only a few channels are required on such a smart sensor
or actuator component, the amount of RAM (which is a scarce resource on many single-chip systems) needed to hold the dynamic data structures of a channel is low. The COSMIC middleware makes it very easy to include new smart sensors in an existing system. In particular, the application running on a smart sensor to condition and process the raw physical data need not be aware of any low-level, network-specific
details. It seamlessly interacts with other components of the
system exclusively via event channels.
The demo example, briefly described in the next chapter,
is using a distributed infrastructure of tiny smart sensors
and actuators directly cooperating via event channels over
heterogeneous networks.
8. AN ILLUSTRATIVE EXAMPLE
A simple example illustrating many important properties of the proposed system, showing coordination through the environment and events disseminated over the network, is the demo of two cooperating robots depicted in Figure 5. Each robot is equipped with smart distance sensors, speed sensors and acceleration sensors, and one of the robots (the guide (KURT2) in front (Figure 5)) has a tracking camera allowing it to follow a white line. The robots form a WAN-of-CANs
system in which their local CANs are interconnected via a
wireless 802.11 network. COSMIC provides the event layer
for seamless interaction. The blind robot (N.N.) searches for the guide randomly. Whenever the blind robot detects (by its front distance sensors) an obstacle, it checks whether this may be the guide. For this purpose, it dynamically subscribes to the event channel disseminating distance events from the rear distance sensors of the guide(s) and compares these with the distance events from its local front sensors. If the distance is approximately the same, it infers that it is really behind a guide. Now N.N. also subscribes to the event channels of the tracking camera and the speed sensors
Figure 5: Cooperating robots.
to follow the guide. The demo application highlights the
following properties of the system:
1. Dynamic interaction between robots which is not known in advance. In principle, any two a priori unknown robots can cooperate. All that publishers and subscribers have to know to dynamically interact in this environment is the subject of the respective event class. A problem will be to receive only the events of the robot which is closest. A robot identity does not help much to solve this problem. Rather, the position of the event-generating entity, which is captured in the respective attributes, can be evaluated to filter the relevant event out of the event stream. A suitable wireless protocol
which uses proximity to filter events has been proposed
by Meier and Cahill [22] in the CORTEX project.
2. Interaction through the environment. The
cooperation between the robots is controlled by sensing the
distance between the robots. If the guide detects that
the distance grows, it slows down. Conversely, if the blind robot comes too close, it reduces its speed. The local distance sensors produce events which are disseminated through a low-latency, highly predictable event channel. The respective reaction time can be calculated as a function of the speed and the distance of the robots and defines a dynamic dissemination deadline for events. Thus, the interaction through the environment will secure the safety properties of the application, i.e. the follower may not crash into the guide and the guide may not lose the follower. Additionally, the robots have remote subscriptions to the respective distance events, which are checked against the local sensor readings to validate that they really follow the guide which they detect with their local sensors. Because there may be longer latencies and omissions, this check will occasionally not be possible. The unavailability of the remote events will decrease the quality of interaction and probably slow down the robots, but will not affect safety properties.
3. Cooperative sensing. The blind robot subscribes to the
events of the line tracking camera. Thus it can see
through the eye of the guide. Because it knows the
distance to the guide and the speed as well, it can foresee
the necessary movements. The proposed system
provides the architectural framework for such a
cooperation. The respective sentient object controlling the
actuation of the robot receives as input the position
and orientation of the white line to be tracked. In the
case of the guide robot, this information is directly
delivered as a body event with a low latency and a high
reliability over the internal network. For the follower robot, the information also comes via an event channel
but with different quality attributes. These quality
attributes are reflected in the event channel description.
The sentient object controlling the actuation of the
follower is aware of the increased latency and higher
probability of omission.
9. CONCLUSION AND FUTURE WORK
The paper addresses problems of building large distributed systems that interact with the physical environment and are composed of a huge number of smart components. We cannot assume that the network architecture in such a system is homogeneous. Rather, multiple edge networks are fused into a hierarchical, heterogeneous wide area network. They connect the tiny sensors and actuators perceiving the environment and providing sentience to the application. Additionally, mobility and dynamic deployment of components require dynamic interaction without fixed, a priori known addressing and routing schemes. The work presented in the paper is a contribution towards seamless interaction in such an environment, which should not be restricted by technical obstacles. Rather, it should be possible to control the flow of information by explicitly specifying functional and temporal dissemination constraints.
The paper presented the general model of a sentient
object to describe composition, encapsulation and interaction
in such an environment and developed the Generic Event Architecture (GEAR), which integrates the interaction through
the environment and the network. While appropriate
abstractions and interaction models can hide the functional
heterogeneity of the networks, it is impossible to hide the
quality differences. Therefore, one of the main concerns is
to define temporal properties in such an open
infrastructure. The notion of an event channel has been introduced, which allows quality aspects to be specified explicitly. These can be verified at subscription time and define a boundary for event dissemination. The COSMIC middleware is a first attempt
to put these concepts into operation. COSMIC allows the
interoperability of tiny components over multiple network
boundaries and supports the definition of different real-time
event channel classes.
Many open questions have emerged from our work. One direction of future research will be the inclusion of real-world communication channels established between sensors and actuators in the temporal analysis, and the ordering of such events in a cause-effect chain. Additionally, the provision of timing failure detection for the adaptation of interactions will be a focus of our research. To reduce network traffic and to disseminate only those events to subscribers in which they are really interested and which have a chance of arriving in time, the encapsulation and scoping schemes have to be transformed into respective multi-level filtering rules. The event attributes which describe aspects of the context and the temporal constraints for dissemination will be exploited for this purpose. Finally, it is intended to integrate the results into the COSMIC middleware to enable experimental assessment.
10. REFERENCES
[1] J. Bacon, K. Moody, J. Bates, R. Hayton, C. Ma,
A. McNeil, O. Seidel, and M. Spiteri. Generic support
for distributed applications. IEEE Computer,
33(3):68-76, 2000.
[2] L. B. Becker, M. Gergeleit, S. Schemmer, and E. Nett.
Using a flexible real-time scheduling strategy in a
distributed embedded application. In Proc. of the 9th
IEEE International Conference on Emerging
Technologies and Factory Automation (ETFA), Lisbon,
Portugal, Sept. 2003.
[3] N. Carriero and D. Gelernter. Linda in context.
Communications of the ACM, 32(4):444-458, Apr. 1989.
[4] A. Casimiro (Ed.). Preliminary definition of cortex
system architecture. CORTEX project,
IST-2000-26031, Deliverable D4, Apr. 2002.
[5] CORTEX project Annex 1, Description of Work.
Technical report, CORTEX project, IST-2000-26031,
Oct. 2000. http://cortex.di.fc.ul.pt.
[6] R. Cunningham and V. Cahill. Time bounded medium
access control for ad-hoc networks. In Proceedings of
the Second ACM International Workshop on Principles
of Mobile Computing (POMC'02), pages 1-8, Toulouse,
France, Oct. 2002. ACM Press.
[7] P. T. Eugster, P. Felber, R. Guerraoui, and A.-M.
Kermarrec. The many faces of publish/subscribe.
Technical Report DSC ID:200104, EPFL, Lausanne,
Switzerland, 2001.
[8] T. Führer, B. Müller, W. Dieterle, F. Hartwich,
R. Hugel, and M. Walther. Time triggered
communication on CAN, 2000.
http://www.can-cia.org/can/ttcan/fuehrer.pdf.
[9] Robert Bosch GmbH. CAN Specification Version 2.0. Technical
report, Sept. 1991.
[10] T. Harrison, D. Levine, and D. Schmidt. The design
and performance of a real-time corba event service. In
Proceedings of the 1997 Conference on Object Oriented
Programming Systems, Languages and Applications
(OOPSLA), pages 184-200, Atlanta, Georgia, USA,
1997. ACM Press.
[11] J. Hightower and G. Borriello. Location systems for
ubiquitous computing. IEEE Computer, 34(8):57-66,
Aug. 2001.
[12] A. Hopper. The Clifford Paterson Lecture, 1999
Sentient Computing. Philosophical Transactions of the
Royal Society London, 358(1773):2349-2358, Aug. 2000.
[13] J. Kaiser, C. Mitidieri, C. Brudna, and C. Pereira.
COSMIC: A Middleware for Event-Based Interaction
on CAN. In Proc. 2003 IEEE Conference on Emerging
Technologies and Factory Automation, Lisbon,
Portugal, Sept. 2003.
[14] J. Kaiser and M. Mock. Implementing the real-time
publisher/subscriber model on the controller area
network (CAN). In Proceedings of the 2nd International
Symposium on Object-oriented Real-time distributed
Computing (ISORC99), Saint-Malo, France, May 1999.
[15] K. Kim, G. Jeon, S. Hong, T. Kim, and S. Kim.
Integrating subscription-based and connection-oriented
communications into the embedded CORBA for
the CAN Bus. In Proceedings of the IEEE Real-time
Technology and Application Symposium, May 2000.
[16] H. Kopetz and G. Grünsteidl. TTP - A
Time-Triggered Protocol for Fault-Tolerant Real-Time
Systems. Technical Report rr-12-92, Institut für
Technische Informatik, Technische Universität Wien,
Treilstr. 3/182/1, A-1040 Vienna, Austria, 1992.
[17] H. Kopetz, M. Holzmann, and W. Elmenreich. A
Universal Smart Transducer Interface: TTP/A.
International Journal of Computer System, Science
Engineering, 16(2), Mar. 2001.
[18] H. Kopetz and P. Veríssimo. Real-time and
Dependability Concepts. In S. J. Mullender, editor,
Distributed Systems, 2nd Edition, ACM-Press,
chapter 16, pages 411-446. Addison-Wesley, 1993.
[19] S. Lankes, A. Jabs, and T. Bemmerl. Integration of a
CAN-based connection-oriented communication model
into Real-Time CORBA. In Workshop on Parallel and
Distributed Real-Time Systems, Nice, France, Apr.
2003.
[20] Local Interconnect Network: LIN Specification
Package Revision 1.2. Technical report, Nov. 2000.
[21] M. Livani, J. Kaiser, and W. Jia. Scheduling hard and
soft real-time communication in the controller area
network. Control Engineering, 7(12):1515-1523, 1999.
[22] R. Meier and V. Cahill. Steam: Event-based
middleware for wireless ad-hoc networks. In Proceedings
of the International Workshop on Distributed
Event-Based Systems (ICDCS/DEBS'02), pages
639-644, Vienna, Austria, 2002.
[23] E. Nett and S. Schemmer. Reliable real-time
communication in cooperative mobile applications.
IEEE Transactions on Computers, 52(2):166-180, Feb.
2003.
[24] B. Oki, M. Pfluegl, A. Seigel, and D. Skeen. The
information bus - an architecture for extensible
distributed systems. Operating Systems Review,
27(5):58-68, 1993.
[25] O. M. G. (OMG). CORBAservices: Common Object
Services Specification - Notification Service
Specification, Version 1.0, 2000.
[26] O. M. G. (OMG). Smart transducer interface, initial
submission, June 2001.
[27] P. Veríssimo, V. Cahill, A. Casimiro, K. Cheverst,
A. Friday, and J. Kaiser. Cortex: Towards supporting
autonomous and cooperating sentient entities. In
Proceedings of European Wireless 2002, Florence, Italy,
Feb. 2002.
[28] P. Veríssimo and A. Casimiro. The Timely Computing
Base model and architecture. Transactions on
Computers - Special Issue on Asynchronous Real-Time
Systems, 51(8):916-930, Aug. 2002.
[29] P. Veríssimo and A. Casimiro. Event-driven support of
real-time sentient objects. In Proceedings of the 8th
IEEE International Workshop on Object-oriented
Real-time Dependable Systems, Guadalajara, Mexico,
Jan. 2003.
[30] P. Veríssimo and L. Rodrigues. Distributed Systems for
System Architects. Kluwer Academic Publishers, 2001.
39 | smart sensor;sensor and actuator;corba;dissemination quality;sentient object;event channel;real-time entity;generic event architecture;gear architecture;soft real-time channel;temporal constraint;event-based system;cosmic middleware;middleware architecture;cortex;temporal validity;event-base communication;componentbase system;sentient computing |
train_C-79 | A Cross-Layer Approach to Resource Discovery and Distribution in Mobile ad-hoc Networks | This paper describes a cross-layer approach to designing robust P2P system over mobile ad-hoc networks. The design is based on simple functional primitives that allow routing at both P2P and network layers to be integrated to reduce overhead. With these primitives, the paper addresses various load balancing techniques. Preliminary simulation results are also presented. | 1. INTRODUCTION
Mobile ad-hoc networks (MANETs) consist of mobile nodes that
autonomously establish connectivity via multi-hop wireless
communications. Without relying on any existing, pre-configured
network infrastructure or centralized control, MANETs are useful
in situations where impromptu communication facilities are
required, such as battlefield communications and disaster relief
missions. As MANET applications demand collaborative
processing and information sharing among mobile nodes, resource
(service) discovery and distribution have become indispensable
capabilities.
One approach to designing resource discovery and distribution
schemes over MANETs is to construct a peer-to-peer (P2P)
system (or an overlay) which organizes peers of the system into a
logical structure, on top of the actual network topology. However,
deploying such P2P systems over MANETs may result in either a
large number of flooding operations triggered by the reactive
routing process, or inefficiency in terms of bandwidth utilization in
proactive routing schemes. Either way, constructing an overlay
will potentially create a scalability problem for large-scale
MANETs.
Due to the dynamic nature of MANETs, P2P systems should be
robust by being scalable and adaptive to topology changes. These
systems should also provide efficient and effective ways for peers
to interact, as well as other desirable application specific features.
This paper describes a design paradigm that uses the following
two functional primitives to design robust resource discovery and
distribution schemes over MANETs.
1. Positive/negative feedback. Query packets are used to
explore a route to other peers holding resources of interest.
Optionally, advertisement packets are sent out to advertise
routes from other peers about available resources. When
traversing a route, these control packets measure goodness
of the route and leave feedback information on each node
along the way to guide subsequent control packets to
appropriate directions.
2. Sporadic random walk. As the network topology and/or
the availability of resources change, existing routes may
become stale while better routes become available. Sporadic
random walk allows a control packet to explore different
paths and opportunistically discover new and/or better
routes.
Adopting this paradigm, the whole MANET P2P system operates
as a collection of autonomous entities which consist of different
types of control packets such as query and advertisement packets.
These packets work collaboratively, but indirectly, to achieve
common tasks, such as resource discovery, routing, and load
balancing. With collaboration among these entities, a MANET P2P
system is able to 'learn' the network dynamics by itself and adjust
its behavior accordingly, without the overhead of organizing peers
into an overlay.
The remainder of this paper is organized as follows. Related work
is described in the next section. Section 3 describes the resource
discovery scheme. Section 4 describes the resource distribution
scheme. The replica invalidation scheme is described in Section 5,
followed by its performance evaluation in Section 6. Section 7
concludes the paper.
2. RELATED WORK
For MANETs, P2P systems can be classified, based on the design principle, into layered and cross-layer approaches. A layered
approach adopts a P2P-like [1] solution, where resource discovery is
facilitated as an application layer protocol and query/reply
messages are delivered by the underlying MANET routing protocols.
For instance, Konark [2] makes use of a underlying multicast
protocol such that service providers and queriers advertise and
search services via a predefined multicast group, respectively.
Proem [3] is a high-level mobile computing platform for P2P
systems over MANETs. It defines a transport protocol that sits on
top of the existing TCP/IP stack, hence relying on an existing
routing protocol to operate. With limited control over how control
and data packets are routed in the network, it is difficult to avoid
the inefficiency of the general-purpose routing protocols which
are often reactive and flooding-based.
In contrast, a cross-layer approach either relies on its own routing mechanism or augments existing MANET routing algorithms to support resource discovery. 7DS [4], which is the pioneering work deploying a P2P system on mobile devices, exploits data locality and node mobility to disseminate data in a single-hop fashion. Hence, long search latency may result, as a 7DS node can get data of interest only if the node that holds the data is
in its radio coverage. Mohan et al. [5] propose an adaptive service
discovery algorithm that combines both push and pull models.
Specifically, a service provider/querier broadcasts
advertisement/query only when the number of nodes advertising or
querying, which is estimated by received control packets, is below a
threshold during a period of time. In this way, the number of
control packets on the network is constrained, thus providing good
scalability. Despite the mechanism to reduce control packets, high
overhead may still be unavoidable, especially when there are
many clients trying to locate different services, due to the fact that
the algorithm relies on flooding.
For resource replication, Yin and Cao [6] design and evaluate
cooperative caching techniques for MANETs. Caching, however,
is performed reactively by intermediate nodes when a querier
requests data from a server. Data items or resources are never
pushed into other nodes proactively. Thanedar et al. [7] propose a
lightweight content replication scheme using an expanding ring
technique. If a server detects that the number of requests exceeds a threshold within a time period, it begins to replicate its data onto
nodes capable of storing replicas, whose hop counts from the
server are of certain values. Since data replication is triggered by
the request frequency alone, it is possible that there are replicas
unnecessarily created in a large scope even though only nodes
within a small range request this data. Our proposed resource
replication mechanism, in contrast, attempts to replicate a data
item in appropriate areas, instead of a large area around the server,
where the item is requested frequently.
3. RESOURCE DISCOVERY
We propose a cross-layer, hybrid resource discovery scheme that
relies on the multiple interactions of query, reply and
advertisement packets. We assume that each resource is associated with a unique ID (see Footnote 1). Initially, when a node wants to discover a resource, it
deploys query packets, which carry the corresponding resource
ID, and randomly explore the network to search for the requested
resource. Upon receiving such a query packet, a reply packet is
generated by the node providing the requested resource.
Advertisement packets can also be used to proactively inform other
nodes about what resources are available at each node. In addition
to discovering the 'identity' of the node providing the requested resource, it may also be necessary to discover a 'route' leading to this node for further interaction.
(Footnote 1: The assumption of a unique ID is made for brevity in exposition; resources could be specified via attribute-value assertions.)
To allow intermediate nodes to make a decision on where to forward query packets, each node maintains two tables: a neighbor table and a pheromone table. The neighbor table maintains a list of
all current neighbors obtained via a neighbor discovery protocol.
The pheromone table maintains the mapping of a resource ID and
a neighbor ID to a pheromone value. This table is initially empty,
and is updated by a reply packet generated by a successful query.
Figure 1 illustrates an example of a neighbor table and a
pheromone table maintained by node A having four neighbors. When
node A receives a query packet searching for a resource, it makes
a decision to which neighbor it should forward the query packet
by computing the desirability of each of the neighbors that have
not been visited before by the same query packet. For a resource
ID r, the desirability of choosing a neighbor n, δ(r,n), is obtained
from the pheromone value of the entry whose neighbor and
resource ID fields are n and r, respectively. If no such entry exists in
the pheromone table, δ(r,n) is set to zero.
Once the desirabilities of all valid next hops have been calculated,
they are normalized to obtain the probability of choosing each
neighbor. In addition, a small probability is also assigned to those
neighbors with zero desirability to exercise the sporadic random
walk primitive. Based on these probabilities, a next hop is
selected to forward the query packet to. When a query packet
encounters a node with a satisfying resource, a reply packet is
returned to the querying node. The returning reply packet also
updates the pheromone table at each node on its return trip by
increasing the pheromone value in the entry whose resource ID and
neighbor ID fields match the ID of the discovered resource and
the previous hop, respectively. If such an entry does not exist, a
new entry is added into the table. Therefore, subsequent query
packets looking for the same resource, when encountering this
pheromone information, are then guided toward the same
destination with a small probability of taking an alternate path.
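A possible realization of this forwarding decision is sketched below in Python; the epsilon value used for the sporadic random walk and the table layout are illustrative assumptions, not parameters prescribed by the paper.

import random
from typing import Dict, List, Optional, Set

def choose_next_hop(pheromone: Dict[str, Dict[str, float]],
                    resource_id: str,
                    neighbors: List[str],
                    visited: Set[str],
                    epsilon: float = 0.05) -> Optional[str]:
    """Pick the neighbor to which a query packet is forwarded.

    The desirability of a neighbor is its pheromone value for the requested
    resource (zero if no table entry exists); neighbors with zero desirability
    still receive a small weight `epsilon` so that sporadic random walks can
    discover new or better routes. random.choices normalizes the weights.
    """
    candidates = [n for n in neighbors if n not in visited]
    if not candidates:
        return None
    weights = []
    for n in candidates:
        desirability = pheromone.get(resource_id, {}).get(n, 0.0)
        weights.append(desirability if desirability > 0.0 else epsilon)
    return random.choices(candidates, weights=weights, k=1)[0]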
Since the hybrid discovery scheme neither relies on a MANET
routing protocol nor arranges nodes into a logical overlay, query packets traverse the actual network topology. In dense
networks, relatively large nodal degrees can have potential impacts
on this random exploring mechanism. To address this issue, the
hybrid scheme also incorporates proactive advertisement in
addition to the reactive query. To perform proactive advertisement,
each node periodically deploys an advertising packet containing a
list of its available resources' IDs. These packets will traverse
away from the advertising node in a random walk manner up to a
limited number of hops and advertise resource information to
surrounding nodes in the same way as reply packets. In the hybrid
scheme, an increase of pheromone serves as a positive feedback
which indirectly guides query packets looking for similar
resources. Intuitively, the amount of pheromone increased is
inversely proportional to the distance the reply packet has traveled
back, and other metrics, such as quality of the resource, could
contribute to this amount as well. Each node also applies implicit negative feedback for resources that have not received positive feedback for some time by regularly decreasing the pheromone in all of its pheromone table entries over time. In
addition, pheromone can be reduced by an explicit negative response,
for instance, a reply packet returning from a node that is not
willing to provide a resource due to excessive workload. As a result,
load balancing can be achieved via positive and negative
feedback. A node serving too many nodes can either return fewer
responses to query packets or generate negative responses.
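The positive and negative feedback rules can be sketched as two small update functions; the deposit constant, decay factor and pruning floor below are illustrative tuning parameters introduced only for this example, not values given in the paper.

def deposit_pheromone(pheromone, resource_id, previous_hop, hops_travelled, base=1.0):
    """Positive feedback left by a returning reply (or advertisement) packet.

    The increment is inversely proportional to the distance the packet has
    travelled back from the resource provider.
    """
    table = pheromone.setdefault(resource_id, {})
    table[previous_hop] = table.get(previous_hop, 0.0) + base / max(1, hops_travelled)

def evaporate_pheromone(pheromone, decay=0.9, floor=1e-3):
    """Implicit negative feedback applied periodically at every node."""
    for resource_id in list(pheromone):
        for neighbor in list(pheromone[resource_id]):
            pheromone[resource_id][neighbor] *= decay
            if pheromone[resource_id][neighbor] < floor:
                del pheromone[resource_id][neighbor]   # forget stale routes entirely
        if not pheromone[resource_id]:
            del pheromone[resource_id]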
Figure 1: Example illustrating neighbor and pheromone tables maintained by node A: (a) wireless connectivity around A showing
that it currently has four neighbors, (b) A"s neighbor table, and (c) a possible pheromone table of A
Figure 2: Sample scenarios illustrating the three mechanisms supporting load-balancing: (a) resource replication, (b) resource
relocation, and (c) resource division
4. RESOURCE DISTRIBUTION
In addition to resource discovery, a querying node usually
attempts to access and retrieve the contents of a resource after a
successful discovery. In certain situations, it is also beneficial to
make a resource readily available at multiple nodes when the
resource can be relocated and/or replicated, such as data files.
Furthermore, in MANETs, we should consider not only the amount
of load handled by a resource provider, but also the load on those
intermediate nodes that are located on the communication paths
between the provider and other nodes as well. Hence, we describe
a cross-layer, hybrid resource distribution scheme to achieve load
balancing by incorporating the functionalities of resource
relocation, resource replication, and resource division.
4.1 Resource Replication
Multiple replicas of a resource in the network help prevent a
single node, as well as nodes surrounding it, from being overloaded
by a large number of requests and data transfers. For example, when a node has obtained a data file from another node, the requesting node and the intermediate nodes can cache the file and start sharing that file with other surrounding nodes right away. In
addition, replicable resources can also be proactively replicated at
other nodes which are located in certain strategic areas. For
instance, to help nodes find a resource quickly, we could replicate
the resource so that it becomes reachable by random walk for a
specific number of hops from any node with some probability, as
depicted in Figure 2(a).
To realize this feature, the hybrid resource distribution scheme
employs a different type of control packet, called resource
replication packet, which is responsible for finding an appropriate
place to create a replica of a resource. A resource replication
packet of type R is deployed by a node that is providing the
resource R itself. Unlike a query packet which follows higher
pheromone upstream toward a resource it is looking for, a
resource replication packet tends to be propelled away from similar
resources by moving itself downstream toward weaker
pheromone. When a resource replication packet finds itself in an area
with sufficiently low pheromone, it makes a decision whether it
should continue exploring or turn back. The decision depends on
conditions such as current workload and/or remaining energy of
the node being visited, as well as popularity of the resource itself.
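One way to realize the decision logic of a resource replication packet is sketched below; the thresholds on pheromone level, workload and remaining energy are hypothetical parameters chosen only to illustrate the behaviour described above.

def step_replication_packet(pheromone, resource_id, neighbors, visited,
                            local_load, local_energy,
                            pheromone_threshold=0.05,
                            load_threshold=0.7, energy_threshold=0.2):
    """One hop of a resource replication packet.

    Returns "replicate" when the packet finds itself in a sufficiently
    low-pheromone area and the current node can afford to host a replica,
    the next hop (toward weaker pheromone) to keep exploring, or None to
    turn back because the node is unsuitable or no unvisited neighbor exists.
    """
    local_level = sum(pheromone.get(resource_id, {}).values())
    if local_level <= pheromone_threshold:
        if local_load < load_threshold and local_energy > energy_threshold:
            return "replicate"
        return None
    candidates = [n for n in neighbors if n not in visited]
    if not candidates:
        return None
    # move downstream, toward the neighbor with the weakest pheromone trail
    return min(candidates, key=lambda n: pheromone.get(resource_id, {}).get(n, 0.0))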
4.2 Resource Relocation
In certain situations, a resource may need to be transferred from one node to another. For example, a node may no longer want to
possess a file due to the shortage of storage space, but it cannot
simply delete the file since other nodes may still need it in the
future. In this case, the node can choose to create replicas of the
file by the aforementioned resource replication mechanism and
then delete its own copy. Let us consider a situation where a
majority of nodes requesting for a resource are located far away from
a resource provider, as shown on the top of Figure 2(b). If the
resource R is relocatable, it is preferred to be relocated to another
area that is closer to those nodes, similar to the bottom of the
same figure. Hence network bandwidth is more efficiently
utilized.
The hybrid resource distribution scheme incorporates resource
relocation algorithms that are adaptive to user requests and aim to
reduce communication overhead. Specifically, by following the
same pheromone maintenance concept, the hybrid resource
distribution scheme introduces another type of pheromone which
corresponds to user requests, instead of resources. This type of
pheromone, called request pheromone, is set up by query packets that are in their exploring phase (not returning ones) to guide a resource to a new location.
4.3 Resource Division
Certain types of resources can be divided into smaller
subresources (e.g., a large file being broken into smaller files) and
distributed to multiple locations to avoid overloading a single
node, as depicted in Figure 2(c). The hybrid resource distribution
scheme incorporates a resource division mechanism that operates
at a thin layer right above all the other mechanisms described
earlier. The resource division mechanism is responsible for
decomposing divisible resources into sub-resources, and then adds
an extra keyword to distinguish each sub-resource from one
another. Therefore, each of these sub-resources will be seen by the
other mechanisms as one single resource which can be
independently discovered, replicated, and relocated. The resource division
mechanism is also responsible for combining data from these
subresources together (e.g., merging pieces of a file) and delivering
the final result to the application.
5. REPLICA INVALIDATION
Although replicas improve accessibility and balance load, replica
invalidation becomes a critical issue when nodes caching
updatable resources may concurrently update their own replicas,
which renders replicas held by other nodes obsolete. Most
existing solutions to the replica invalidation problem either impose the constraint that only the data source can perform updates and invalidate other replicas, or resort to network-wide flooding, which results in heavy network traffic and leads to a scalability problem, or both. The lack of infrastructure support and frequent topology
changes in MANETs further challenge the issue.
We apply the same cross-layer paradigm to invalidating replicas
in MANETs which allows concurrent updates performed by
multiple replicas. To coordinate concurrent updates and disseminate
replica invalidations, a special infrastructure, called validation
mesh or mesh for short, is adaptively maintained among nodes
possessing ‘valid" replicas of a resource. Once a node has updated
its replica, an invalidation packet will only be disseminated over
the validation mesh to inform other replica-possessing nodes that
their replicas become invalid and should be deleted. The structure
(topology) of the validation mesh keeps evolving (1) when nodes
request and cache a resource, (2) when nodes update their
respective replicas and invalidate other replicas, and (3) when nodes
move. To accommodate the dynamics, our scheme integrates the
components of swarm intelligence to adaptively maintain the
validation mesh without relying on any underlying MANET routing
protocol. In particular, the scheme takes into account concurrent
updates initiated by multiple nodes to ensure the consistency
among replicas. In addition, a version number is used to distinguish
new replicas from old ones when invalidating any stale replica.
Simulation results show that the proposed scheme effectively facilitates
concurrent replica updates and efficiently performs replica
invalidation without incurring network-wide flooding.
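As a small illustration of the version-number idea, the sketch below shows how a replica holder might react to invalidation packets and local updates; the class, fields, and the simple increment policy are assumptions, not the scheme's actual implementation.

```java
public class ReplicaState {
    private byte[] replica;   // locally cached copy (null if none)
    private long version;     // version of the local replica

    // Invoked when an invalidation packet arrives over the validation mesh.
    public synchronized void onInvalidation(long packetVersion) {
        if (replica != null && packetVersion > version) {
            replica = null;          // stale replica: delete it
            version = packetVersion;
        }
        // Invalidations carrying an older or equal version are ignored, so a
        // stale update cannot overwrite the effect of a newer one.
    }

    // Invoked when this node updates its own replica.
    public synchronized long onLocalUpdate(byte[] newData) {
        version = version + 1;       // a higher version distinguishes new from old replicas
        replica = newData;
        return version;              // carried in the outgoing invalidation packet
    }
}
```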
Figure 3 depicts the idea of the "validation mesh," which maintains
connectivity among nodes holding valid replicas of a resource to
avoid network-wide flooding when invalidating replicas.
Figure 3: Examples showing maintenance of validation mesh
There are eight nodes in the sample network, and we start with
only node A holding the valid file, as shown in Figure 3(a). Later
on, node G issues a query packet for the file and eventually
obtains the file from A via nodes B and D. Since intermediate nodes
are allowed to cache forwarded data, nodes B, D, and G will now
hold valid replicas of the file. As a result, a validation mesh is
established among nodes A, B, D, and G, as depicted in Figure
3(b). In Figure 3(c), another node, H, has issued a query packet
for the same file and obtained it from node B's cache via node E.
At this point, six nodes hold valid replicas and are connected
through the validation mesh. Now we assume node G updates its
replica of the file and informs the other nodes by sending an
invalidation packet over the validation mesh. Consequently, all
other nodes except G remove their replicas of the file from their
storage and the validation mesh is torn down. However, query
forwarding pheromone, denoted by the dotted arrows in Figure
3(d), is set up at these nodes along the "reverse paths" that the
invalidation packets have traversed, so that future requests for this
file will be forwarded to node G. In Figure 3(e), node H makes a
new request for the file again. This time, its query packet follows
the pheromone toward node G, where the updated file can be
obtained. Eventually, a new validation mesh is established over
nodes G, B, D, E, and H.
To maintain a validation mesh among the nodes holding valid
replicas, one of them is designated to be the focal node. Initially,
the node that originally holds the data is the focal node. As nodes
update replicas, the node that last (or most recently) updates a
corresponding replica assumes the role of focal node. We also
refer to nodes such as G and H, which originate requests for the
data, as clients, and to nodes such as B, D, and E, which locally cache
passing data, as data nodes. For instance, in Figures 3(a), 3(b), and 3(c),
node A is the focal node; in Figures 3(d), 3(e), and 3(f), node G
becomes the focal node. In addition, to accommodate newly
participating nodes and node mobility, the focal node periodically
floods the validation mesh with a keep-alive packet, so that nodes
that hear this packet consider themselves part of
the validation mesh. If a node holding a valid/updated replica
does not hear a keep-alive packet for a certain time interval, it will
deploy a search packet using the resource discovery mechanism
described in Section 3 to find another node (termed the attachment
point) currently on the validation mesh to which it can attach
itself. Once an attachment point is found, a search_reply packet is
returned to the disconnected node that originated the search.
Intermediate nodes that forward the search_reply packet
become part of the validation mesh as well. To illustrate the effect of
node mobility, in Figure 3(f), node H has moved to a location
where it is not directly connected to the mesh. Via the resource
discovery mechanism, node H relies on an intermediate node F to
connect itself to the mesh. Here node F, although part of the
validation mesh, does not hold a data replica and is hence termed a
non-data node.
Clients and data nodes that keep hearing the keep-alive packets
from the focal node act as if they hold a valid replica, so
that they can reply to query packets, as node B does in Figure 3(c)
when replying to a request from node H. While a disconnected node
is attempting to discover an attachment point to reattach itself to the
mesh, it cannot reply to a query packet. For
instance, in Figure 3(f), node H does not reply to any query packet
before it reattaches itself to the mesh.
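A minimal sketch of this keep-alive bookkeeping is shown below, assuming the KEEPALIVE_INTERVAL of 3 seconds from Table 1 and an illustrative three-interval timeout; the class and method names are not taken from the paper.

```java
public class MeshMembership {
    private static final long KEEPALIVE_INTERVAL_MS = 3_000;          // from Table 1
    private static final long TIMEOUT_MS = 3 * KEEPALIVE_INTERVAL_MS; // assumed timeout
    private volatile long lastKeepAliveMs = System.currentTimeMillis();

    // Keep-alive packets are periodically flooded over the mesh by the focal node.
    public void onKeepAlive() {
        lastKeepAliveMs = System.currentTimeMillis();
    }

    public boolean onMesh() {
        return System.currentTimeMillis() - lastKeepAliveMs <= TIMEOUT_MS;
    }

    // A node that is not on the mesh must not reply to query packets.
    public boolean mayAnswerQueries() {
        return onMesh();
    }

    // If disconnected, deploy a search packet (resource discovery) to find an attachment point.
    public void maybeReattach(Runnable sendSearchPacket) {
        if (!onMesh()) {
            sendSearchPacket.run();
        }
    }
}
```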
Although the validation mesh provides a conceptual topology that (1)
connects all replicas together, (2) coordinates concurrent updates,
and (3) disseminates invalidation packets, the technical issue is
how such a mesh topology can be effectively and efficiently
maintained and evolved when (a) nodes request and cache a
resource, (b) nodes update their respective replicas and
invalidate other replicas, and (c) nodes move. Without relying
on any MANET routing protocol, the two primitives work
together to facilitate efficient search and adaptive maintenance.
6. PERFORMANCE EVALUATION
We have conducted simulation experiments using the QualNet
simulator to evaluate the performance of the described resource
discovery, resource distribution, and replica invalidation schemes.
However, due to space limitations, only the performance of
replica invalidation is reported. In our experiments, eighty nodes
are uniformly distributed over a 1000 × 1000 m^2 terrain.
Each node has a communication range of approximately 250 m
over a 2 Mbps wireless channel, using IEEE 802.11 as the MAC
layer. We use the random-waypoint mobility model with a pause
time of 1 second. Nodes may move at minimum and maximum
speeds of 1 m/s and 5 m/s, respectively. Table 1 lists the other
parameter settings used in the simulation. Initially, there is one
resource server node in the network. Two nodes are randomly picked
every 10 seconds to act as clients. Every β seconds, we check the
number of nodes, N, that have obtained the data, and then randomly
pick Min(γ, N) of them to initiate a data update. Each
experiment is run for 10 minutes.
Table 1: Simulation Settings
HOP_LIMIT 10
ADVERTISE_HOP_LIMIT 1
KEEPALIVE_INTERVAL 3 seconds
NUM_SEARCH 1
ADVERTISE_INTERVAL 5 seconds
EXPIRATION_INTERVAL 10 seconds
Average Query Generation Rate 2 query/ 10 sec
Max # of Concurrent Update (γ) 2
Frequency of Update (β) 3s
We evaluate the performance under different mobility speeds, node
densities, maximum numbers of concurrent update nodes, and
update frequencies, using two metrics:
• Average overhead per update measures the average number of
packets transmitted per update in the network.
• Average delay per update measures how long our approach
takes to finish an update on average.
All figures shown present the results with a 70% confidence
interval.
Figure 4: Overhead vs. speed for 80 nodes. Figure 5: Overhead vs.
density. Figure 6: Overhead vs. max #concurrent updates. Figure 7:
Overhead vs. frequency. Figure 8: Delay vs. speed. Figure 9: Delay vs.
density. Figure 10: Delay vs. max #concurrent updates. Figure 11:
Delay vs. frequency.
Figures 4, 5, 6, and 7 show the overhead versus various parameter
values. In Figure 4, the overhead increases with speed because, as the
speed increases, nodes move out of the mesh more frequently and send
out more search packets. The overhead is nevertheless not high: even
at a speed of 10 m/s it remains below 100 packets, whereas more than
200 packets would be expected at various speeds if flooding were used.
Figure 5 shows that the overhead remains almost the same under
various densities. This is because invalidations are flooded only over
the mesh rather than the whole network, and the size of the mesh does
not vary much with density, so neither does the overhead.
Figure 6 shows that the overhead also remains almost the same for
different maximum numbers of concurrent updates. Each additional
updating node only adds one more flood over the mesh during the update
process, so the impact is limited.
Figure 7 shows that the overhead is higher when updates happen more
frequently. This is because, when updates happen more quickly, (1)
more keep-alive messages are sent over the mesh between two updates,
and (2) nodes move out of the mesh more frequently and send out more
search packets.
Figures 8, 9, 10, and 11 show the delay versus various parameter
values. Figure 8 shows that the delay increases as the speed
increases: with increasing speed, clients move out of the mesh with
higher probability, and when such a client wants to update the data it
must first spend time searching for the mesh. The faster the speed,
the more time clients need to spend searching for the mesh.
Figure 9 shows that the delay is only negligibly affected by density.
The delay decreases slightly as the number of nodes increases, because
with more nodes in the network, more nodes receive the advertisement
packets, which helps a search packet find its target and thus lowers
the update delay.
Figure 10 shows that the delay decreases slightly as the maximum
number of concurrent updates increases. The larger this maximum, the
more nodes are picked to perform updates, and with higher probability
one of them is still on the mesh and can finish its update immediately
(without searching for the mesh first), which decreases the delay.
Figure 11 shows how the delay varies with the update frequency. The
delay is higher when updates are spaced further apart: the longer the
interval between updates, the more time nodes on the mesh have to move
out of it, and they must then spend time searching for the mesh when
they do update, which increases the delay.
The simulation results show that the replica invalidation scheme
can significantly reduce the overhead with an acceptable delay.
7. CONCLUSION
To facilitate resource discovery and distribution over MANETs,
one approach is to design peer-to-peer (P2P) systems over
MANETs that construct an overlay by organizing the peers of the
system into a logical structure on top of the MANET's physical
topology. However, deploying an overlay over MANETs may result
in either a large number of flooding operations triggered by the
routing process, or inefficiency in terms of bandwidth usage.
Specifically, overlay routing relies on the network-layer routing
protocols. In the case of a reactive routing protocol, routing on the
overlay may cause a large number of flooded route discovery
messages, since the routing path in each routing step must be
discovered on demand. On the other hand, if a proactive routing
protocol is adopted, each peer has to periodically broadcast
control messages, which leads to poor efficiency in terms of
bandwidth usage. Either way, constructing an overlay will potentially
suffer from scalability problems. This paper describes a design
paradigm that uses the functional primitives of positive/negative
feedback and sporadic random walk to design robust resource
discovery and distribution schemes over MANETs. In particular,
the scheme offers the features of (1) cross-layer design of P2P
systems, which allows the routing process at both the P2P and the
network layers to be integrated to reduce overhead, (2) scalability
and mobility support, which minimizes the use of global flooding
operations and adaptively combines proactive resource
advertisement and reactive resource discovery, and (3) load balancing,
which facilitates resource replication, relocation, and division to
achieve load balancing.
| manet routing protocol;resource discovery;query packet;mobile ad-hoc network;manet p2p system;manet;neighbor discovery protocol;concurrent update;replica invalidation;invalidation packet;route discovery message;validation mesh;hybrid discovery scheme;negative feedback |
train_C-80 | Consistency-preserving Caching of Dynamic Database Content | With the growing use of dynamic web content generated from relational databases, traditional caching solutions for throughput and latency improvements are ineffective. We describe a middleware layer called Ganesh that reduces the volume of data transmitted without semantic interpretation of queries or results. It achieves this reduction through the use of cryptographic hashing to detect similarities with previous results. These benefits do not require any compromise of the strict consistency semantics provided by the back-end database. Further, Ganesh does not require modifications to applications, web servers, or database servers, and works with closed-source applications and databases. Using two benchmarks representative of dynamic web sites, measurements of our prototype show that it can increase end-to-end throughput by as much as twofold for non-data intensive applications and by as much as tenfold for data intensive ones. | 1. INTRODUCTION
An increasing fraction of web content is dynamically generated
from back-end relational databases. Even when database content
remains unchanged, temporal locality of access cannot be exploited
because dynamic content is not cacheable by web browsers or by
intermediate caching servers such as Akamai mirrors. In a
multitiered architecture, each web request can stress the WAN link
between the web server and the database. This causes user
experience to be highly variable because there is no caching to
insulate the client from bursty loads. Previous attempts in caching
dynamic database content have generally weakened transactional
semantics [3, 4] or required application modifications [15, 34].
We report on a new solution that takes the form of a
databaseagnostic middleware layer called Ganesh. Ganesh makes no effort
to semantically interpret the contents of queries or their results.
Instead, it relies exclusively on cryptographic hashing to detect
similarities with previous results. Hash-based similarity detection has
seen increasing use in distributed file systems [26, 36, 37] for
improving performance on low-bandwidth networks. However, these
techniques have not been used for relational databases. Unlike
previous approaches that use generic methods to detect similarity,
Ganesh exploits the structure of relational database results to yield
superior performance improvement.
One faces at least three challenges in applying hash-based
similarity detection to back-end databases. First, previous work in this
space has traditionally viewed storage content as uninterpreted bags
of bits with no internal structure. This allows hash-based
techniques to operate on long, contiguous runs of data for maximum
effectiveness. In contrast, relational databases have rich internal
structure that may not be as amenable to hash-based similarity
detection. Second, relational databases have very tight integrity and
consistency constraints that must not be compromised by the use
of hash-based techniques. Third, the source code of commercial
databases is typically not available. This is in contrast to previous
work which presumed availability of source code.
Our experiments show that Ganesh, while conceptually simple,
can improve performance significantly at bandwidths
representative of today's commercial Internet. On benchmarks modeling
multitiered web applications, the throughput improvement was as high
as tenfold for data-intensive workloads. For workloads that were
not data-intensive, throughput improvements of up to twofold were
observed. Even when bandwidth was not a constraint, Ganesh had
low overhead and did not hurt performance. Our experiments also
confirm that exploiting the structure present in database results is
crucial to this performance improvement.
2. BACKGROUND
2.1 Dynamic Content Generation
As the World Wide Web has grown, many web sites have
decentralized their data and functionality by pushing them to the edges
of the Internet. Today, eBusiness systems often use a three-tiered
architecture consisting of a front-end web server, an application
server, and a back-end database server. Figure 1 illustrates this
architecture. The first two tiers can be replicated close to a
concentration of clients at the edge of the Internet. This improves user
experience by lowering end-to-end latency and reducing exposure
Figure 1: Multi-Tier Architecture (front-end web and application
servers and a back-end database server).
to backbone traffic congestion. It can also increase the availability
and scalability of web services.
Content that is generated dynamically from the back-end database
cannot be cached in the first two tiers. While databases can be
easily replicated in a LAN, this is infeasible in a WAN because of
the difficult task of simultaneously providing strong consistency,
availability, and tolerance to network partitions [7]. As a result,
databases tend to be centralized to meet the strong consistency
requirements of many eBusiness applications such as banking,
finance, and online retailing [38]. Thus, the back-end database is
usually located far from many sets of first and second-tier nodes [2].
In the absence of both caching and replication, WAN bandwidth
can easily become a limiting factor in the performance and
scalability of data-intensive applications.
2.2 Hash-Based Systems
Ganesh's focus is on efficient transmission of results by
discovering similarities with the results of previous queries. As SQL queries
can generate large results, hash-based techniques lend themselves
well to the problem of efficiently transferring these large results
across bandwidth-constrained links.
The use of hash-based techniques to reduce the volume of data
transmitted has emerged as a common theme of many recent
storage systems, as discussed in Section 8.2. These techniques rely
on some basic assumptions. Cryptographic hash functions are
assumed to be collision-resistant. In other words, it is
computationally intractable to find two inputs that hash to the same output. The
functions are also assumed to be one-way; that is, finding an
input that results in a specific output is computationally infeasible.
Menezes et al. [23] provide more details about these assumptions.
The above assumptions allow hash-based systems to assume that
collisions do not occur. Hence, they are able to treat the hash of a
data item as its unique identifier. A collection of data items
effectively becomes content-addressable, allowing a small hash to serve
as a codeword for a much larger data item in permanent storage or
network transmission.
The assumption that collisions are so rare as to be effectively
non-existent has recently come under fire [17]. However, as
explained by Black [5], we believe that these issues do not pose a
concern for Ganesh. All communication is between trusted parts
of the system and an adversary has no way to force Ganesh to
accept invalid data. Further, Ganesh does not depend critically on any
specific hash function. While we currently use SHA-1, replacing it
with a different hash function would be simple. There would be
no impact on performance as stronger hash functions (e.g.,
SHA-256) only add a few extra bytes and the generated hashes are still
orders of magnitude smaller than the data items they represent. No
re-hashing of permanent storage is required since Ganesh only uses
hashing on volatile data.
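To illustrate hash-based content addressing concretely, the following minimal Java sketch computes a SHA-1 digest that can stand in as a compact identifier for a much larger data item; the class name and sample row string are illustrative only.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ContentAddress {
    public static String sha1Hex(byte[] data) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-1").digest(data);
            StringBuilder sb = new StringBuilder(digest.length * 2);
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();   // 20-byte digest, far smaller than most data items
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-1 not available", e);
        }
    }

    public static void main(String[] args) {
        byte[] row = "42|alice|2007-01-15|...".getBytes(StandardCharsets.UTF_8);
        System.out.println(sha1Hex(row));  // identifier used in place of the data item
    }
}
```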
3. DESIGN AND IMPLEMENTATION
Ganesh exploits redundancy in the result stream to avoid
transmitting result fragments that are already present at the query site.
Redundancy can arise naturally in many different ways. For
example, a query repeated after a certain interval may return a different
result because of updates to the database; however, there may be
significant commonality between the two results. As another
example, a user who is refining a search may generate a sequence
of queries with overlapping results. When Ganesh detects
redundancy, it suppresses transmission of the corresponding result
fragments. Instead, it transmits a much smaller digest of those
fragments and lets the query site reconstruct the result through hash
lookup in a cache of previous results. In effect, Ganesh uses
computation at the edges to reduce Internet communication.
Our description of Ganesh focuses on four aspects. We first
explain our approach to detecting similarity in query results. Next,
we discuss how the Ganesh architecture is completely invisible to
all components of a multi-tier system. We then describe Ganesh's
proxy-based approach and the dataflow for detecting similarity.
3.1 Detecting Similarity
One of the key design decisions in Ganesh is how similarity is
detected. There are many potential ways to decompose a result into
fragments. The optimal way is, of course, the one that results in the
smallest possible transmitted object for a given query's results.
Finding this optimal decomposition is a difficult problem because
of the large space of possibilities and because the optimal choice
depends on many factors such as the contents of the query's result,
the history of recent results, and the cache management algorithm.
When an object is opaque, the use of Rabin fingerprints [8, 30]
to detect common data between two objects has been successfully
demonstrated in the past by systems such as LBFS [26] and CASPER [37].
Rabin fingerprinting uses a sliding window over the data to
compute a rolling hash. Assuming that the hash function is uniformly
distributed, a chunk boundary is defined whenever the lower order
bits of the hash value equal some predetermined value. The
number of lower order bits used defines the average chunk size. These
sub-divided chunks of the object become the unit of comparison for
detecting similarity between different objects.
As the locations of boundaries found by using Rabin fingerprints
is stochastically determined, they usually fail to align with any
structural properties of the underlying data. The algorithm
therefore deals well with in-place updates, insertions and deletions.
However, it performs poorly in the presence of any reordering of data.
Figure 2 shows an example where two results, A and B,
consisting of three rows, have the same data but have different sort
attributes. In the extreme case, Rabin fingerprinting might be unable
to find any similar data due to the way it detects chunk boundaries.
Fortunately, Ganesh can use domain specific knowledge for more
precise boundary detection. The information we exploit is that a
query's result reflects the structure of a relational database where
all data is organized as tables and rows. It is therefore simple to
check for similarity with previous results at two granularities: first
the entire result, and then individual rows. The end of a row in a
result serves as a natural chunk boundary. It is important to note that
using the tabular structure in results only involves shallow
interpretation of the data. Ganesh does not perform any deeper semantic
interpretation such as understanding data types, result schema, or
integrity constraints.
Tuning Rabin fingerprinting for a workload can also be difficult.
If the average chunk size is too large, chunks can span multiple
result rows. However, selecting a smaller average chunk size
increases the amount of metadata required to describe the results.
Figure 2: Rabin Fingerprinting vs. Ganesh's Chunking
This, in turn, would decrease the savings obtained via its use.
Rabin fingerprinting also needs two computationally-expensive passes
over the data: once to determine chunk boundaries and one again to
generate cryptographic hashes for the chunks. Ganesh only needs
a single pass for hash generation as the chunk boundaries are
provided by the data's natural structure.
The performance comparison in Section 6 shows that Ganesh's
row-based algorithm outperforms Rabin fingerprinting. Given that
previous work has already shown that Rabin fingerprinting
performs better than gzip [26], we do not compare Ganesh to
compression algorithms in this paper.
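The following is a minimal sketch of this structural, row-based chunking, assuming each result row has already been serialized to bytes; the helper and method names are illustrative rather than Ganesh's actual classes.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;

public class RowChunker {
    private static String sha1Hex(byte[] data) {
        try {
            StringBuilder sb = new StringBuilder();
            for (byte b : MessageDigest.getInstance("SHA-1").digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // One chunk per serialized result row: boundaries come from the result's
    // structure, so a single hashing pass suffices (no boundary-detection pass).
    public static List<String> hashRows(List<byte[]> serializedRows) {
        List<String> rowHashes = new ArrayList<>(serializedRows.size());
        for (byte[] row : serializedRows) {
            rowHashes.add(sha1Hex(row));
        }
        return rowHashes;
    }

    // A digest over the concatenated row hashes stands in for the whole result,
    // letting similarity be checked first at the entire-result granularity.
    public static String hashWholeResult(List<String> rowHashes) {
        return sha1Hex(String.join("", rowHashes).getBytes());
    }
}
```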
3.2 Transparency
The key factor influencing our design was the need for Ganesh
to be completely transparent to all components of a typical
eBusiness system: web servers, application servers, and database servers.
Without this, Ganesh stands little chance of having a significant
real-world impact. Requiring modifications to any of the above
components would raise the barrier for entry of Ganesh into an
existing system, and thus reduce its chances of adoption. Preserving
transparency is simplified by the fact that Ganesh is purely a
performance enhancement, not a functionality or usability enhancement.
We chose agent interposition as the architectural approach to
realizing our goal. This approach relies on the existence of a compact
programming interface that is already widely used by target
software. It also relies on a mechanism to easily add new code without
disrupting existing module structure.
These conditions are easily met in our context because of the
popularity of Java as the programming language for eBusiness
systems. The Java Database Connectivity (JDBC) API [32] allows
Java applications to access a wide variety of databases and even
other tabular data repositories such as flat files. Access to these
data sources is provided by JDBC drivers that translate between
the JDBC API and the database communication mechanism.
Figure 3(a) shows how JDBC is typically used in an application.
As the JDBC interface is standardized, one can substitute one
JDBC driver for another without application modifications. The
JDBC driver thus becomes the natural module to exploit for code
interposition. As shown in Figure 3(b), the native JDBC driver is
replaced with a Ganesh JDBC driver that presents the same
standardized interface. The Ganesh driver maintains an in-memory
cache of result fragments from previous queries and performs
reassembly of results. At the database, we add a new process called
the Ganesh proxy. This proxy, which can be shared by multiple
front-end nodes, consists of two parts: code to detect similarity
in result fragments and the original native JDBC driver that
communicates with the database. The use of a proxy at the database
makes Ganesh database-agnostic and simplifies prototyping and
experimentation. Ganesh is thus able to work with a wide range
of databases and applications, requiring no modifications to either.
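The sketch below illustrates why interposition at the JDBC layer is transparent: application code like the following is unchanged whether the native or the Ganesh driver is loaded, since only the driver class and JDBC URL in the deployment configuration differ. The DAO class, table name, and the "jdbc:ganesh://..." URL are hypothetical examples, not artifacts of the actual system.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class StoryDao {
    // e.g., "jdbc:mysql://db/site" with the native driver, or a hypothetical
    // "jdbc:ganesh://proxy/site" when the Ganesh driver is interposed.
    private final String jdbcUrl;

    public StoryDao(String jdbcUrl) {
        this.jdbcUrl = jdbcUrl;
    }

    public int countStories() throws SQLException {
        try (Connection c = DriverManager.getConnection(jdbcUrl);
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery("SELECT COUNT(*) FROM stories")) {
            rs.next();
            return rs.getInt(1);   // identical ResultSet API in both configurations
        }
    }
}
```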
3.3 Proxy-Based Caching
Figure 3: Native vs. Ganesh Architecture. (a) Native Architecture: the
web and application server's native JDBC driver communicates with the
database over the WAN. (b) Ganesh's Interposition-based Architecture:
a Ganesh JDBC driver at the web and application server communicates
over the WAN with the Ganesh proxy, which uses the native JDBC driver
to reach the database.
The native JDBC driver shown in Figure 3(a) is a lightweight
code component supplied by the database vendor. Its main
function is to mediate communication between the application and the
remote database. It forwards queries, buffers entire results, and
responds to application requests to view parts of results.
The Ganesh JDBC driver shown in Figure 3(b) presents the
application with an interface identical to that provided by the native
driver. It provides the ability to reconstruct results from compact
hash-based descriptions sent by the proxy. To perform this
reconstruction, the driver maintains an in-memory cache of
recently received results. This cache is only used as a source of result
fragments in reconstructing results. No attempt is made by the Ganesh
driver or proxy to track database updates. The lack of cache
consistency does not hurt correctness as a description of the results is
always fetched from the proxy - at worst, there will be no
performance benefit from using Ganesh. Stale data will simply be paged
out of the cache over time.
The Ganesh proxy accesses the database via the native JDBC
driver, which remains unchanged between Figures 3(a) and (b).
The database is thus completely unaware of the existence of the
proxy. The proxy does not examine any queries received from
the Ganesh driver but passes them to the native driver. Instead,
the proxy is responsible for inspecting database output received
from the native driver, detecting similar results, and generating
hash-based encodings of these results whenever enough similarity
is found. While this architecture does not decrease the load on a
database, as mentioned earlier in Section 2.1, it is much easier to
replicate databases for scalability in a LAN than in a WAN.
To generate a hash-based encoding, the proxy must be aware of
what result fragments are available in the Ganesh driver's cache.
One approach is to be optimistic, and to assume that all result
fragments are available. This will result in the smallest possible initial
transmission of a result. However, in cases where there is little
overlap with previous results, the Ganesh driver will have to make
many calls to the proxy during reconstruction to fetch missing
result fragments. To avoid this situation, the proxy loosely tracks the
state of the Ganesh driver's cache. Since both components are
under our control, it is relatively simple to do this without resorting
to gray-box techniques or explicit communication for maintaining
cache coherence. Instead, the proxy simulates the Ganesh driver's
cache management algorithm and uses this to maintain a list of
hashes for which the Ganesh driver is likely to possess the result
fragments. In case of mistracking, there will be no loss of
correctness but there will be extra round-trip delays to fetch the missing
fragments. If the client detects loss of synchronization with the
proxy, it can ask the proxy to reset the state shared between them.
Also note that the proxy does not need to keep the result fragments
themselves, only their hashes. This allows the proxy to remain
scalable even when it is shared by many front-end nodes.
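A minimal sketch of how the proxy could mirror the driver's cache state is given below. It assumes a simple bounded-LRU policy over fragment hashes only; the class name, capacity handling, and the LRU choice are illustrative assumptions rather than Ganesh's actual cache management algorithm.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DriverCacheMirror {
    private final int capacity;
    private final Map<String, Boolean> hashes;  // access-ordered LRU of SHA-1 hex strings

    public DriverCacheMirror(int capacity) {
        this.capacity = capacity;
        this.hashes = new LinkedHashMap<String, Boolean>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                return size() > DriverCacheMirror.this.capacity;  // evict as the driver would
            }
        };
    }

    // Apply the same update the driver would make when it caches a fragment.
    public void noteFragmentCached(String sha1Hex) {
        hashes.put(sha1Hex, Boolean.TRUE);
    }

    // Used while encoding: if likely present, send the hash instead of the fragment.
    public boolean likelyPresent(String sha1Hex) {
        return hashes.get(sha1Hex) != null;  // get() also refreshes LRU order, mirroring a hit
    }

    // Invoked when the client reports loss of synchronization with the proxy.
    public void reset() {
        hashes.clear();
    }
}
```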
Figure 4: Dataflow for Result Handling. At the Ganesh proxy, a
GaneshOutputStream converts each ResultSet into a recipe (or passes
all data through) before object serialization onto the network; at the
Ganesh JDBC driver, a GaneshInputStream converts received recipes back
into ResultSets.
3.4 Encoding and Decoding Results
The Ganesh proxy receives database output as Java objects from
the native JDBC driver. It examines this output to see if a Java
object of type ResultSet is present. The JDBC interface uses
this data type to store results of database queries. If a ResultSet
object is found, it is shrunk as discussed below. All other Java
objects are passed through unmodified.
As discussed in Section 3.1, the proxy uses the row boundaries
defined in the ResultSet to partition it into fragments
consisting of single result rows. All ResultSet objects are converted
into objects of a new type called RecipeResultSet. We use
the term recipe for this compact description of a database
result because of its similarity to a file recipe in the CASPER file
system [37]. The conversion replaces each result fragment that is
likely to be present in the Ganesh driver's cache by a SHA-1 hash
of that fragment. Previously unseen result fragments are retained
verbatim. The proxy also retains hashes for the new result
fragments as they will be present in the driver's cache in the future.
Note that the proxy only caches hashes for result fragments and
does not cache recipes.
The proxy constructs a RecipeResultSet by checking for
similarity at the entire result and then the row level. If the entire
result is predicted to be present in the Ganesh driver's cache, the
RecipeResultSet is simply a single hash of the entire result.
Otherwise, it contains hashes for those rows predicted to be present
in that cache; all other rows are retained verbatim. If the proxy
estimates an overall space savings, it will transmit the
RecipeResultSet. Otherwise the original ResultSet is transmitted.
The RecipeResultSet objects are transformed back into
ResultSet objects by the Ganesh driver. Figure 4 illustrates
ResultSet handling at both ends. Each SHA-1 hash found in a
RecipeResultSet is looked up in the local cache of result
fragments. On a hit, the hash is replaced by the corresponding
fragment. On a miss, the driver contacts the Ganesh proxy to fetch the
fragment. All previously unseen result fragments that were retained
verbatim by the proxy are hashed and added to the result cache.
There should be very few misses if the proxy has accurately
tracked the Ganesh driver's cache state. A future optimization would
be to batch the fetch of missing fragments. This would be valuable
when there are many small missing fragments in a high-latency
WAN. Once the transformation is complete, the fully reconstructed
ResultSet object is passed up to the application.
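The condensed sketch below shows one way the recipe encoding and reconstruction could be expressed, assuming rows have already been serialized and hashed to hex SHA-1 strings; the RowRecipe type and method names are illustrative, not Ganesh's actual classes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

public class RecipeCodec {
    // One entry per result row: either a hash reference or the verbatim row bytes.
    public static final class RowRecipe {
        final String hash;      // non-null when the row is replaced by its SHA-1 hex digest
        final byte[] verbatim;  // non-null when the row is shipped in full
        RowRecipe(String hash, byte[] verbatim) { this.hash = hash; this.verbatim = verbatim; }
    }

    // Proxy side: rows whose hashes are predicted to be cached at the driver
    // are replaced by those hashes; everything else is kept verbatim.
    public static List<RowRecipe> encode(List<byte[]> rows, List<String> rowHashes,
                                         Set<String> predictedDriverCache) {
        List<RowRecipe> recipe = new ArrayList<>(rows.size());
        for (int i = 0; i < rows.size(); i++) {
            String h = rowHashes.get(i);
            recipe.add(predictedDriverCache.contains(h)
                       ? new RowRecipe(h, null)
                       : new RowRecipe(null, rows.get(i)));
            predictedDriverCache.add(h);  // the driver will cache this fragment from now on
        }
        return recipe;
    }

    // Driver side: rebuild rows from the local fragment cache, contacting the
    // proxy only for hashes that miss (rare when tracking is accurate).
    public static List<byte[]> decode(List<RowRecipe> recipe, Map<String, byte[]> fragmentCache,
                                      Function<String, byte[]> fetchFromProxy) {
        List<byte[]> rows = new ArrayList<>(recipe.size());
        for (RowRecipe r : recipe) {
            if (r.verbatim != null) {
                rows.add(r.verbatim);  // the real driver also hashes and caches this fragment
            } else {
                byte[] row = fragmentCache.get(r.hash);
                rows.add(row != null ? row : fetchFromProxy.apply(r.hash));
            }
        }
        return rows;
    }
}
```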
4. EXPERIMENTAL VALIDATION
Three questions follow from the goals and design of Ganesh:
• First, can performance be improved significantly by
exploiting similarity across database results?
• Second, how important is Ganesh's structural similarity
detection relative to Rabin fingerprinting's similarity detection?
• Third, is the overhead of the proxy-based design acceptable?
Table 1: Benchmark Dataset Details
Benchmark   Dataset   Details
BBOARD      2.0 GB    500,000 Users, 12,000 Stories, 3,298,000 Comments
AUCTION     1.3 GB    1,000,000 Users, 34,000 Items
Our evaluation answers these questions through controlled
experiments with the Ganesh prototype. This section describes the
benchmarks used, our evaluation procedure, and the experimental setup.
Results of the experiments are presented in Sections 5, 6, and 7.
4.1 Benchmarks
Our evaluation is based on two benchmarks [18] that have been
widely used by other researchers to evaluate various aspects of
multi-tier [27] and eBusiness architectures [9]. The first
benchmark, BBOARD, is modeled after Slashdot, a technology-oriented
news site. The second benchmark, AUCTION, is modeled after
eBay, an online auction site. In both benchmarks, most content is
dynamically generated from information stored in a database.
Details of the datasets used can be found in Table 1.
4.1.1 The BBOARD Benchmark
The BBOARD benchmark, also known as RUBBoS [18],
models Slashdot, a popular technology-oriented web site. Slashdot
aggregates links to news stories and other topics of interest found
elsewhere on the web. The site also serves as a bulletin board by
allowing users to comment on the posted stories in a threaded
conversation form. It is not uncommon for a story to gather hundreds
of comments in a matter of hours. The BBOARD benchmark is
similar to the site and models the activities of a user, including
read-only operations such as browsing the stories of the day, browsing
story categories, and viewing comments as well as write operations
such as new user registration, adding and moderating comments,
and story submission.
The benchmark consists of three different phases: a short
warmup phase, a runtime phase representing the main body of the
workload, and a short cool-down phase. In this paper we only report
results from the runtime phase. The warm-up phase is important
in establishing dynamic system state, but measurements from that
phase are not significant for our evaluation. The cool-down phase
is solely for allowing the benchmark to shut down.
The warm-up, runtime, and cool-down phases are 2, 15, and 2
minutes, respectively. The number of simulated clients was 400,
800, 1200, or 1600. The benchmark is available in Java Servlets
and PHP versions and has different datasets; we evaluated Ganesh
using the Java Servlets version and the Expanded dataset.
The BBOARD benchmark defines two different workloads. The
first, the Authoring mix, consists of 70% read-only operations and
30% read-write operations. The second, the Browsing mix,
contains only read-only operations and does not update the database.
4.1.2 The AUCTION Benchmark
The AUCTION benchmark, also known as RUBiS [18], models
eBay, the online auction site. The eBay web site is used to buy
and sell items via an auction format. The main activities of a user
include browsing, selling, or bidding for items. Modeling the
activities on this site, this benchmark includes read-only activities such
as browsing items by category and by region, as well as read-write
activities such as bidding for items, buying and selling items, and
leaving feedback.
Figure 5: Experimental Setup (the clients connect to the web and
application server, which communicates with the Ganesh proxy and the
database server through a NetEm router that emulates the WAN link).
As with BBOARD, the benchmark consists of three different phases.
The warm-up, runtime, and cool-down phases for this experiment
are 1.5, 15, and 1 minutes, respectively. We tested Ganesh with
four client configurations where the number of test clients was set
to 400, 800, 1200, or 1600. The benchmark is available in
Enterprise Java Bean (EJB), Java Servlets, and PHP versions and has
different datasets; we evaluated Ganesh with the Java Servlets
version and the Expanded dataset.
The AUCTION benchmark defines two different workloads. The
first, the Bidding mix, consists of 70% read-only operations and
30% read-write operations. The second, the Browsing mix,
contains only read-only operations and does not update the database.
4.2 Experimental Procedure
Both benchmarks involve a synthetic workload of clients
accessing a web server. The number of clients emulated is an
experimental parameter. Each emulated client runs an instance of the
benchmark in its own thread, using a matrix to transition between
different benchmark states. The matrix defines a stochastic model
with probabilities of transitioning between the different states that
represent typical user actions. An example transition is a user
logging into the AUCTION system and then deciding on whether to
post an item for sale or bid on active auctions. Each client also
models user think time between requests. The think time is
modeled as an exponential distribution with a mean of 7 seconds.
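As a small illustration of this client model, the sketch below samples an exponentially distributed think time with a 7-second mean using inverse-transform sampling; the class and variable names are ours, not the benchmark's.

```java
import java.util.Random;

public class ThinkTime {
    private static final double MEAN_SECONDS = 7.0;
    private final Random rng = new Random();

    public double nextThinkTimeSeconds() {
        // Inverse CDF of the exponential distribution with rate 1/mean.
        return -MEAN_SECONDS * Math.log(1.0 - rng.nextDouble());
    }

    public static void main(String[] args) throws InterruptedException {
        ThinkTime t = new ThinkTime();
        double pause = t.nextThinkTimeSeconds();
        Thread.sleep((long) (pause * 1000));  // wait before issuing the next request
    }
}
```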
We evaluate Ganesh along two axes: number of clients and WAN
bandwidth. Higher loads are especially useful in understanding
Ganesh's performance when the CPU or disk of the database server
or proxy is the limiting factor. A previous study has shown that
approximately 50% of the wide-area Internet bottlenecks observed
had an available bandwidth under 10 Mb/s [1]. Based on this work,
we focus our evaluation on the WAN bandwidth of 5 Mb/s with
66 ms of round-trip latency, representative of severely constrained
network paths, and 20 Mb/s with 33 ms of round-trip latency,
representative of a moderately constrained network path. We also report
Ganesh's performance at 100 Mb/s with no added round-trip
latency. This bandwidth, representative of an unconstrained network,
is especially useful in revealing any potential overhead of Ganesh
in situations where WAN bandwidth is not the limiting factor. For
each combination of number of clients and WAN bandwidth, we
measured results from the two configurations listed below:
• Native: This configuration corresponds to Figure 3(a).
Native avoids Ganesh's overhead in using a proxy and
performing Java object serialization.
• Ganesh: This configuration corresponds to Figure 3(b). For
a given number of clients and WAN bandwidth, comparing
these results to the corresponding Native results gives the
performance benefit due to the Ganesh middleware system.
The metric used to quantify the improvement in throughput is
the number of client requests that can be serviced per second. The
metric used to quantify Ganesh's overhead is the average response
time for a client request. For all of the experiments, the Ganesh
driver used by the application server used a cache size of 100,000
items (see Footnote 1, below). The proxy was effective in tracking the Ganesh driver's
cache state; for all of our experiments the miss rate on the driver
never exceeded 0.7%.
4.3 Experimental Setup
The experimental setup used for the benchmarks can be seen in
Figure 5. All machines were 3.2 GHz Pentium 4s (with
HyperThreading enabled.) With the exception of the database server, all
machines had 2 GB of SDRAM and ran the Fedora Core Linux
distribution. The database server had 4 GB of SDRAM.
We used Apache's Tomcat as both the application server that
hosted the Java Servlets and the web server. Both benchmarks
used Java Servlets to generate the dynamic content. The database
server used the open source MySQL database. For the native JDBC
drivers, we used the Connector/J drivers provided by MySQL. The
application server used Sun's Java Virtual Machine as the runtime
environment for the Java Servlets. The sysstat tool was used to
monitor the CPU, network, disk, and memory utilization on all
machines.
The machines were connected by a switched gigabit Ethernet
network. As shown in Figure 5, the front-end web and
application server was separated from the proxy and database server by a
NetEm router [16]. This router allowed us to control the bandwidth
and latency settings on the network. The NetEm router is a
standard PC with two network cards running the Linux Traffic Control
and Network Emulation software. The bandwidth and latency
constraints were only applied to the link between the application server
and the database for the native case and between the application
server and the proxy for the Ganesh case. There is no
communication between the application server and the database with Ganesh
as all data flows through the proxy. As our focus was on the WAN
link between the application server and the database, there were no
constraints on the link between the simulated test clients and the
web server.
5. THROUGHPUT AND RESPONSE TIME
In this section, we address the first question raised in Section 4:
Can performance be improved significantly by exploiting
similarity across database results? To answer this question, we use
results from the BBOARD and AUCTION benchmarks. We use
two metrics to quantify the performance improvement obtainable
through the use of Ganesh: throughput, from the perspective of the
web server, and average response time, from the perspective of the
client. Throughput is measured in terms of the number of client
requests that can be serviced per second.
5.1 BBOARD Results and Analysis
5.1.1 Authoring Mix
Figures 6 (a) and (b) present the average number of requests
serviced per second and the average response time for these requests
as perceived by the clients for BBOARD's Authoring Mix.
As Figure 6 (a) shows, Native easily saturates the 5 Mb/s link.
At 400 clients, the Native solution delivers 29 requests/sec with an
average response time of 8.3 seconds. Native's throughput drops
as test clients are added because clients time out due to
congestion at the application server. Usability studies have shown that
response times above 10 seconds cause the user to move on to
other tasks [24].
Footnote 1: As Java lacks a sizeof() operator, Java caches limit their
size based on the number of objects. The size of cache dumps taken at
the end of the experiments never exceeded 212 MB.
Figure 6: BBOARD Benchmark - Throughput and Average Response Time.
Panels: (a) Throughput, Authoring Mix; (b) Response Time, Authoring
Mix; (c) Throughput, Browsing Mix; (d) Response Time, Browsing Mix.
Axes: requests/sec and average response time (sec, log scale) versus
test clients at 5, 20, and 100 Mb/s for Native and Ganesh. Mean of
three trials; the maximum standard deviation for throughput and
response time was 9.8% and 11.9% of the corresponding mean.
Based on these numbers, increasing the
number of test clients makes the Native system unusable. Ganesh at
5 Mb/s, however, delivers a twofold improvement with 400 test
clients and a fivefold improvement at 1200 clients. Ganesh's
performance drops slightly at 1200 and 1600 clients as the network is
saturated. Compared to Native, Figure 6 (b) shows that Ganesh's
response times are substantially lower with sub-second response
times at 400 clients.
Figure 6 (a) also shows that for 400 and 800 test clients Ganesh
at 5 Mb/s has the same throughput and average response time as
Native at 20 Mb/s. Only at 1200 and 1600 clients does Native at 20
Mb/s deliver higher throughput than Ganesh at 5 Mb/s.
Comparing both Ganesh and Native at 20 Mb/s, we see that
Ganesh is no longer bandwidth constrained and delivers up to a
twofold improvement over Native at 1600 test clients. As Ganesh
does not saturate the network with higher test client configurations,
at 1600 test clients, its average response time is 0.1 seconds rather
than Native's 7.7 seconds.
As expected, there are no visible gains from Ganesh at the higher
bandwidth of 100 Mb/s where the network is no longer the
bottleneck. Ganesh, however, still tracks Native in terms of throughput.
5.1.2 Browsing Mix
Figures 6 (c) and (d) present the average number of requests
serviced per second and the average response time for these requests
as perceived by the clients for BBOARD's Browsing Mix.
Regardless of the test client configuration, Figure 6 (c) shows
that Native's throughput at 5 Mb/s is limited to 10 reqs/sec. Ganesh
at 5 Mb/s with 400 test clients delivers more than a sixfold
increase in throughput. The improvement grows to over
elevenfold at 800 test clients before Ganesh saturates the
network. Further, Figure 6 (d) shows that Native's average response
time of 35 seconds at 400 test clients makes the system unusable.
These high response times further increase with the addition of test
clients. Even with the 1600 test client configuration Ganesh
delivers an acceptable average response time of 8.2 seconds.
Due to the data-intensive nature of the Browsing mix, Ganesh at
5 Mb/s surprisingly performs much better than Native at 20 Mb/s.
Further, as shown in Figure 6 (d), while the average response time
for Native at 20 Mb/s is acceptable at 400 test clients, it is unusable
with 800 test clients with an average response time of 15.8 seconds.
Like the 5 Mb/s case, this response time increases with the addition
of extra test clients.
Ganesh at 20 Mb/s and both Native and Ganesh at 100 Mb/s are
not bandwidth limited. However, performance plateaus after
1200 test clients because the database CPU is saturated.
5.1.3 Filter Variant
We were surprised by the Native performance from the BBOARD
benchmark. At the bandwidth of 5 Mb/s, Native performance was
lower than what we had expected. It turned out that the benchmark
code that displays stories reads all the comments associated with
a particular story from the database and only then performs some
post-processing to select the comments to be displayed. While this is
exactly the behavior of SlashCode, the code base behind the
Slashdot web site, we decided to modify the benchmark to perform some
pre-filtering at the database. This modified benchmark, named the
Filter Variant, models a developer who applies optimizations at the
SQL level to transfer less data. In the interests of brevity, we only
briefly summarize the results from the Authoring mix.
For the Authoring mix, at 800 test clients at 5 Mb/s, Figure 7 (a)
shows that Native's throughput increases by 85% when compared
to the original benchmark, while Ganesh's improvement is smaller
at 15%. Native's performance drops above 800 clients as the test
clients time out due to high response times. The most significant
gain for Native is seen at 20 Mb/s. At 1600 test clients, when
compared to the original benchmark, Native sees a 73% improvement
in throughput and a 77% reduction in average response time.
Figure 7: BBOARD Benchmark - Filter Variant - Throughput and Average
Response Time. Panels: (a) Throughput, Authoring Mix; (b) Response
Time, Authoring Mix. Axes: requests/sec and average response time
(sec, log scale) versus test clients at 5, 20, and 100 Mb/s for Native
and Ganesh. Mean of three trials; the maximum standard deviation for
throughput and response time was 7.2% and 11.5% of the corresponding
mean.
While Ganesh sees no improvement when compared to the original, it still
processes 19% more requests/sec than Native. Thus, while the
optimizations were more helpful to Native, Ganesh still delivers an
improvement in performance.
5.2 AUCTION Results and Analysis
5.2.1 Bidding Mix
Figures 8 (a) and (b) present the average number of requests
serviced per second and the average response time for these requests
as perceived by the clients for AUCTION's Bidding Mix. As
mentioned earlier, the Bidding mix consists of a mixture of read and
write operations.
The AUCTION benchmark is not as data intensive as BBOARD.
Therefore, most of the gains are observed at the lower bandwidth
of 5 Mb/s. Figure 8 (a) shows that the increase in throughput due
to Ganesh ranges from 8% at 400 test clients to 18% with 1600
test clients. As seen in Figure 8 (b), the average response times for
Ganesh are significantly lower than Native's, ranging from a decrease
of 84% at 800 test clients to 88% at 1600 test clients.
Figure 8 (a) also shows that with a fourfold increase of
bandwidth from 5 Mb/s to 20 Mb/s, Native is no longer bandwidth
constrained and there is no performance difference between Ganesh
and Native. With the higher test client configurations, we did
observe that the bandwidth used by Ganesh was lower than Native's.
Ganesh might still be useful in these non-constrained scenarios if
bandwidth is purchased on a metered basis. Similar results are seen
for the 100 Mb/s scenario.
5.2.2 Browsing Mix
For AUCTION's Browsing Mix, Figures 8 (c) and (d) present the
average number of requests serviced per second and the average
response time for these requests as perceived by the clients.
Again, most of the gains are observed at lower bandwidths. At 5
Mb/s, Native and Ganesh deliver similar throughput and response
times with 400 test clients. While the throughput for both remains
the same at 800 test clients, Figure 8 (d) shows that Ganesh's
average response time is 62% lower than Native's. Native saturates the
link at 800 clients and adding extra test clients only increases the
average response time. Ganesh, regardless of the test client
configuration, is not bandwidth constrained and maintains the same
response time. At 1600 test clients, Figure 8 (c) shows that Ganesh's
throughput is almost twice that of Native.
At the higher bandwidths of 20 and 100 Mb/s, neither Ganesh
nor Native is bandwidth limited, and they deliver equivalent throughput
and response times.
Benchmark Orig. Size Ganesh Size Rabin Size
SelectSort1 223.6 MB 5.4 MB 219.3 MB
SelectSort2 223.6 MB 5.4 MB 223.6 MB
Table 2: Similarity Microbenchmarks
6. STRUCTURAL VS. RABIN SIMILARITY
In this section, we address the second question raised in
Section 4: How important is Ganesh's structural similarity
detection relative to Rabin fingerprinting-based similarity detection? To
answer this question, we used microbenchmarks and the BBOARD and
AUCTION benchmarks. As Ganesh always performed better than
Rabin fingerprinting, we only present a subset of the results here in
the interests of brevity.
6.1 Microbenchmarks
Two microbenchmarks show an example of the effects of data
reordering on the Rabin fingerprinting algorithm. In the first
microbenchmark, SelectSort1, a query with a specified sort order selects
223.6 MB of data spread over approximately 280 K rows. The
query is then repeated with a different sort attribute. While the
same number of rows and the same data is returned, the order of
rows is different. In such a scenario, one would expect a large
amount of similarity to be detected between both results. As
Table 2 shows, Ganesh's row-based algorithm achieves a 97.6%
reduction while the Rabin fingerprinting algorithm, with the average
chunk size parameter set to 4 KB, only achieves a 1% reduction.
The reason, as shown earlier in Figure 2, is that with Rabin
fingerprinting, the spans of data between two consecutive boundaries
usually cross row boundaries. With the order of the rows changing
in the second result and the Rabin fingerprints now spanning
different rows, the algorithm is unable to detect significant similarity.
The small gain seen is mostly for those single rows that are large
enough to be broken into multiple chunks.
SelectSort2, another microbenchmark, executed the same queries
but increased the minimum chunk size of the Rabin fingerprinting
algorithm. As can be seen in Table 2, even the small gain from the
previous microbenchmark disappears as the minimum chunk size
was greater than the average row size. While one can partially
address these problems by dynamically varying the parameters of the
Rabin fingerprinting algorithm, this can be computationally
expensive, especially in the presence of changing workloads.
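The tiny example below illustrates why per-row hashing is insensitive to sort order: the set of row hashes is identical for two differently sorted versions of the same result, so nearly every row of the re-sorted result can be found in the cache. The row contents and class name are made up for illustration.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SortOrderDemo {
    static String sha1Hex(String row) throws Exception {
        byte[] d = MessageDigest.getInstance("SHA-1").digest(row.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : d) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        List<String> byId   = Arrays.asList("1|carol|...", "2|alice|...", "3|bob|...");
        List<String> byName = Arrays.asList("2|alice|...", "3|bob|...", "1|carol|...");
        Set<String> h1 = new HashSet<>(), h2 = new HashSet<>();
        for (String r : byId)   h1.add(sha1Hex(r));
        for (String r : byName) h2.add(sha1Hex(r));
        // true: per-row hashes match regardless of ordering, unlike chunks
        // computed over the concatenated byte stream.
        System.out.println(h1.equals(h2));
    }
}
```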
6.2 Application Benchmarks
We ran the BBOARD benchmark described in Section 4.1.1 on
two versions of Ganesh: the first with Rabin fingerprinting used as
the chunking algorithm and the second with Ganesh's row-based
algorithm. Rabin's results for the Browsing Mix are normalized to
Ganesh's results and presented in Figure 9.
Figure 8: AUCTION Benchmark - Throughput and Average Response Time.
Panels: (a) Throughput, Bidding Mix; (b) Response Time, Bidding Mix;
(c) Throughput, Browsing Mix; (d) Response Time, Browsing Mix. Axes:
requests/sec and average response time (sec, log scale) versus test
clients at 5, 20, and 100 Mb/s for Native and Ganesh. Mean of three
trials; the maximum standard deviation for throughput and response
time was 2.2% and 11.8% of the corresponding mean.
As Figure 9 (a) shows, at 5 Mb/s, independent of the test client
configuration, Rabin significantly underperforms Ganesh. This
happens for a combination of two reasons. First, as outlined
in Section 3.1, Rabin finds less similarity because it does not exploit
the result's structural information. Second, this benchmark
contained some queries that generated large results. In this case,
Rabin, with a small average chunk size, generated a large number of
objects that evicted other useful data from the cache. In contrast,
Ganesh was able to detect these large rows and correspondingly
increase the size of the chunks. This was confirmed as cache
statistics showed that Ganesh"s hit ratio was roughly three time that of
Rabin. Throughput measurements at 20 Mb/s were similar with
the exception of Rabin"s performance with 400 test clients. In this
case, Ganesh was not network limited and, in fact, the throughput
was the same as 400 clients at 5 Mb/s. Rabin, however, took
advantage of the bandwidth increase from 5 to 20 Mb/s to deliver a
slightly better performance. At 100 Mb/s, Rabin"s throughput was
almost similar to Ganesh as bandwidth was no longer a bottleneck.
The normalized response time, presented in Figure 9 (b), shows
similar trends. At 5 and 20 Mb/s, the addition of test clients
decreases the normalized response time because Ganesh's average
response time increases faster than Rabin's. However, at no point does
Rabin outperform Ganesh. Note that at 400 and 800 clients at 100 Mb/s,
Rabin does have a higher overhead even when it is not bandwidth
constrained. As mentioned in Section 3.1, this is due to the fact that
Rabin has to hash each ResultSet twice. The overhead
disappears with 1200 and 1600 clients as the database CPU is saturated
and limits the performance of both Ganesh and Rabin.
7. PROXY OVERHEAD
In this section, we address the third question raised in Section 4:
Is the overhead of Ganesh's proxy-based design acceptable? To
answer this question, we concentrate on its performance at the higher
bandwidths. Our evaluation in Section 5 showed that Ganesh, when
compared to Native, can deliver a substantial throughput
improvement at lower bandwidths. It is only at higher bandwidths that
overheads in latency, measured by the average response time for a
client request, and in throughput, measured by the number of client
requests that can be serviced per second, would become visible.
Looking at the Authoring mix of the original BBOARD
benchmark, there are no visible gains from Ganesh at 100 Mb/s. Ganesh,
however, still tracks Native in terms of throughput. While the
average response time is higher for Ganesh, the absolute difference is
between 0.01 and 0.04 seconds and would be imperceptible to
the end-user. The Browsing mix shows an even smaller difference
in average response times. The results from the filter variant of the
BBOARD benchmarks are similar. Even for the AUCTION
benchmark, the difference between Native and Ganesh's response time at
100 Mb/s was never greater than 0.02 seconds. The only exception
to the above results was seen in the filter variant of the BBOARD
benchmark where Ganesh at 1600 test clients added 0.85 seconds
to the average response time. Thus, even for much faster networks
where the WAN link is not the bottleneck, Ganesh always delivers
throughput equivalent to Native. While some extra latency is added
by the proxy-based design, it is usually imperceptible.
8. RELATED WORK
To the best of our knowledge, Ganesh is the first system that
combines the use of hash-based techniques with caching of database
results to improve throughput and response times for applications
with dynamic content. We also believe that it is the first
system to demonstrate the benefits of using structural information for
[Figure 9: Normalized Comparison of Ganesh vs. Rabin - BBOARD Browsing Mix. Panels: (a) Normalized Throughput (higher is better); (b) Normalized Response Time (higher is worse), for 400-1600 test clients at 5, 20, and 100 Mb/s. For throughput, a normalized result greater than 1 implies that Rabin is better; for response time, a normalized result greater than 1 implies that Ganesh is better. Mean of three trials; the maximum standard deviation for throughput and response time was 9.1% and 13.9% of the corresponding mean.]
detecting similarity. In this section, we first discuss alternative
approaches to caching dynamic content and then examine other uses
of hash-based primitives in distributed systems.
8.1 Caching Dynamic Content
At the database layer, a number of systems have advocated middle-tier
caching where parts of the database are replicated at the edge or the
server [3, 4, 20]. These systems either cache entire tables in what
is essentially a replicated database or use materialized views from
previous query replies [19]. They require tight integration with the
back-end database to ensure a time bound on the propagation of
updates. These systems are also usually targeted towards
workloads that do not require strict consistency and can tolerate stale
data. Further, unlike Ganesh, some of these mid-tier caching
solutions [2, 3] suffer from the complexity of having to participate in
query planning and distributed query processing.
Gao et al. [15] propose using a distributed object replication
architecture where the data store's consistency requirements are
adapted on a per-application basis. These solutions require
substantial developer resources and detailed understanding of the
application being modified. While systems that attempt to automate
the partitioning and replication of an application's database
exist [34], they do not provide full transaction semantics. In
comparison, Ganesh does not weaken any of the semantics provided by
the underlying database.
Recent work in the evaluation of edge caching options for
dynamic web sites [38] has suggested that, without careful planning,
employing complex offloading strategies can hurt performance.
Instead, the work advocates for an architecture in which all tiers
except the database should be offloaded to the edge. Our evaluation of
Ganesh has shown that it would benefit these scenarios. To improve
database scalability, C-JDBC [10], SSS [22], and Ganymed [28]
also advocate the use of an interposition-based architecture to
transparently cluster and replicate databases at the middleware level.
The approaches of these architectures and Ganesh are
complementary and they would benefit each other.
Moving up to the presentation layer, there has been widespread
adoption of fragment-based caching [14], which improves cache
utilization by separately caching different parts of generated web
pages. While fragment-based caching works at the edge, a recent
proposal moves web page assembly to the clients to
optimize content delivery [31]. While Ganesh is not used at the
presentation layer, the same principles have been applied in Duplicate
Transfer Detection [25] to increase web cache efficiency as well as
for web access across bandwidth limited links [33].
8.2 Hash-based Systems
The past few years have seen the emergence of many systems
that exploit hash-based techniques. At the heart of all these
systems is the idea of detecting similarity in data without requiring
interpretation of that data. This simple yet elegant idea relies on
cryptographic hashing, as discussed earlier in Section 2. Successful
applications of this idea span a wide range of storage systems.
Examples include peer-to-peer backup of personal computing files [11],
storage-efficient archiving of data [29], and finding similar files [21].
Spring and Wetherall [35] apply similar principles at the network
level. Using synchronized caches at both ends of a network link,
duplicated data is replaced by smaller tokens for transmission and
then restored at the remote end. This and other hash-based systems
such as the CASPER [37] and LBFS [26] filesystems, and Layer-2
bandwidth optimizers such as Riverbed and Peribit use Rabin
fingerprinting [30] to discover spans of commonality in data. This
approach is especially useful when data items are modified in-place
through insertions, deletions, and updates. However, as Section 6
shows, the performance of this technique can show a dramatic drop
in the presence of data reordering. Ganesh instead uses row
boundaries as dividers for detecting similarity.
The most aggressive use of hash-based techniques is by systems
that use hashes as the primary identifiers for objects in persistent
storage. Storage systems such as CFS [12] and PAST [13] that
have been built using distributed hash tables fall into this category.
Single Instance Storage [6] and Venti [29] are other examples of
such systems. As discussed in Section 2.2, the use of cryptographic
hashes for addressing persistent data represents a deeper level of
faith in their collision-resistance than that assumed by Ganesh. If
time reveals shortcomings in the hash algorithm, the effort involved
in correcting the flaw is much greater. In Ganesh, it is merely a
matter of replacing the hash algorithm.
9. CONCLUSION
The growing use of dynamic web content generated from
relational databases places increased demands on WAN bandwidth.
Traditional caching solutions for bandwidth and latency reduction
are often ineffective for such content. This paper shows that the
impact of WAN accesses to databases can be substantially reduced
through the Ganesh architecture without any compromise of the
database's strict consistency semantics. The essence of the Ganesh
architecture is the use of computation at the edges to reduce
communication through the Internet. Ganesh is able to use
cryptographic hashes to detect similarity with previous results and send
compact recipes of results rather than full results. Our design uses
interposition to achieve complete transparency: clients, application
servers, and database servers are all unaware of Ganesh's presence
and require no modification.
Our experimental evaluation confirms that Ganesh, while
conceptually simple, can be highly effective in improving throughput
and response time. Our results also confirm that exploiting the
structure present in database results to detect similarity is crucial
to this performance improvement.
10. REFERENCES
[1] AKELLA, A., SESHAN, S., AND SHAIKH, A. An empirical
evaluation of wide-area internet bottlenecks. In Proc. 3rd
ACM SIGCOMM Conference on Internet Measurement
(Miami Beach, FL, USA, Oct. 2003), pp. 101-114.
[2] ALTINEL, M., BORNHÖVD, C., KRISHNAMURTHY, S.,
MOHAN, C., PIRAHESH, H., AND REINWALD, B. Cache
tables: Paving the way for an adaptive database cache. In
Proc. of 29th VLDB (Berlin, Germany, 2003), pp. 718-729.
[3] ALTINEL, M., LUO, Q., KRISHNAMURTHY, S., MOHAN,
C., PIRAHESH, H., LINDSAY, B. G., WOO, H., AND
BROWN, L. Dbcache: Database caching for web application
servers. In Proc. 2002 ACM SIGMOD (2002), pp. 612-612.
[4] AMIRI, K., PARK, S., TEWARI, R., AND PADMANABHAN,
S. Dbproxy: A dynamic data cache for web applications. In
Proc. IEEE International Conference on Data Engineering
(ICDE) (Mar. 2003).
[5] BLACK, J. Compare-by-hash: A reasoned analysis. In Proc.
2006 USENIX Annual Technical Conference (Boston, MA,
May 2006), pp. 85-90.
[6] BOLOSKY, W. J., CORBIN, S., GOEBEL, D., AND
DOUCEUR, J. R. Single instance storage in Windows 2000.
In Proc. 4th USENIX Windows Systems Symposium (Seattle,
WA, Aug. 2000), pp. 13-24.
[7] BREWER, E. A. Lessons from giant-scale services. IEEE
Internet Computing 5, 4 (2001), 46-55.
[8] BRODER, A., GLASSMAN, S., MANASSE, M., AND
ZWEIG, G. Syntactic clustering of the web. In Proc. 6th
International WWW Conference (1997).
[9] CECCHET, E., CHANDA, A., ELNIKETY, S.,
MARGUERITE, J., AND ZWAENEPOEL, W. Performance
comparison of middleware architectures for generating
dynamic web content. In Proc. Fourth ACM/IFIP/USENIX
International Middleware Conference (Rio de Janeiro,
Brazil, June 2003).
[10] CECCHET, E., MARGUERITE, J., AND ZWAENEPOEL, W.
C-JDBC: Flexible database clustering middleware. In Proc.
2004 USENIX Annual Technical Conference (Boston, MA,
June 2004).
[11] COX, L. P., MURRAY, C. D., AND NOBLE, B. D. Pastiche:
Making backup cheap and easy. In OSDI: Symposium on
Operating Systems Design and Implementation (2002).
[12] DABEK, F., KAASHOEK, M. F., KARGER, D., MORRIS,
R., AND STOICA, I. Wide-area cooperative storage with
CFS. In 18th ACM Symposium on Operating Systems
Principles (Banff, Canada, Oct. 2001).
[13] DRUSCHEL, P., AND ROWSTRON, A. PAST: A large-scale,
persistent peer-to-peer storage utility. In HotOS VIII (Schloss
Elmau, Germany, May 2001), pp. 75-80.
[14] Edge side includes. http://www.esi.org.
[15] GAO, L., DAHLIN, M., NAYATE, A., ZHENG, J., AND
IYENGAR, A. Application specific data replication for edge
services. In WWW '03: Proc. Twelfth International
Conference on World Wide Web (2003), pp. 449-460.
[16] HEMMINGER, S. Netem - emulating real networks in the lab.
In Proc. 2005 Linux Conference Australia (Canberra,
Australia, Apr. 2005).
[17] HENSON, V. An analysis of compare-by-hash. In Proc. 9th
Workshop on Hot Topics in Operating Systems (HotOS IX)
(May 2003), pp. 13-18.
[18] Jmob benchmarks. http://jmob.objectweb.org/.
[19] LABRINIDIS, A., AND ROUSSOPOULOS, N. Balancing
performance and data freshness in web database servers. In
Proc. 29th VLDB Conference (Sept. 2003).
[20] LARSON, P.-A., GOLDSTEIN, J., AND ZHOU, J.
Transparent mid-tier database caching in sql server. In Proc.
2003 ACM SIGMOD (2003), pp. 661-661.
[21] MANBER, U. Finding similar files in a large file system. In
Proc. USENIX Winter 1994 Technical Conference (San
Francisco, CA, 17-21 1994), pp. 1-10.
[22] MANJHI, A., AILAMAKI, A., MAGGS, B. M., MOWRY,
T. C., OLSTON, C., AND TOMASIC, A. Simultaneous
scalability and security for data-intensive web applications.
In Proc. 2006 ACM SIGMOD (June 2006), pp. 241-252.
[23] MENEZES, A. J., VANSTONE, S. A., AND OORSCHOT, P.
C. V. Handbook of Applied Cryptography. CRC Press, 1996.
[24] MILLER, R. B. Response time in man-computer
conversational transactions. In Proc. AFIPS Fall Joint
Computer Conference (1968), pp. 267-277.
[25] MOGUL, J. C., CHAN, Y. M., AND KELLY, T. Design,
implementation, and evaluation of duplicate transfer
detection in http. In Proc. First Symposium on Networked
Systems Design and Implementation (San Francisco, CA,
Mar. 2004).
[26] MUTHITACHAROEN, A., CHEN, B., AND MAZIERES, D. A
low-bandwidth network file system. In Proc. 18th ACM
Symposium on Operating Systems Principles (Banff, Canada,
Oct. 2001).
[27] PFEIFER, D., AND JAKSCHITSCH, H. Method-based
caching in multi-tiered server applications. In Proc. Fifth
International Symposium on Distributed Objects and
Applications (Catania, Sicily, Italy, Nov. 2003).
[28] PLATTNER, C., AND ALONSO, G. Ganymed: Scalable
replication for transactional web applications. In Proc. 5th
ACM/IFIP/USENIX International Conference on
Middleware (2004), pp. 155-174.
[29] QUINLAN, S., AND DORWARD, S. Venti: A new approach
to archival storage. In Proc. FAST 2002 Conference on File
and Storage Technologies (2002).
[30] RABIN, M. Fingerprinting by random polynomials. In
Harvard University Center for Research in Computing
Technology Technical Report TR-15-81 (1981).
[31] RABINOVICH, M., XIAO, Z., DOUGLIS, F., AND
KALMANEK, C. Moving edge side includes to the real edge
- the clients. In Proc. 4th USENIX Symposium on Internet
Technologies and Systems (Seattle, WA, Mar. 2003).
[32] REESE, G. Database Programming with JDBC and Java,
1st ed. O'Reilly, June 1997.
[33] RHEA, S., LIANG, K., AND BREWER, E. Value-based web
caching. In Proc. Twelfth International World Wide Web
Conference (May 2003).
[34] SIVASUBRAMANIAN, S., ALONSO, G., PIERRE, G., AND
VAN STEEN, M. Globedb: Autonomic data replication for
web applications. In WWW '05: Proc. 14th International
World-Wide Web Conference (May 2005).
[35] SPRING, N. T., AND WETHERALL, D. A
protocol-independent technique for eliminating redundant
network traffic. In Proc. of ACM SIGCOMM (Aug. 2000).
[36] TOLIA, N., HARKES, J., KOZUCH, M., AND
SATYANARAYANAN, M. Integrating portable and distributed
storage. In Proc. 3rd USENIX Conference on File and
Storage Technologies (San Francisco, CA, Mar. 2004).
[37] TOLIA, N., KOZUCH, M., SATYANARAYANAN, M., KARP,
B., PERRIG, A., AND BRESSOUD, T. Opportunistic use of
content addressable storage for distributed file systems. In
Proc. 2003 USENIX Annual Technical Conference (San
Antonio, TX, June 2003), pp. 127-140.
[38] YUAN, C., CHEN, Y., AND ZHANG, Z. Evaluation of edge
caching/offloading for dynamic content delivery. In WWW
"03: Proc. Twelfth International Conference on World Wide
Web (2003), pp. 461-471.
320 | natural chunk boundary;read-write operation;bboard benchmark;content addressable storage;redundancy;relational database system;database cache;relational database;reciperesultset;temporal locality;caching dynamic database content;jdbc driver;bandwidth optimization;wide area network;database content;proxy;resultset object;hash-based technique |
train_C-81 | Adaptive Duty Cycling for Energy Harvesting Systems | Harvesting energy from the environment is feasible in many applications to ameliorate the energy limitations in sensor networks. In this paper, we present an adaptive duty cycling algorithm that allows energy harvesting sensor nodes to autonomously adjust their duty cycle according to the energy availability in the environment. The algorithm has three objectives, namely (a) achieving energy neutral operation, i.e., energy consumption should not be more than the energy provided by the environment, (b) maximizing the system performance based on an application utility model subject to the above energyneutrality constraint, and (c) adapting to the dynamics of the energy source at run-time. We present a model that enables harvesting sensor nodes to predict future energy opportunities based on historical data. We also derive an upper bound on the maximum achievable performance assuming perfect knowledge about the future behavior of the energy source. Our methods are evaluated using data gathered from a prototype solar energy harvesting platform and we show that our algorithm can utilize up to 58% more environmental energy compared to the case when harvesting-aware power management is not used. | 1. INTRODUCTION
Energy supply has always been a crucial issue in designing
battery-powered wireless sensor networks because the lifetime and
utility of the systems are limited by how long the batteries are able to
sustain the operation. The fidelity of the data produced by a sensor
network begins to degrade once sensor nodes start to run out of
battery power. Therefore, harvesting energy from the environment
has been proposed to supplement or completely replace battery
supplies to enhance system lifetime and reduce the maintenance cost
of replacing batteries periodically.
However, metrics for evaluating energy harvesting systems are
different from those used for battery powered systems.
Environmental energy is distinct from battery energy in two ways.
First it is an inexhaustible supply which, if appropriately used, can
allow the system to last forever, unlike the battery which is a limited
resource. Second, there is an uncertainty associated with its
availability and measurement, compared to the energy stored in the
battery which can be known deterministically. Thus, power
management methods based on battery status are not always
applicable to energy harvesting systems. In addition, most power
management schemes designed for battery-powered systems only
account for the dynamics of the energy consumers (e.g., CPU, radio)
but not the dynamics of the energy supply. Consequently, battery
powered systems usually operate at the lowest performance level that
meets the minimum data fidelity requirement in order to maximize
the system life. Energy harvesting systems, on the other hand, can
provide enhanced performance depending on the available energy.
In this paper, we study how to adapt the performance of the system to the
available energy profile. There exist many techniques to accomplish
performance scaling at the node level, such as radio transmit power
adjustment [1], dynamic voltage scaling [2], and the use of low
power modes [3]. However, these techniques require hardware
support and may not always be available on resource constrained
sensor nodes. Alternatively, a common performance scaling
technique is duty cycling. Low power devices typically provide at
least one low power mode in which the node is shut down and the
power consumption is negligible. In addition, the rate of duty cycling
is directly related to system performance metrics such as network
latency and sampling frequency. We will use duty cycle adjustment
as the primitive performance scaling technique in our algorithms.
2. RELATED WORK
Energy harvesting has been explored for several different types
of systems, such as wearable computers [4], [5], [6], sensor networks
[7], etc. Several technologies to extract energy from the environment
have been demonstrated including solar, motion-based, biochemical,
vibration-based [8], [9], [10], [11], and others are being developed
[12], [13]. While several energy harvesting sensor node platforms
have been prototyped [14], [15], [16], there is a need for systematic
power management techniques that provide performance guarantees
during system operation. The first work to take environmental
energy into account for data routing was [17], followed by [18].
While these works did demonstrate that environment-aware
decisions improve performance compared to battery-aware decisions,
their objective was not to achieve energy neutral operation. Our
proposed techniques attempt to maximize system performance while
maintaining energy-neutral operation.
3. SYSTEM MODEL
The energy usage considerations in a harvesting system vary
significantly from those in a battery powered system, as mentioned
earlier. We propose the model shown in Figure 1 for designing
energy management methods in a harvesting system. The functions
of the various blocks shown in the figure are discussed below. The
precise methods used in our system to achieve these functions will
be discussed in subsequent sections.
Harvested Energy Tracking: This block represents the mechanisms
used to measure the energy received from the harvesting device,
such as the solar panel. Such information is useful for determining
the energy availability profile and adapting system performance
based on it. Collecting this information requires that the node
hardware be equipped with the facility to measure the power
generated from the environment, and the Heliomote platform [14]
we used for evaluating the algorithms has this capability.
Energy Generation Model: For wireless sensor nodes with limited
storage and processing capabilities to be able to use the harvested
energy data, models that represent the essential components of this
information without using extensive storage are required. The
purpose of this block is to provide a model for the energy available
to the system in a form that may be used for making power
management decisions. The data measured by the energy tracking
block is used here to predict future energy availability. A good
prediction model should have a low prediction error and provide
predicted energy values for durations long enough to make
meaningful performance scaling decisions. Further, for energy
sources that exhibit both long-term and short-term patterns (e.g.,
diurnal and climate variations vs. weather patterns for solar energy),
the model must be able to capture both characteristics. Such a model
can also use information from external sources such as local weather
forecast service to improve its accuracy.
Energy Consumption Model: It is also important to have detailed
information about the energy usage characteristics of the system, at
various performance levels. For general applicability of our design,
we will assume that only one sleep mode is available. We assume
that the power consumption in the sleep and active modes is known.
It may be noted that for low power systems with more advanced
capabilities such as dynamic voltage scaling (DVS), multiple low
power modes, and the capability to shut down system components
selectively, the power consumption in each of the states and the
resultant effect on application performance should be known to make
power management decisions.
Energy Storage Model: This block represents the model for the
energy storage technology. Since all the generated energy may not be
used instantaneously, the harvesting system will usually have some
energy storage technology. Storage technologies (e.g., batteries and
ultra-capacitors) are non-ideal, in that there is some energy loss while
storing and retrieving energy from them. These characteristics must be
known to efficiently manage energy usage and storage. This block also
includes the system capability to measure the residual stored energy.
Most low power systems use batteries to store energy and provide
residual battery status. This is commonly based on measuring the
battery voltage which is then mapped to the residual battery energy
using the known charge to voltage relationship for the battery
technology in use. More sophisticated methods which track the flow of
energy into and out of the battery are also available.
Harvesting-aware Power Management: The inputs provided by the
previously mentioned blocks are used here to determine the suitable
power management strategy for the system. Power management
could be carried out to meet different objectives in different applications.
For instance, in some systems, the harvested energy may marginally
supplement the battery supply and the objective may be to maximize
the system lifetime. A more interesting case is when the harvested
energy is used as the primary source of energy for the system with
the objective of achieving indefinitely long system lifetime. In such
cases, the power management objective is to achieve energy neutral
operation. In other words, the system should only use as much
energy as harvested from the environment and attempt to maximize
performance within this available energy budget.
4. THEORETICALLY OPTIMAL POWER
MANAGEMENT
We develop the following theory to understand the energy
neutral mode of operation. Let us define Ps(t) as the energy harvested
from the environment at time t, and the energy being consumed by
the load at that time is Pc(t). Further, we model the non-ideal storage
buffer by its round-trip efficiency η (strictly less than 1) and a
constant leakage power Pleak. Using this notation, applying the rule
of energy conservation leads to the following inequality:
B_0 + \eta \int_0^T [P_s(t) - P_c(t)]^+ \, dt - \int_0^T [P_c(t) - P_s(t)]^+ \, dt - \int_0^T P_{leak} \, dt \;\ge\; 0   (1)
where B_0 is the initial battery level and the function [X]^+ = X if X > 0, and zero otherwise.
DEFINITION 1 (ρ,σ1,σ2) function: A non-negative, continuous and
bounded function P (t) is said to be a (ρ,σ1,σ2) function if and only if
for any value of finite real number T , the following are satisfied:
\rho T - \sigma_2 \;\le\; \int_0^T P(t) \, dt \;\le\; \rho T + \sigma_1   (2)
This function can be used to model both energy sources and loads.
If the harvested energy profile Ps(t) is a (ρ1,σ1,σ2) function, then the
average rate of available energy over long durations becomes ρ1, and
the burstiness is bounded by σ1 and σ2 . Similarly, Pc(t) can be modeled
as a (ρ2, σ3) function, where ρ2 and σ3 place an upper bound
on power consumption (the inequality on the right side) while there is
no minimum power consumption constraint.
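As a rough illustration of the definition (not part of the paper), the parameters of a (ρ, σ1, σ2) characterization can be estimated from a sampled power trace; the quadratic window scan below is purely for clarity and assumes a fixed sampling interval dt:

```python
import numpy as np

def rho_sigma_bounds(power, dt):
    # rho: long-term average power; sigma1/sigma2: the largest observed
    # deviations of windowed energy above/below rho*T over all windows.
    energy = np.cumsum(power) * dt
    t = np.arange(1, len(power) + 1) * dt
    rho = energy[-1] / t[-1]
    sigma1 = sigma2 = 0.0
    for i in range(len(power)):
        e0 = energy[i - 1] if i > 0 else 0.0
        t0 = t[i - 1] if i > 0 else 0.0
        dev = (energy[i:] - e0) - rho * (t[i:] - t0)
        sigma1 = max(sigma1, float(dev.max()))
        sigma2 = max(sigma2, float((-dev).max()))
    return rho, sigma1, sigma2
```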
The condition for energy neutrality, equation (1), leads to the
following theorem, based on the energy production, consumption, and
energy buffer models discussed above.
THEOREM 1 (ENERGY NEUTRAL OPERATION): Consider
a harvesting system in which the energy production profile is
characterized by a (ρ1, σ1, σ2) function, the load is characterized by
a (ρ2, σ3) function and the energy buffer is characterized by
parameters η for storage efficiency, and Pleak for leakage power. The
following conditions are sufficient for the system to achieve energy
neutrality:
ρ2 ≤ ηρ1 − Pleak (3)
B0 ≥ ησ2 + σ3 (4)
B ≥ B0 (5)
where B0 is the initial energy stored in the buffer and provides a
lower bound on the capacity of the energy buffer B. The proof is
presented in our prior work [19].
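For reference, the three sufficient conditions of Theorem 1 reduce to simple inequalities; the sketch below checks them with made-up, purely illustrative numbers rather than measured parameters:

```python
def energy_neutral(rho1, sigma2, rho2, sigma3, eta, p_leak, b0, b_cap):
    cond3 = rho2 <= eta * rho1 - p_leak   # (3) average consumption within harvested budget
    cond4 = b0 >= eta * sigma2 + sigma3   # (4) enough initial energy to ride out bursts
    cond5 = b_cap >= b0                   # (5) buffer can hold the required initial energy
    return cond3 and cond4 and cond5

print(energy_neutral(rho1=30.0, sigma2=200.0, rho2=18.0, sigma3=50.0,
                     eta=0.7, p_leak=0.1, b0=250.0, b_cap=1800.0))   # True
```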
To adjust the duty cycle D using our performance scaling
algorithm, we assume the following relation between duty cycle and
the perceived utility of the system to the user: Suppose the utility of
the application to the user is represented by U(D) when the system
operates at a duty cycle D. Then,
U(D) = 0,   if D < D_{min}
U(D) = k_1 D + \beta,   if D_{min} \le D \le D_{max}
U(D) = k_2,   if D > D_{max}
This is a fairly general and simple model and the specific values of
Dmin and Dmax may be determined as per application requirements. As
an example, consider a sensor node designed to detect intrusion across
a periphery. In this case, a linear increase in duty cycle translates into a
linear increase in the detection probability. The fastest and the slowest
speeds of the intruders may be known, leading to a minimum and
[Figure 1. System model for an energy harvesting system, comprising Harvested Energy Tracking, Energy Generation Model, Energy Consumption Model, Energy Storage Model, and Harvesting-aware Power Management blocks driving the load.]
maximum sensing delay tolerable, which results in the relevant Dmax
and Dmin for the sensor node. While there may be cases where the
relationship between utility and duty cycle is non-linear, in this
paper we restrict our focus to applications that follow this linear
model. In view of the above models for the system components and
the required performance, the objective of our power management
strategy is to adjust the duty cycle D(i) dynamically so as to maximize
the total utility U(D) over a period of time, while ensuring energy
neutral operation for the sensor node.
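In code, the assumed utility model is just a clipped linear function of the duty cycle; the parameter values below are placeholders, not values from the paper:

```python
def utility(D, d_min, d_max, k1, k2, beta):
    if D < d_min:
        return 0.0
    if D <= d_max:
        return k1 * D + beta
    return k2    # no additional utility beyond Dmax
```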
Before discussing the performance scaling methods for harvesting
aware duty cycle adaptation, let us first consider the optimal power
management strategy that is possible for a given energy generation
profile. For the calculation of the optimal strategy, we assume
complete knowledge of the energy availability profile at the node,
including the availability in the future. The calculation of the optimal is
a useful tool for evaluating the performance of our proposed algorithm.
This is particularly useful for our algorithm since no prior algorithms
are available to serve as a baseline for comparison.
Suppose the time axis is partitioned into discrete slots of duration
ΔT, and the duty cycle adaptation calculation is carried out over a
window of Nw such time slots. We define the following energy profile
variables, with the index i ranging over {1,…, Nw}: Ps(i) is the power
output from the harvested source in time slot i, averaged over the slot
duration, Pc is the power consumption of the load in active mode, and
D(i) is the duty cycle used in slot i, whose value is to be determined.
B(i) is the residual battery energy at the beginning of slot i. Following
this convention, the battery energy left after the last slot in the window
is represented by B(Nw+1). The values of these variables will depend
on the choice of D(i).
The energy used directly from the harvested source and the energy
stored and used from the battery must be accounted for differently.
Figure 2 shows two possible cases for Ps(i) in a time slot. Ps(i) may
either be less than or higher than Pc , as shown on the left and right
respectively. When Ps(i) is lower than Pc, some of the energy used by
the load comes from the battery, while when Ps(i) is higher than Pc, all
the energy used is supplied directly from the harvested source. The
crosshatched area shows the energy that is available for storage into
the battery while the hashed area shows the energy drawn from the
battery. We can write the energy used from the battery in any slot i as:
B(i) - B(i+1) = \Delta T\, D(i)\,[P_c - P_s(i)]^+ - \eta\,\Delta T\, P_s(i)\,\{1 - D(i)\} - \eta\,\Delta T\, D(i)\,[P_s(i) - P_c]^+   (6)
In equation (6), the first term on the right hand side measures the
energy drawn from the battery when Ps(i) < Pc, the next term measures
the energy stored into the battery when the node is in sleep mode, and
the last term measures the energy stored into the battery in active mode
if Ps(i) > Pc. For energy neutral operation, we require the battery at the
end of the window of Nw slots to be greater than or equal to the starting
battery. Clearly, battery level will go down when the harvested energy
is not available and the system is operated from stored energy.
However, the window Nw is judiciously chosen such that over that
duration, we expect the environmental energy availability to complete
a periodic cycle. For instance, in the case of solar energy harvesting,
Nw could be chosen to be a twenty-four hour duration, corresponding
to the diurnal cycle in the harvested energy. This is an approximation
since an ideal choice of the window size would be infinite, but a finite
size must be used for analytical tractability. Further, the battery level
cannot be negative at any time, and this is ensured by having a large
enough initial battery level B0 such that node operation is sustained
even in the case of total blackout during a window period. Stating the
above constraints quantitatively, we can express the calculation of the
optimal duty cycles as an optimization problem below:
\max \sum_{i=1}^{N_w} D(i)   (7)
subject to
B(i) - B(i+1) = \Delta T\, D(i)\,[P_c - P_s(i)]^+ - \eta\,\Delta T\, P_s(i)\,\{1 - D(i)\} - \eta\,\Delta T\, D(i)\,[P_s(i) - P_c]^+   (8)
B(1) = B_0   (9)
B(N_w + 1) \ge B_0   (10)
D(i) \ge D_{min} \quad \forall i \in \{1, ..., N_w\}   (11)
D(i) \le D_{max} \quad \forall i \in \{1, ..., N_w\}   (12)
The solution to the optimization problem yields the duty cycles
that must be used in every slot and the evolution of residual battery
over the course of Nw slots. Note that while the constraints above
contain the non-linear function [x]+
, the quantities occurring within
that function are all known constants. The variable quantities occur
only in linear terms and hence the above optimization problem can
be solved using standard linear programming techniques, available
in popular optimization toolboxes.
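As a concreteness check (my own encoding of (7)-(12), not code from the paper), the problem can be handed to an off-the-shelf LP solver once the [x]+ terms are evaluated as constants; the variable layout and the explicit non-negative battery bound are choices made for illustration:

```python
import numpy as np
from scipy.optimize import linprog

def optimal_duty_cycles(Ps, Pc, eta, dT, B0, Dmin, Dmax):
    Nw = len(Ps)
    n = Nw + (Nw + 1)                       # x = [D(1..Nw), B(1..Nw+1)]
    c = np.zeros(n)
    c[:Nw] = -1.0                           # maximize sum of D == minimize -sum
    A_eq, b_eq = [], []
    for i in range(Nw):
        # Battery recursion (8): B(i+1) - B(i) + a_i*D(i) = eta*dT*Ps(i)
        a_i = dT * max(Pc - Ps[i], 0.0) + eta * dT * Ps[i] - eta * dT * max(Ps[i] - Pc, 0.0)
        row = np.zeros(n)
        row[i] = a_i
        row[Nw + i] = -1.0                  # -B(i)
        row[Nw + i + 1] = 1.0               # +B(i+1)
        A_eq.append(row)
        b_eq.append(eta * dT * Ps[i])
    row = np.zeros(n); row[Nw] = 1.0        # (9) B(1) = B0
    A_eq.append(row); b_eq.append(B0)
    row = np.zeros(n); row[-1] = -1.0       # (10) B(Nw+1) >= B0, written as -B(Nw+1) <= -B0
    A_ub, b_ub = [row], [-B0]
    bounds = [(Dmin, Dmax)] * Nw + [(0, None)] * (Nw + 1)   # (11)-(12), plus B >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:Nw] if res.success else None
```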
5. HARVESTING-AWARE POWER
MANAGEMENT
We now present a practical algorithm for power management that
may be used for adapting the performance based on harvested energy
information. This algorithm attempts to achieve energy neutral
operation without using knowledge of the future energy availability
and maximizes the achievable performance within that constraint.
The harvesting-aware power management strategy consists of
three parts. The first part is an instantiation of the energy generation
model which tracks past energy input profiles and uses them to
predict future energy availability. The second part computes the
optimal duty cycles based on the predicted energy, and this step
uses our computationally tractable method to solve the optimization
problem. The third part consists of a method to dynamically adapt
the duty cycle in response to the observed energy generation profile
in real time. This step is required since the observed energy
generation may deviate significantly from the predicted energy
availability and energy neutral operation must be ensured with the
actual energy received rather than the predicted values.
5.1. Energy Prediction Model
We use a prediction model based on Exponentially Weighted
Moving-Average (EWMA). The method is designed to exploit the
diurnal cycle in solar energy but at the same time adapt to the seasonal
variations. A historical summary of the energy generation profile is
maintained for this purpose. While the storage data size is limited to a
vector length of Nw values in order to minimize the memory overheads
of the power management algorithm, the window size is effectively
infinite as each value in the history window depends on all the
observed data up to that instant. The window size is chosen to be 24
hours and each time slot is taken to be 30 minutes as the variation in
generated power by the solar panel using this setting is less than 10%
between each adjacent slots. This yields Nw = 48. Smaller slot
durations may be used at the expense of a higher Nw.
The historical summary maintained is derived as follows. On a
typical day, we expect the energy generation to be similar to the energy
generation at the same time on the previous days. The value of energy
generated in a particular slot is maintained as a weighted average of the
energy received in the same time-slot during all observed days. The
weights are exponential, resulting in decaying contribution from older
[Figure 2. Two possible cases for energy calculations: a slot i where P(i) < Pc and a slot k where P(i) > Pc, with active and sleep periods marked.]
data. More specifically, the historical average maintained for each slot
is given by:
\bar{x}_k = \alpha\, x_k + (1 - \alpha)\, \bar{x}_{k-1}
where α is the weighting factor, x_k is the observed value
of energy generated in the slot, and \bar{x}_{k-1} is the previously stored
historical average. In this model, the importance of each day relative to
the previous one remains constant because the same weighting factor
was used for all days.
The average value derived for a slot is treated as an estimate of
predicted energy value for the slot corresponding to the subsequent
day. This method helps the historical average values adapt to the
seasonal variations in energy received on different days. One of the
parameters to be chosen in the above prediction method is the
parameter α, which is a measure of rate of shift in energy pattern over
time. Since this parameter is affected by the characteristics of the
energy source and the sensor node location, the system should have a training
period during which this parameter is determined. To determine a
good value of α, we collected energy data over 72 days and compared
the average error of the prediction method for various values of α. The
error based on the different values of α is shown in Figure 3. This
curve suggests an optimum value of α = 0.15 for minimum prediction
error and this value will be used in the remainder of this paper.
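A minimal sketch of the slot-wise EWMA predictor described above (the class interface is mine; Nw = 48 and alpha = 0.15 follow the text):

```python
class EwmaSlotPredictor:
    def __init__(self, n_slots=48, alpha=0.15):
        self.alpha = alpha
        self.avg = [0.0] * n_slots            # historical average per slot

    def predict(self, slot):
        return self.avg[slot]                 # estimate for the same slot on the next day

    def observe(self, slot, energy):
        # Fold today's observation into the exponentially weighted average.
        self.avg[slot] = self.alpha * energy + (1 - self.alpha) * self.avg[slot]
```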
5.2. Low-complexity Solution
The energy values predicted for the next window of Nw slots are
used to calculate the desired duty cycles for the next window,
assuming the predicted values match the observed values in the future.
Since our objective is to develop a practical algorithm for embedded
computing systems, we present a simplified method to solve the linear
programming problem presented in Section 4. To this end, we define
the sets S and D as follows:
S = \{ i \mid P_s(i) - P_c \ge 0 \}
D = \{ i \mid P_c - P_s(i) > 0 \}
The two sets differ in whether the node's operation can be sustained
entirely from environmental energy. When the energy produced from the
environment is not sufficient, the battery is discharged to supply the
remaining energy. Next we sum both sides of (6) over the entire window
of Nw slots and rewrite it with the new notation:
B(1) - B(N_w+1) = \sum_{i \in D} \Delta T\, D(i)\,[P_c - P_s(i)] - \sum_{i=1}^{N_w} \eta\,\Delta T\, P_s(i) + \sum_{i=1}^{N_w} \eta\,\Delta T\, P_s(i)\, D(i) - \sum_{i \in S} \eta\,\Delta T\, D(i)\,[P_s(i) - P_c]
The term on the left hand side is actually the battery energy used over
the entire window of Nw slots, which can be set to 0 for energy neutral
operation. After some algebraic manipulation, this yields:
\sum_{i=1}^{N_w} P_s(i) = \sum_{i \in D} D(i)\left( P_s(i)\left(1 - \frac{1}{\eta}\right) + \frac{P_c}{\eta} \right) + \sum_{i \in S} P_c\, D(i)   (13)
The term on the left hand side is the total energy received in Nw
slots. The first term on the right hand side can be interpreted as the
total energy consumed during the D slots and the second term is the
total energy consumed during the S slots. We can now replace three
constraints (8), (9), and (10) in the original problem with (13), restating
the optimization problem as follows:
\max \sum_{i=1}^{N_w} D(i)
subject to
\sum_{i=1}^{N_w} P_s(i) = \sum_{i \in D} D(i)\left( P_s(i)\left(1 - \frac{1}{\eta}\right) + \frac{P_c}{\eta} \right) + \sum_{i \in S} P_c\, D(i)
D(i) \ge D_{min} \quad \forall i \in \{1, ..., N_w\}
D(i) \le D_{max} \quad \forall i \in \{1, ..., N_w\}
This form facilitates a low-complexity solution that does not require
a general linear programming solver. Since our objective is to
maximize the total system utility, it is preferable to set the duty cycle to
Dmin for time slots where the utility per unit energy is the least. On the
other hand, we would also like the time slots with the highest Ps to
operate at Dmax, because energy is used more efficiently when drawn
directly from the energy source. Combining these two characteristics, we
define the utility coefficient for each slot i as follows:
W(i) = 1 / P_c,   for i \in S
W(i) = 1 / \left( \frac{P_c}{\eta} + P_s(i)\left(1 - \frac{1}{\eta}\right) \right),   for i \in D
where W(i) is a representation of how efficient the energy usage in a
particular time slot i is. A larger W(i) indicates more system utility per
unit energy in slot i and vice versa. The algorithm starts by assuming
D(i) = Dmin for all i ∈ {1, ..., Nw} because of the minimum duty cycle
requirement, and computes the remaining system energy R by:
R = \sum_{i=1}^{N_w} P_s(i) - \sum_{i \in D} D(i)\left( P_s(i)\left(1 - \frac{1}{\eta}\right) + \frac{P_c}{\eta} \right) - \sum_{i \in S} P_c\, D(i)   (14)
A negative R indicates that the optimization problem is infeasible,
meaning the system cannot achieve energy neutrality even at the
minimum duty cycle. In this case, the system designer is responsible
for increasing the environmental energy availability (e.g., by using larger
solar panels). If R is positive, it means the system has excess energy
that is not being used, and this may be allocated to increase the duty
cycle beyond Dmin for some slots. Since our objective is to maximize
the total system utility, the most efficient way to allocate the excess
energy is to assign duty cycle Dmax to the slots with the highest W(i).
So, the coefficients W(i) are arranged in decreasing order and duty
cycle Dmax is assigned to the slots beginning with the largest
coefficients until the excess energy available, R (computed by (14) in
every iteration), is insufficient to assign Dmax to another slot. The
remaining energy, RLast, is used to increase the duty cycle to some
value between Dmin and Dmax in the slot with the next lower coefficient.
Denoting this slot with index j, the duty cycle is given by:
D(j) = D_{min} + R_{Last} / P_c,   if j \in S
D(j) = D_{min} + R_{Last} / \left( \frac{P_c}{\eta} + P_s(j)\left(1 - \frac{1}{\eta}\right) \right),   if j \in D
The above solution to the optimization problem requires only simple
arithmetic calculations and one sorting step which can be easily
implemented on an embedded platform, as opposed to implementing a
general linear program solver.
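The allocation above amounts to a short greedy routine; the sketch below is my own Python rendering under the cost model implied by (14) and W(i) (per-slot energy cost Pc for S-slots, and Pc/η + Ps(i)(1 - 1/η) for D-slots), not the authors' code:

```python
def assign_duty_cycles(Ps, Pc, eta, Dmin, Dmax):
    Nw = len(Ps)
    # Energy cost per unit duty cycle in each slot, and its utility coefficient W.
    cost = [Pc if Ps[i] >= Pc else Ps[i] + (Pc - Ps[i]) / eta for i in range(Nw)]
    W = [1.0 / cost[i] for i in range(Nw)]
    D = [Dmin] * Nw
    R = sum(Ps) - sum(cost[i] * D[i] for i in range(Nw))      # equation (14)
    if R < 0:
        raise ValueError("infeasible: Dmin cannot be sustained")
    for i in sorted(range(Nw), key=lambda j: W[j], reverse=True):
        need = cost[i] * (Dmax - D[i])                        # energy to reach Dmax here
        if need <= R:
            D[i], R = Dmax, R - need
        else:
            D[i] += R / cost[i]                               # partial increase, then stop
            break
    return D
```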
5.3. Slot-by-slot continual duty cycle adaptation.
The observed energy values may vary greatly from the predicted
ones, such as due to the effect of clouds or other sudden changes. It is
thus important to adapt the duty cycles calculated using the predicted
values, to the actual energy measurements in real time to ensure energy
neutrality. Denote the initial duty cycle assignment for each time slot i,
computed using the predicted energy values, as D(i), i ∈ {1, ..., Nw}. First
we compute the difference between the predicted power level Ps(i) and
the actual power level observed, Ps'(i), in every slot i. Then, the excess
energy in slot i, denoted by X, can be obtained as follows:
X = P_s'(i) - P_s(i),   if P_s'(i) > P_c
X = [P_s'(i) - P_s(i)] - D(i)\,[P_s'(i) - P_s(i)]\left(1 - \frac{1}{\eta}\right),   if P_s'(i) \le P_c
[Figure 3. Choice of prediction parameter: average prediction error (mA) versus the weighting factor alpha, for alpha between 0 and 0.5; the error is minimized near alpha = 0.15.]
The upper term accounts for the energy difference when the actual
received energy is more than the power drawn by the load. On the
other hand, if the energy received is less than Pc, we need to
account for the extra energy drawn from the battery by the load, which is
a function of the duty cycle used in time slot i and the battery efficiency
factor η. When more energy is received than predicted, X is positive and
that excess energy is available for use in subsequent slots, while if X
is negative, that energy must be compensated for in subsequent slots.
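A small helper capturing this bookkeeping (the function and variable names are mine; Ps_pred and Ps_obs are the predicted and observed power levels for the slot):

```python
def excess_energy(Ps_pred, Ps_obs, D, Pc, eta):
    diff = Ps_obs - Ps_pred
    if Ps_obs > Pc:
        return diff          # surplus or deficit is handled directly at the source
    # Below Pc, the active fraction D is fed through the battery at cost 1/eta.
    return diff - D * diff * (1.0 - 1.0 / eta)
```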
CASE I: X < 0. In this case, we want to reduce the duty cycles used in
the future slots in order to make up for this shortfall of energy. Since
our objective function is to maximize the total system utility, we have
to reduce the duty cycles of the time slots with the smallest normalized
utility coefficients W(i). This is accomplished by first sorting the
coefficients W(j), where j > i, in increasing order, and then iteratively
reducing D(j) to Dmin until the total reduction in energy consumption
makes up for X.
CASE II: X > 0. Here, we want to increase the duty cycles used in the
future to utilize the excess energy received in the recent time slot. In
contrast to Case I, the duty cycles of the future time slots with the
highest utility coefficients W(i) should be increased first in order to
maximize the total system utility.
Suppose the duty cycle is changed by d in slot j. Define a quantity
R(j,d) as follows:
R(j, d) = d \cdot P_c,   if P_s(j) > P_c
R(j, d) = d \left( \frac{P_c}{\eta} + P_s(j)\left(1 - \frac{1}{\eta}\right) \right),   if P_s(j) \le P_c
The precise procedure to adapt the duty cycle to account for the
above factors is presented in Algorithm 1. This calculation is
performed at the end of every slot to set the duty cycle for the next
slot. We claim that our duty cycling algorithm is energy neutral
because a surplus of energy in the previous time slot always translates
into additional energy opportunity for future time slots, and vice versa.
The claim may be violated in cases of severe energy shortage, especially
towards the end of the window. For example, a large deficit in energy
supply cannot be restored if there is no future energy input until the end
of the window. In such cases, the offset is carried over to the next
window so that long-term energy neutrality is still maintained.
6. EVALUATION
Our adaptive duty cycling algorithm was evaluated using an actual
solar energy profile measured using a sensor node called Heliomote,
capable of harvesting solar energy [14]. This platform not only tracks
the generated energy but also the energy flow into and out of the
battery to provide an accurate estimate of the stored energy.
The energy harvesting platform was deployed in a residential area
in Los Angeles from the beginning of June through the middle of
August for a total of 72 days. The sensor node used is a Mica2 mote
running at a fixed 40% duty cycle with an initially full battery. Battery
voltage and net current from the solar panels are sampled at a period of
10 seconds. The energy generation profile for that duration, measured
by tracking the output current from the solar cell is shown in Figure 4,
both on continuous and diurnal scales. We can observe that although
the energy profile varies from day to day, it still exhibits a general
pattern over several days.
[Plots for Figure 4: left, solar panel output current (mA) versus day over the 72-day deployment; right, output current (mA) versus hour of day.]
6.1. Prediction Model
We first evaluate the performance of the prediction model, which
is judged by the absolute error between the predicted and the actual
energy profile. Figure 5 shows the average error of each time slot in mA
over the entire 72 days. Generally, the error is larger during the daytime
because that is when weather can cause deviations in the received energy,
while the prediction made for the night time is mostly correct.
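The error metric amounts to a per-slot mean absolute error over the 72 days; a one-line sketch (the days x slots array shapes are an assumption):

```python
import numpy as np

def per_slot_mae(predicted, observed):
    # Average |predicted - observed| for each slot across all days.
    return np.mean(np.abs(np.asarray(predicted) - np.asarray(observed)), axis=0)
```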
6.2. Adaptive Duty cycling algorithm
Prior methods to optimize performance while achieving energy
neutral operation using harvested energy are scarce. Instead, we
compare the performance of our algorithm against two extremes: the
theoretical optimal calculated assuming complete knowledge about
future energy availability and a simple approach which attempts to
achieve energy neutrality using a fixed duty cycle without accounting
for battery inefficiency.
The optimal duty cycles are calculated for each slot using the
future knowledge of actual received energy for that slot. For the simple
approach, the duty cycle is kept constant within each day and is
Figure 4 Solar Energy Profile (Left: Continuous, Right: Diurnal)
Input:  D: initial duty cycles, X: excess energy due to error in the prediction,
        Ps: predicted energy profile, i: index of current time slot
Output: D: updated duty cycles in one or more subsequent slots
AdaptDutyCycle() -- executed at the end of each time slot:
  if X > 0
    Wsorted = W{1, ..., Nw} sorted in descending order; Q := indices of Wsorted
    for k = 1 to |Q|
      if Q(k) <= i or D(Q(k)) >= Dmax      // slot already passed, or already at Dmax
        continue
      if R(Q(k), Dmax - D(Q(k))) < X       // enough excess to raise this slot to Dmax
        X = X - R(Q(k), Dmax - D(Q(k)))
        D(Q(k)) = Dmax
      else                                  // X insufficient to reach Dmax
        if Ps(Q(k)) > Pc
          D(Q(k)) = D(Q(k)) + X / Pc
        else
          D(Q(k)) = D(Q(k)) + X / (Pc/η + Ps(Q(k))(1 - 1/η))
  if X < 0
    Wsorted = W{1, ..., Nw} sorted in ascending order; Q := indices of Wsorted
    for k = 1 to |Q|
      if Q(k) <= i or D(Q(k)) <= Dmin
        continue
      if R(Q(k), Dmin - D(Q(k))) > X       // reducing this slot to Dmin does not overshoot
        X = X - R(Q(k), Dmin - D(Q(k)))
        D(Q(k)) = Dmin
      else
        if Ps(Q(k)) > Pc
          D(Q(k)) = D(Q(k)) + X / Pc
        else
          D(Q(k)) = D(Q(k)) + X / (Pc/η + Ps(Q(k))(1 - 1/η))
ALGORITHM 1: Pseudocode for the duty-cycle adaptation algorithm
Figure 5. Average Predictor Error in mA
[Plot: average absolute prediction error (mA) versus time of day (hours).]
computed by taking the ratio of the predicted energy availability to
the maximum usage, which guarantees that the sensor node will never
deplete its battery running at this duty cycle:
D = \eta \sum_{i \in \{1, ..., N_w\}} P_s(i) \;/\; (N_w \cdot P_c)
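In code, this fixed baseline duty cycle is just the η-scaled ratio of predicted daily energy to the maximum possible consumption (a trivial sketch with assumed argument names):

```python
def simple_duty_cycle(Ps_pred, Pc, eta):
    # Predicted energy over the day divided by worst-case consumption Nw*Pc.
    return eta * sum(Ps_pred) / (len(Ps_pred) * Pc)
```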
We then compare the performance of our algorithm to the two
extremes with varying battery efficiency. Figure 6 shows the results,
using Dmax = 0.8 and Dmin = 0.3. The battery efficiency was varied
from 0.5 to 1 on the x-axis and solar energy utilizations achieved by
the three algorithms are shown on the y-axis. It shows the fraction of
net received energy that is used to perform useful work rather than lost
due to storage inefficiency.
As can be seen from the figure, the battery efficiency factor has a great
impact on the performance of the three approaches. All three
approaches converge to 100% utilization with a perfect battery (η = 1),
that is, when no energy is lost by storing it in the battery.
When battery inefficiency is taken into account, both the adaptive and
the optimal approach achieve a much better solar energy utilization rate
than the simple one. Additionally, the results also show that our adaptive
duty cycling algorithm performs extremely close to the optimal.
[Plot for Figure 6: solar energy utilization versus battery round-trip efficiency η (0.4 to 1.0), for the Optimal, Adaptive, and Simple approaches.]
We also compare the performance of our algorithm with different
values of Dmin and Dmax for η=0.7, which is typical of NiMH batteries.
These results are shown in Table 1 as the percentage of energy saved
by the optimal and adaptive approaches, and this is the energy which
would normally be wasted in the simple approach. The figures and
table indicate that our real-time algorithm is able to achieve
performance very close to the feasible optimal. In addition, these
results show that environmental energy harvesting with appropriate
power management can achieve much better utilization of the
environmental energy.
Dmax       0.8     0.8     0.8     0.5     0.9     1.0
Dmin       0.05    0.1     0.3     0.2     0.2     0.2
Adaptive   51.0%   48.2%   42.3%   29.4%   54.7%   58.7%
Optimal    52.3%   49.6%   43.7%   36.7%   56.6%   60.8%
7. CONCLUSIONS
We discussed various issues in power management for systems
powered using environmentally harvested energy. Specifically, we
designed a method for optimizing performance subject to the
constraint of energy neutral operation. We also derived a theoretically
optimal bound on the performance and showed that our proposed
algorithm operated very close to the optimal. The proposals were
evaluated using real data collected using an energy harvesting sensor
node deployed in an outdoor environment.
Our method has significant advantages over currently used
methods which are based on a conservative estimate of duty cycle and
can only provide sub-optimal performance. However, this work is only
the first step towards optimal solutions for energy neutral operation. It
is designed for a specific power scaling method based on adapting the
duty cycle. Several other power scaling methods, such as DVS,
submodule power switching and the use of multiple low power modes are
also available. It is thus of interest to extend our methods to exploit
these advanced capabilities.
8. ACKNOWLEDGEMENTS
This research was funded in part through support provided by
DARPA under the PAC/C program, the National Science Foundation
(NSF) under award #0306408, and the UCLA Center for Embedded
Networked Sensing (CENS). Any opinions, findings, conclusions or
recommendations expressed in this paper are those of the authors and
do not necessarily reflect the views of DARPA, NSF, or CENS.
REFERENCES
[1] R. Ramanathan and R. Hain, Topology Control of Multihop Wireless
Networks Using Transmit Power Adjustment, in Proc. IEEE Infocom, vol. 2,
pp. 404-413, March 26-30, 2000.
[2] T.A. Pering, T.D. Burd, and R. W. Brodersen, The simulation and
evaluation of dynamic voltage scaling algorithms, in Proc. ACM
ISLPED, pp. 76-81, 1998
[3] L. Benini and G. De Micheli, Dynamic Power Management: Design
Techniques and CAD Tools. Kluwer Academic Publishers, Norwell, MA,
1997.
[4] John Kymisis, Clyde Kendall, Joseph Paradiso, and Neil Gershenfeld.
Parasitic power harvesting in shoes. In ISWC, pages 132-139. IEEE
Computer Society press, October 1998.
[5] Nathan S. Shenck and Joseph A. Paradiso. Energy scavenging with
shoe-mounted piezoelectrics. IEEE Micro, 21(3):30-42, May-June 2001.
[6] T Starner. Human-powered wearable computing. IBM Systems Journal,
35(3-4), 1996.
[7] Mohammed Rahimi, Hardik Shah, Gaurav S. Sukhatme, John
Heidemann, and D. Estrin. Studying the feasibility of energy harvesting in
a mobile sensor network. In ICRA, 2003.
[8] Chris Melhuish. The EcoBot project. www.ias.uwe.ac.uk/energy
autonomy/EcoBot web page.html.
[9] Jan M. Rabaey, M. Josie Ammer, Julio L. da Silva Jr., Danny Patel, and
Shad Roundy. Picoradio supports ad-hoc ultra-low power wireless
networking. IEEE Computer, pages 42-48, July 2000.
[10] Joseph A. Paradiso and Mark Feldmeier. A compact, wireless,
self-powered pushbutton controller. In ACM Ubicomp, pages 299-304,
Atlanta, GA, USA, September 2001. Springer-Verlag Berlin Heidelberg.
[11] SE Wright, DS Scott, JB Haddow, and MA Rosen. The upper limit to solar
energy conversion. volume 1, pages 384 - 392, July 2000.
[12] Darpa energy harvesting projects.
http://www.darpa.mil/dso/trans/energy/projects.html.
[13] Werner Weber. Ambient intelligence: industrial research on a visionary
concept. In Proceedings of the 2003 international symposium on Low
power electronics and design, pages 247-251. ACM Press, 2003.
[14] V Raghunathan, A Kansal, J Hsu, J Friedman, and MB Srivastava,
"Design Considerations for Solar Energy Harvesting Wireless Embedded
Systems," (IPSN/SPOTS), April 2005.
[15] Xiaofan Jiang, Joseph Polastre, David Culler, Perpetual Environmentally
Powered Sensor Networks, (IPSN/SPOTS), April 25-27, 2005.
[16] Chulsung Park, Pai H. Chou, and Masanobu Shinozuka, "DuraNode:
Wireless Networked Sensor for Structural Health Monitoring," to appear
in Proceedings of the 4th IEEE International Conference on Sensors,
Irvine, CA, Oct. 31 - Nov. 1, 2005.
[17] Aman Kansal and Mani B. Srivastava. An environmental energy
harvesting framework for sensor networks. In International symposium on
Low power electronics and design, pages 481-486. ACM Press, 2003.
[18] Thiemo Voigt, Hartmut Ritter, and Jochen Schiller. Utilizing solar power
in wireless sensor networks. In LCN, 2003.
[19] A. Kansal, J. Hsu, S. Zahedi, and M. B. Srivastava. Power management
in energy harvesting sensor networks. Technical Report
TR-UCLA-NESL200603-02, Networked and Embedded Systems Laboratory, UCLA,
March 2006.
Figure 6. Duty Cycles achieved with respect to η
TABLE 1. Energy Saved by adaptive and optimal approach.
185 | sampling frequency;energy harvest;energy harvesting system;duty cycling rate;low power design;harvesting-aware power management;solar panel;duty cycle;performance scaling;power scaling;energy neutral operation;sensor network;energy tracking;storage buffer;environmental energy;power management;network latency |
train_C-83 | Concept and Architecture of a Pervasive Document Editing and Managing System | Collaborative document processing has been addressed by many approaches so far, most of which focus on document versioning and collaborative editing. We address this issue from a different angle and describe the concept and architecture of a pervasive document editing and managing system. It exploits database techniques and real-time updating for sophisticated collaboration scenarios on multiple devices. Each user is always served with upto-date documents and can organize his work based on document meta data. For this, we present our conceptual architecture for such a system and discuss it with an example. | 1. INTRODUCTION
Text documents are a valuable resource for virtually any enterprise
and organization. Documents like papers, reports and general
business documentations contain a large part of today"s (business)
knowledge. Documents are mostly stored in a hierarchical folder
structure on file servers and it is difficult to organize them in regard
to classification, versioning etc., although it is of utmost importance
that users can find, retrieve and edit up-to-date versions of
documents whenever they want and in a user-friendly way.
1.1 Problem Description
With most of the commonly used word-processing applications,
documents can be manipulated by only one user at a time: tools for
pervasive collaborative document editing and management are
rarely deployed in today's world. Despite the fact that people strive
for location and time independence, the importance of pervasive
collaborative work, i.e. collaborative document editing and
management, is largely neglected. Documents can therefore be
seen as a vulnerable resource in today's world, which demands an
appropriate solution: the need to store, retrieve and edit these
documents collaboratively anytime, everywhere, with almost
every suitable device, and with guaranteed mechanisms for security,
consistency, availability and access control, is obvious.
In addition, word processing systems ignore the fact that the history
of a text document contains crucial information for its management.
Such meta data includes creation date, creator, authors, version,
location-based information such as time and place when/where a
user reads/edits a document, and so on. Such meta data can be
gathered during the document's creation process and can be used
in versatile ways. Especially in the field of pervasive document
management, meta data is of crucial importance since it offers
totally new ways of organizing and classifying documents: on the
one hand, the user's actual situation influences the user's objectives.
Meta data could be used to give the user the best possible view of
the documents, depending on his actual situation. On the other
hand, as soon as the user starts to work, i.e. reads or edits a
document, new meta data can be gathered in order to adapt the
system to the user's situation and to offer future users a better
view of the documents.
As far as we know, no system exists that satisfies the
aforementioned requirements. A very good overview of
real-time communication and collaboration systems is given in [7].
We therefore strive for a pervasive document editing and
management system, which enables pervasive (and collaborative)
document editing and management: users should be able to read and
edit documents whenever, wherever, with whomever and with
whatever device.
In this paper, we present collaborative database-based real-time
word processing, which provides pervasive document editing and
management functionality. It enables the user to work on
documents collaboratively and offers sophisticated document
management facilities: the user is always served with up-to-date
documents and can organize and manage documents on the basis of
meta data. Additionally, document data is treated as a 'first class
citizen' of the database, as demanded in [1].
1.2 Underlying Concepts
The concept of our pervasive document editing and management
system requires an appropriate architectural foundation. Our
concept and implementation are based on the TeNDaX [3]
collaborative database-based document editing and management
system, which enables pervasive document editing and managing.
TeNDaX is a Text Native Database eXtension. It enables the
storage of text in databases in a native form so that editing text is
finally represented as real-time transactions. Under the term 'text
editing' we understand the following: writing and deleting text
(characters), copying & pasting text, defining text layout &
structure, inserting notes, setting access rights, defining business
processes, inserting tables, pictures, and so on, i.e. all the actions
regularly carried out by word-processing users. With 'real-time
transaction' we mean that editing text (e.g. writing a
character/word) invokes one or several database transactions so that
everything, which is typed appears within the editor as soon as these
objects are stored persistently. Instead of creating files and storing
them in a file system, the content and all of the meta data belonging
to the documents is stored in a special way in the database, which
enables very fast real-time transactions for all editing tasks [2].
The database schema and the above-mentioned transactions are
created in such a way that everything can be done within a
multi-user environment, as is usually done by database technology. As a
consequence, many of the achievements (with respect to data
organization and querying, recovery, integrity and security
enforcement, multi-user operation, distribution management,
uniform tool access, etc.) are now, by means of this approach, also
available for word processing.
2. APPROACH
Our pervasive editing and management system is based on the
above-mentioned database-based TeNDaX approach, where
document data is stored natively in the database and supports
pervasive collaborative text editing and document management.
We define the pervasive document editing and management system,
as a system, where documents can easily be accessed and
manipulated everywhere (within the network), anytime
(independently of the number of users working on the same
document) and with any device (desktop, notebook, PDA, mobile
phone etc.).
Figure 1. TeNDaX Application Architecture
In contrast to documents stored locally on the hard drive or on a file
server, our system automatically serves the user with the up-to-date
version of a document and changes done on the document are stored
persistently in the database and immediately propagated to all
clients who are working on the same document. Additionally, meta
data gathered during the whole document creation process enables
sophisticated document management. With the TeXt SQL API as
abstract interface, this approach can be used by any tool and for any
device.
The system is built on the following components (see Figure 1): An
editor in Java implements the presentation layer (A-G in Figure 1).
The aim of this layer is the integration into a well-known
word-processing application such as OpenOffice.
The business logic layer represents the interface between the
database and the word-processing application. It consists of the
following three components: The application server (marked as AS
1-4 in Figure 1) enables text editing within the database
environment and takes care of awareness, security, document
management etc., all within a collaborative, real-time and multi-user
environment. The real-time server component (marked as RTSC
1-4 in Figure 1) is responsible for the propagation of information, i.e.
updates, between all of the connected editors.
The storage engine (data layer) primarily stores the content of
documents as well as all related meta data within the database.
Databases can be distributed in a peer-to-peer network (DB 1-4 in
Figure 1).
In the following, we will briefly present the database schema, the
editor and the real-time server component as well as the concept of
dynamic folders, which enables sophisticated document
management on the basis of meta data.
2.1 Application Architecture
A database-based real-time collaborative editor allows the same
document to be opened and edited simultaneously on the same
computer or over a network of several computers and mobile
devices. All concurrency issues, as well as message propagation, are
solved within this approach, while multiple instances of the same
document are being opened [3]. Each insert or delete action is a
database transaction and as such, is immediately stored persistently
in the database and propagated to all clients working on the same
document.
2.1.1 Database Schema
As mentioned earlier, text is stored in a native way. Each
character of a text document is stored as a single object in the
database [3]. When storing text in such a native form, the
performance of the employed database system is of crucial
importance. The concept and performance issues of such a text
database are described in [3], collaborative layouting in [2],
dynamic collaborative business processes within documents in [5],
the text editing creation time meta data model in [6] and the relation
to XML databases in [7].
Figure 2 depicts the core database schema. By connecting a client to
the database, a Session instance is created. One important attribute
of the Session is the DocumentSession. This attribute refers to
DocumentSession instances, which administrate all opened
documents. For each opened document, a DocumentSession
instance is created. The DocumentSession is important for the
real-time server component, which, in case of a
Figure 2. TeNDaX Database Schema (Object Role Modeling Diagram)
change on a document done by a client, is responsible for sending
update information to all the clients working on the same
document. The DocumentId in the class DocumentSession points
to a FileNode instance, and corresponds to the ID of the opened
document. Instances of the class FileNode either represent a
folder node or a document node. The folder node corresponds to a
folder of a file system and the document node to that of a file.
Instances of the class Char represent the characters of a
document. The value of a character is stored in the attribute
CharacterValue. The sequence is defined by the attributes After
and Before of the class Char. Particular instances of Char mark
the beginning and the end of a document. The methods
InsertChars and RemoveChars are used to add and delete
characters.
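To make the Before/After chaining of Char objects more concrete, the following Python sketch models a document as a doubly linked chain of character objects with insert and remove operations. It is an illustration only; the class and method names are ours, and the real TeNDaX system stores these objects as rows in database tables rather than in memory.

```python
class Char:
    def __init__(self, value, is_active=True):
        self.value = value          # corresponds to CharacterValue
        self.is_active = is_active  # corresponds to the "is active" flag
        self.before = None          # previous character in the sequence
        self.after = None           # next character in the sequence

class InternalFile:
    """A document as a chain of Char objects between begin and end markers."""
    def __init__(self):
        self.begin = Char("<BOF>")  # marks the beginning of the document
        self.end = Char("<EOF>")    # marks the end of the document
        self.begin.after = self.end
        self.end.before = self.begin

    def insert_chars(self, prev_char, values):
        """Insert a list of character values after prev_char (cf. InsertChars)."""
        next_char = prev_char.after
        for v in values:
            c = Char(v)
            c.before, c.after = prev_char, next_char
            prev_char.after = c
            next_char.before = c
            prev_char = c

    def remove_chars(self, first, last):
        """Unlink the characters from first to last (cf. RemoveChars)."""
        first.before.after = last.after
        last.after.before = first.before

    def text(self):
        out, c = [], self.begin.after
        while c is not self.end:
            out.append(c.value)
            c = c.after
        return "".join(out)

doc = InternalFile()
doc.insert_chars(doc.begin, list("Hello"))
print(doc.text())  # -> Hello
```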
2.1.2 Editor
As seen above, each document is natively stored in the database.
Our editor does not have a replica of one part of the native text
database in the sense of database replicas. Instead, it has a so-called
image as its replica. Even if several authors edit the same text at the
same time, they work on one unique document at all times. The
system guarantees this unique view.
Editing a document involves a number of steps: first, getting the
required information out of the image, secondly, invoking the
corresponding methods within the database, thirdly, changing the
image, and fourthly, informing all other clients about the changes.
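These four steps can be sketched as follows; the database, image and propagation interfaces used here are stubs invented for illustration, not the actual TeNDaX classes.

```python
# Minimal sketch of the four editing steps with assumed, simplified interfaces.

class StubDatabase:
    def insert_chars(self, prev_oid, next_oid, values):
        print(f"DB transaction: insert {values!r} between {prev_oid} and {next_oid}")

class LocalImage:
    def __init__(self, chars):
        self.chars = list(chars)      # the editor's image of the document
    def neighbours(self, pos):
        return pos - 1, pos           # OIDs simplified to positions here
    def insert(self, pos, value):
        self.chars.insert(pos, value)

def edit(image, db, pos, value):
    prev_oid, next_oid = image.neighbours(pos)    # 1. read the needed info from the image
    db.insert_chars(prev_oid, next_oid, [value])  # 2. invoke the corresponding database method
    image.insert(pos, value)                      # 3. change the local image
    # 4. all other clients are informed via the real-time server component,
    #    which the database notifies once the transaction has been committed

image, db = LocalImage("Helo"), StubDatabase()
edit(image, db, 3, "l")
print("".join(image.chars))  # -> Hello
```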
2.1.3 Real-Time Server Component
The real-time server component is responsible for the real-time
propagation of any changes on a document done within an editor to
all the editors who are working or have opened the same document.
When an editor connects to the application server, which in turn
connects to the database, the database also establishes a connection
to the real-time server component (if there isn't already a
connection). The database system informs the real-time server
component about each new editor session, which the real-time
server component administrates in its SessionManager. Then,
the editor connects to the real-time server component as well. The
real-time server component adds the editor socket to the client's
data structure in the SessionManager and is then ready to
communicate.
Each time a change on a document from an editor is persistently
stored in the database, the database sends a message to the real-time
server component, which in turn sends the changes to all the
editors working on the same document. Therefore, a special
communication protocol is used: the update protocol.
Update Protocol
The real-time server component uses the update protocol to
communicate with the database and the editors. Messages are sent
from the database to the real-time server component, which sends
the messages to the affected editors. The update protocol consists of
different message types. Messages consist of two packages:
package one contains information for the real-time server
component whereas package two is passed to the editors and
contains the update information, as depicted in Figure 3.
|| RTSC || Parameter | … | Parameter ||   || Editor Data ||
(package one: protocol between database system and real-time server component; package two: protocol between real-time server component and editors)
Figure 3. Update Protocol
In the following, two message types are presented:
||u|sessionId,...,sessionId||||editor data||
u: update message, sessionId: Id of the client session
With this message type the real-time server component sends the
editor data package to all editors specified in the sessionId list.
||ud|fileId||||editor data||
ud: update document message, fileId: Id of the file
With this message type, the real-time server component sends the
editor data to all editors who have opened the document with the
indicated file-Id.
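As an illustration, the following sketch builds and splits such messages. Only the two quoted message forms come from the description above; everything else, in particular the content of the editor data, is an assumption.

```python
# Sketch of update-protocol messages of the forms
# ||u|sessionId,...,sessionId|| ||editor data||  and  ||ud|fileId|| ||editor data||.

def build_update(session_ids, editor_data):
    """Update message addressed to an explicit list of editor sessions."""
    return "||u|" + ",".join(session_ids) + "||" + "||" + editor_data + "||"

def build_update_document(file_id, editor_data):
    """Update message addressed to all editors that have the file opened."""
    return "||ud|" + file_id + "||" + "||" + editor_data + "||"

def parse(message):
    """Split a message into its RTSC package and its editor-data package."""
    rtsc_part, editor_part = message.split("||||", 1)
    fields = rtsc_part.strip("|").split("|")
    msg_type, params = fields[0], fields[1:]
    return msg_type, params, editor_part.strip("|")

msg = build_update(["s1", "s7"], "insert:42:a")
print(parse(msg))  # ('u', ['s1,s7'], 'insert:42:a')
```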
Class Model
Figure 4 depicts the class model as well as the environment of the
real-time server component. The environment consists mainly of the
editor and the database, but any other client application that could
make use of the real-time server component can connect.
ConnectionListener: This class is responsible for the connection to
the clients, i.e. to the database and the editors. Depending on the
connection type (database or editor) the connection is passed to an
EditorWorker instance or DatabaseMessageWorker instance
respectively.
EditorWorker: This class manages the connections of type 'editor'.
The connection (a socket and its input and output stream) is stored
in the SessionManager.
SessionManager: This class is similar to an 'in-memory database':
all editor session information, e.g. the editor sockets, which editor
has opened which document etc. are stored within this data
structure.
DatabaseMessageWorker: This class is responsible for the
connections of type 'database'. At run-time, only one connection
exists for each database. Update messages from the database are
sent to the DatabaseMessageWorker and, with the help of
additional information from the SessionManager, sent to the
corresponding clients.
ServiceClass: This class offers a set of methods for reading, writing
and logging messages.
Figure 4. Real-Time Server Component Class Diagram
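The dispatch path from the database-facing worker to the affected editors might look roughly as follows. The interfaces are simplified stand-ins for the classes described above, not their actual implementation.

```python
# Rough sketch: the SessionManager keeps per-session and per-document state,
# and the database-facing worker forwards editor data to the affected editors.

class SessionManager:
    def __init__(self):
        self.sockets = {}    # session id -> editor connection (here: a callable)
        self.open_docs = {}  # file id -> set of session ids

    def register(self, session_id, socket):
        self.sockets[session_id] = socket

    def open_document(self, session_id, file_id):
        self.open_docs.setdefault(file_id, set()).add(session_id)

class DatabaseMessageWorker:
    def __init__(self, session_manager):
        self.sm = session_manager

    def handle(self, msg_type, params, editor_data):
        if msg_type == "u":                        # explicit session list
            targets = params[0].split(",")
        elif msg_type == "ud":                     # everyone with the file open
            targets = self.sm.open_docs.get(params[0], set())
        else:
            return
        for sid in targets:
            send = self.sm.sockets.get(sid)
            if send:
                send(editor_data)

sm = SessionManager()
sm.register("s1", lambda data: print("to s1:", data))
sm.open_document("s1", "f9")
DatabaseMessageWorker(sm).handle("ud", ["f9"], "insert:42:a")
```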
2.1.4 Dynamic Folders
As mentioned above, every editing action invoked by a user is
immediately transferred to the database. At the same time, more
information about the current transaction is gathered.
As all information is stored in the database, one character can hold a
multitude of information, which can later be used for the retrieval of
documents. Meta data is collected at character level, from document
structure (layout, workflow, template, semantics, security,
workflow and notes), on the level of a document section and on the
level of the whole document [6].
All of the above-mentioned meta data is crucial information for
creating content and knowledge out of word processing documents.
This meta data can be used to create an alternative storage system
for documents. In any case, it is not an easy task to change users'
familiarity with the well-known hierarchical file system. This is also
the main reason why we do not completely disregard the classical
file system, but rather enhance it. Folders which correspond to the
classical hierarchical file system will be called static folders.
Folders where the documents are organized according to meta data,
will be called dynamic folders. As all information is stored in the
database, the file system, too, is based on the database.
The dynamic folders build up sub-trees, which are guided by the
meta data selected by the user. Thus, the first step in using a
dynamic folder is the definition of how it should be built. For each
level of a dynamic folder, exactly one meta data item is used. The
following example illustrates the steps which have to be taken in
order to define a dynamic folder, and the meta data which should be
used.
As a first step, the meta data which will be used for the dynamic
folder must be chosen (see Table 1): The sequence of the meta data
influences the structure of the folder. Furthermore, for each meta
data used, restrictions and granularity must be defined by the user;
if no restrictions are defined, all accessible documents are listed.
The granularity therefore influences the number of sub-folders
which will be created for the partitioning of the documents.
As the user enters the tree structure of the dynamic folder, he can
navigate through the branches to arrive at the document(s) he is
looking for. The directory names indicate which meta data
determines the content of the sub-folder in question. At each level,
the documents, which have so far been found to match the meta
data, can be inspected.
Table 1. Defining dynamic folders (example)
Level 1: Meta data: Creator. Restriction: only show documents which have been created by the users Leone, Hodel or Gall. Granularity: one folder per creator.
Level 2: Meta data: Current location. Restriction: only show documents which were read at my current location. Granularity: one folder per task status.
Level 3: Meta data: Authors. Restriction: only show documents where at least 40% was written by user 'Leone'. Granularity: one folder per 20%.
Ad-hoc changes of granularity and restrictions are possible in order
to maximize search comfort for the user. It is possible to predefine
dynamic folders for frequent use, e.g. a location-based folder, as
well as to create and modify dynamic folders on an ad-hoc basis.
Furthermore, the content of such dynamic folders can change from
one second to another, depending on the changes made by other
users at that moment.
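One possible reading of such a dynamic folder definition in code: each level contributes a restriction predicate and a grouping key (the granularity), and the documents are partitioned recursively. The document records and the concrete keys below are invented for illustration and do not correspond to the actual TeNDaX meta data tables.

```python
docs = [
    {"title": "report.tex", "creator": "Leone", "authors": {"Leone": 0.7, "Hodel": 0.3}},
    {"title": "notes.txt",  "creator": "Hodel", "authors": {"Hodel": 1.0}},
    {"title": "draft.doc",  "creator": "Gall",  "authors": {"Leone": 0.5, "Gall": 0.5}},
]

levels = [
    # each level: (restriction predicate, grouping key -> one sub-folder per value)
    (lambda d: d["creator"] in {"Leone", "Hodel", "Gall"}, lambda d: d["creator"]),
    (lambda d: d["authors"].get("Leone", 0) >= 0.4,
     lambda d: "Leone >= %d%%" % (int(d["authors"].get("Leone", 0) * 100) // 20 * 20)),
]

def dynamic_folder(documents, remaining_levels, path=()):
    """Recursively partition documents into sub-folders driven by meta data."""
    if not remaining_levels:
        return {path: [d["title"] for d in documents]}
    restrict, group = remaining_levels[0]
    buckets = {}
    for d in documents:
        if restrict(d):
            buckets.setdefault(group(d), []).append(d)
    tree = {}
    for folder, members in buckets.items():
        tree.update(dynamic_folder(members, remaining_levels[1:], path + (folder,)))
    return tree

for folder_path, titles in dynamic_folder(docs, levels).items():
    print("/".join(folder_path), "->", titles)
```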
3. VALIDATION
The proposed architecture is validated with the example of a character
insertion. Insert operations are the most frequently used operations in a
(collaborative) editing system. The character insertion is based on
the TeNDaX Insert Algorithm which is formally described in the
following. The algorithm is simplified for this purpose.
3.1 Insert Characters Algorithm
The symbol c stands for the object character, p stands for the
previous character, n stands for the next character of a character
object c and the symbol l stands for a list of character objects.
c = character
p=previous character
n = next character
l = list of characters
The symbol c1 stands for the first character in the list l, ci stands
for a character in the list l at the position i, whereas i is a value
between 1 and the length of the list l, and cn stands for the last
character in the list l.
c1 = first character in list l
ci = character at position i in list l
cn = last character in list l
The symbol β stands for the special character that marks the
beginning of a document and ε stands for the special character
that marks the end of a document.
β=beginning of document
ε=end of document
The function startTA starts a transaction.
startTA = start transaction
The function commitTA commits a transaction that was started.
commitTA = commit transaction
The function checkWriteAccess checks if the write access for a
document session s is granted.
checkWriteAccess(s) = check if write access for document session
s is granted
The function lock acquires an exclusive lock for a character c and
returns 1 for a success and 0 for no success.
lock(c) = acquire the lock for character c
success : return 1, no success : return 0
The function releaseLocks releases all locks that a transaction has
acquired so far.
releaseLocks = release all locks
The function getPrevious returns the previous character and
getNext returns the next character of a character c.
getPrevious(c) = return previous character of character c
getNext(c) = return next character of character c
The function linkBefore links a preceding character p with a
succeeding character x and the function linkAfter links a
succeeding character n with a preceding character y.
linkBefore(p,x) = link character p to character x
linkAfter(n,y) = link character n to character y
The function updateString links a character p with the first
character c1 of a character list l and a character n with the last
character cn of a character list l.
updateString(l, p, n) = linkBefore(p, c1) ∧ linkAfter(n, cn)
The function insertChar inserts a character c in the table Char
with the fields After set to a character p and Before set to a
character n.
insertChar(c, p, n) = linkAfter(c,p) ∧ linkBefore(c,n) ∧
linkBefore(p,c) ∧ linkAfter(n,c)
The function checkPreceding determines the previous character's
CharacterValue of a character c and if the previous character's
status is active.
checkPreceding(c) = return status and CharacterValue of the
previous character
The function checkSucceeding determines the next character's
CharacterValue of a character c and if the next character's status is
active.
checkSucceeding(c) = return status and CharacterValue of the
next character
The function checkCharValue determines the CharacterValue of a
character c.
checkCharValue(c) = return CharacterValue of character c
The function sendUpdate sends an update message
(UpdateMessage) from the database to the real-time server
component.
sendUpdate(UpdateMessage)
The function Read is used in the real-time server component to
read the UpdateMessage.
Read(UpdateMessage)
The function AllocateEditors checks, on the basis of the
UpdateMessage and the SessionManager, which editors have to
be informed.
AllocateEditors(UpdateMessage, SessionManager) =
returns the affected editors
The function SendMessage(EditorData) sends the editor part of
the UpdateMessage to the editors
SendMessage(EditorData)
In TeNDaX, the Insert Algorithm is implemented in the class
method InsertChars of the class Char which is depicted in Figure
2. The relevant parameters for the definitions beneath, are
introduced in the following list:
- nextCharacterOID: OID of the character situated next to the
string to be inserted
- previousCharacterOID: OID of the character situated
previously to the string to be inserted
- characterOIDs (List): List of character which have to be
inserted
Thus, the insertion of characters can be defined stepwise as
follows:
Start a transaction.
startTA
Select the character that is situated before the character that
follows the string to be inserted.
getPrevious(nextCharacterOID) = PrevChar(prevCharOID) ⇐
Π After (σ OID = nextCharacterOID (Char))
Acquire the lock for the character that is situated in the document
before the character that follows the string which shall be inserted.
lock(prevCharOID)
At this time the list characterOIDs contains the characters c1 to cn
that shall be inserted.
characterOIDs={ c1, …, cn }
Each character of the string is inserted at the appropriate position
by linking the preceding and the succeeding character to it.
For each character ci of characterOIDs:
insertChar(ci, p, n)
where ci ∈ { c1, …, cn }
Check if the preceding and succeeding characters are active or if it
is the beginning or the end of the document.
checkPreceding(prevCharOID) = IsOK(IsActive,
CharacterValue) ⇐ Π IsActive, CharacterValue (σ OID =
prevCharOID (Char))
checkSucceeding(nextCharacterOID) = IsOK(IsActive,
CharacterValue) ⇐ Π IsActive, CharacterValue (σ OID =
nextCharacterOID (Char))
Update characters before and after the string to be inserted.
updateString(characterOIDs, prevCharOID, nextCharacterOID)
Release all locks and commit Transaction.
releaseLocks
commitTA
Send update information to the real-time server component
sendUpdate(UpdateMessage)
Read update message and inform affected editors of the change
Read(UpdateMessage)
AllocateEditors(UpdateMessage, SessionManager)
SendMessage(EditorData)
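The following compressed Python sketch mirrors these steps on an in-memory character table; locking, transactions and update propagation are reduced to stubs, and all identifiers are ours rather than the actual TeNDaX implementation.

```python
chars = {                       # a tiny Char table: OID -> {value, before, after}
    "BOF": {"value": "", "before": None, "after": "EOF"},
    "EOF": {"value": "", "before": "BOF", "after": None},
}
locks, oid_counter = set(), 0

def lock(oid):                  # acquire an exclusive lock; 1 = success, 0 = failure
    if oid in locks:
        return 0
    locks.add(oid)
    return 1

def insert_chars(next_oid, values):
    """startTA ... commitTA, reduced to its data-structure effects."""
    global oid_counter
    prev_oid = chars[next_oid]["before"]        # getPrevious(nextCharacterOID)
    assert lock(prev_oid) == 1                  # lock(prevCharOID)
    new_oids = []
    for value in values:                        # insertChar(ci, p, n) for each ci
        oid_counter += 1
        oid = "c%d" % oid_counter
        chars[oid] = {"value": value, "before": prev_oid, "after": next_oid}
        chars[prev_oid]["after"] = oid
        chars[next_oid]["before"] = oid
        new_oids.append(oid)
        prev_oid = oid
    # updateString(l, p, n): ensure the surrounding characters point at the ends
    chars[chars[new_oids[0]]["before"]]["after"] = new_oids[0]
    chars[next_oid]["before"] = new_oids[-1]
    locks.clear()                               # releaseLocks, commitTA
    return {"type": "u", "inserted": new_oids}  # sendUpdate(UpdateMessage)

print(insert_chars("EOF", list("Hi")))
oid, text = chars["BOF"]["after"], []
while oid != "EOF":
    text.append(chars[oid]["value"])
    oid = chars[oid]["after"]
print("".join(text))  # -> Hi
```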
3.2 Insert Characters Example
Figure 1 gives a snapshot of the system, i.e. of its architecture: four
databases are distributed over a peer-to-peer network. Each
database is connected to an application server (AS) and each
application server is connected to a real-time server component
(RTSC). Editors are connected to one or more real-time server
components and to the corresponding databases.
Consider that editor A (connected to databases 1 and 4) and
editor B (connected to databases 1 and 2) are working on the same
document stored in database 1. Editor B now inserts a character
into this document. The insert operation is passed to application
server 1, which in turn passes it to database 1, where an
insert operation is invoked; the characters are inserted according
to the algorithm discussed in the previous section. After the
insertion, database 1 sends an update message (according to the
update protocol discussed before) to real-time server component 1
(via AS 1). RTSC 1 combines the received update information
with the information in its SessionManager and sends the editor
data to the affected editors, in this case to editors A and B, where
the changes are immediately shown.
Collaboration conflicts that occur are resolved as described in [3].
4. SUMMARY
With the approach presented in this paper and the implemented
prototype, we offer real-time collaborative editing and management
of documents stored in a special way in a database. With this
approach we provide security, consistency and availability of
documents and consequently offer pervasive document editing and
management. Pervasive document editing and management is
enabled due to the proposed architecture with the embedded
real-time server component, which propagates changes to a document
immediately and consequently offers up-to-date documents.
Document editing and managing is consequently enabled anywhere,
anytime and with any device.
The above-described system is implemented in a running prototype.
The system will soon be tested in a student workshop next
autumn.
REFERENCES
[1] Abiteboul, S., Agrawal, R., et al.: The Lowell Database
Research Self Assessment. Massachusetts, USA, 2003.
[2] Hodel, T. B., Businger, D., and Dittrich, K. R.: Supporting
Collaborative Layouting in Word Processing. IEEE
International Conference on Cooperative Information
Systems (CoopIS), Larnaca, Cyprus, IEEE, 2004.
[3] Hodel, T. B. and Dittrich, K. R.: "Concept and prototype of a
collaborative business process environment for document
processing." Data & Knowledge Engineering 52, Special
Issue: Collaborative Business Process Technologies, (1):
61-120, 2005.
[4] Hodel, T. B., Dubacher, M., and Dittrich, K. R.: Using
Database Management Systems for Collaborative Text
Editing. ACM European Conference of
Computer-Supported Cooperative Work (ECSCW CEW 2003),
Helsinki, Finland, 2003.
[5] Hodel, T. B., Gall, H., and Dittrich, K. R.: Dynamic
Collaborative Business Processes within Documents. ACM
Special Interest Group on Design of Communication
(SIGDOC) , Memphis, USA, 2004.
[6] Hodel, T. B., R. Hacmac, and Dittrich, K. R.: Using Text
Editing Creation Time Meta Data for Document
Management. Conference on Advanced Information
Systems Engineering (CAiSE'05), Porto, Portugal, Springer
Lecture Notes, 2005.
[7] Hodel, T. B., Specker, F. and Dittrich, K. R.: Embedded
SOAP Server on the Operating System Level for ad-hoc
Automatic Real-Time Bidirectional Communication.
Information Resources Management Association (IRMA),
San Diego, USA, 2005.
[8] O'Kelly, P.: Revolution in Real-Time Communication and
Collaboration: For Real This Time. Application Strategies:
In-Depth Research Report. Burton Group, 2005.
47 | collaborative layouting;restriction;hierarchical file system;computer support collaborative work;cscw;computer supported collaborative work;real-time server component;text editing;collaborative document;pervasive document edit and management system;real-time transaction;character insertion;granularity;pervasive document editing and managing system;collaborative document processing;business logic layer |
train_C-84 | Selfish Caching in Distributed Systems: A Game-Theoretic Analysis | We analyze replication of resources by server nodes that act selfishly, using a game-theoretic approach. We refer to this as the selfish caching problem. In our model, nodes incur either cost for replicating resources or cost for access to a remote replica. We show the existence of pure strategy Nash equilibria and investigate the price of anarchy, which is the relative cost of the lack of coordination. The price of anarchy can be high due to undersupply problems, but with certain network topologies it has better bounds. With a payment scheme the game can always implement the social optimum in the best case by giving servers incentive to replicate. | 1. INTRODUCTION
Wide-area peer-to-peer file systems [2,5,22,32,33], peer-to-peer
caches [15, 16], and web caches [6, 10] have become popular over
the last few years. Caching1
of files in selected servers is widely
used to enhance the performance, availability, and reliability of
these systems. However, most such systems assume that servers
cooperate with one another by following protocols optimized for
overall system performance, regardless of the costs incurred by
each server.
In reality, servers may behave selfishly - seeking to maximize
their own benefit. For example, parties in different
administrative domains utilize their local resources (servers) to better
support clients in their own domains. They have obvious incentives to
cache objects2
that maximize the benefit in their domains, possibly
at the expense of globally optimum behavior. It has been an open
question whether these caching scenarios and protocols maintain
their desirable global properties (low total social cost, for example)
in the face of selfish behavior.
In this paper, we take a game-theoretic approach to analyzing
the problem of caching in networks of selfish servers through
theoretical analysis and simulations. We model selfish caching as a
non-cooperative game. In the basic model, the servers have two
possible actions for each object. If a replica of a requested object
is located at a nearby node, the server may be better off accessing
the remote replica. On the other hand, if all replicas are located too
far away, the server is better off caching the object itself. Decisions
about caching the replicas locally are arrived at locally, taking into
account only local costs. We also define a more elaborate payment
model, in which each server bids for having an object replicated at
another site. Each site now has the option of replicating an object
and collecting the related bids. Once all servers have chosen a
strategy, each game specifies a configuration, that is, the set of servers
that replicate the object, and the corresponding costs for all servers.
Game theory predicts that such a situation will end up in a Nash
equilibrium, that is, a set of (possibly randomized) strategies with
the property that no player can benefit by changing its strategy
while the other players keep their strategies unchanged [28].
Foundational considerations notwithstanding, it is not easy to accept
randomized strategies as the behavior of rational agents in a
distributed system (see [28] for an extensive discussion) - but this
is what classical game theory can guarantee. In certain very
fortunate situations, however (see [9]), the existence of pure (that is,
deterministic) Nash equilibria can be predicted.
With or without randomization, however, the lack of
coordination inherent in selfish decision-making may incur costs well
beyond what would be globally optimum. This loss of efficiency is
1 We will use caching and replication interchangeably.
2 We use the term object as an abstract entity that represents files
and other data objects.
quantified by the price of anarchy [21]. The price of anarchy is
the ratio of the social (total) cost of the worst possible Nash
equilibrium to the cost of the social optimum. The price of anarchy
bounds the worst possible behavior of a selfish system, when left
completely on its own. However, in reality there are ways whereby
the system can be guided, through seeding or incentives, to a
preselected Nash equilibrium. This optimistic version of the price of
anarchy [3] is captured by the smallest ratio between a Nash
equilibrium and the social optimum.
In this paper we address the following questions:
• Do pure strategy Nash equilibria exist in the caching game?
• If pure strategy Nash equilibria do exist, how efficient are
they (in terms of the price of anarchy, or its optimistic
counterpart) under different placement costs, network topologies,
and demand distributions?
• What is the effect of adopting payments? Will the Nash
equilibria be improved?
We show that pure strategy Nash equilibria always exist in the
caching game. The price of anarchy of the basic game model can
be O(n), where n is the number of servers; the intuitive reason is
undersupply. Under certain topologies, the price of anarchy does
have tighter bounds. For complete graphs and stars, it is O(1). For
D-dimensional grids, it is O(n^(D/(D+1))). Even the optimistic price of
anarchy can be O(n). In the payment model, however, the game
can always implement a Nash equilibrium that is the same as the social
optimum, so the optimistic price of anarchy is one.
Our simulation results show several interesting phases. As the
placement cost increases from zero, the price of anarchy increases.
When the placement cost first exceeds the maximum distance
between servers, the price of anarchy is at its highest due to
undersupply problems. As the placement cost further increases, the price
of anarchy decreases, and the effect of replica misplacement
dominates the price of anarchy.
The rest of the paper is organized as follows. In Section 2 we
discuss related work. Section 3 discusses details of the basic game and
analyzes the bounds of the price of anarchy. In Section 4 we discuss
the payment game and analyze its price of anarchy. In Section 5 we
describe our simulation methodology and study the properties of
Nash equilibria observed. We discuss extensions of the game and
directions for future work in Section 6.
2. RELATED WORK
There has been considerable research on wide-area peer-to-peer
file systems such as OceanStore [22], CFS [5], PAST [32],
FARSITE [2], and Pangaea [33], web caches such as NetCache [6] and
SummaryCache [10], and peer-to-peer caches such as Squirrel [16].
Most of these systems use caching for performance, availability,
and reliability. The caching protocols assume obedience to the
protocol and ignore participants' incentives. Our work starts from the
assumption that servers are selfish and quantifies the cost of the
lack of coordination when servers behave selfishly.
The placement of replicas in the caching problem is the most
important issue. There is much work on the placement of web
replicas, instrumentation servers, and replicated resources. All
protocols assume obedience and ignore participants' incentives. In [14],
Gribble et al. discuss the data placement problem in peer-to-peer
systems. Ko and Rubenstein propose a self-stabilizing, distributed
graph coloring algorithm for the replicated resource placement [20].
Chen, Katz, and Kubiatowicz propose a dynamic replica
placement algorithm exploiting underlying distributed hash tables [4].
Douceur and Wattenhofer describe a hill-climbing algorithm to
exchange replicas for reliability in FARSITE [8]. RaDar is a
system that replicates and migrates objects for an Internet hosting
service [31]. Tang and Chanson propose a coordinated en-route web
caching that caches objects along the routing path [34].
Centralized algorithms for the placement of objects, web proxies, mirrors,
and instrumentation servers in the Internet have been studied
extensively [18,19,23,30].
The facility location problem has been widely studied as a
centralized optimization problem in theoretical computer science and
operations research [27]. Since the problem is NP-hard,
approximation algorithms based on primal-dual techniques, greedy
algorithms, and local search have been explored [17, 24, 26]. Our
caching game is different from all of these in that the optimization
process is performed among distributed selfish servers.
There is little research in non-cooperative facility location games,
as far as we know. Vetta [35] considers a class of problems where
the social utility is submodular (submodularity means decreasing
marginal utility). In the case of competitive facility location among
corporations he proves that any Nash equilibrium gives an expected
social utility within a factor of 2 of optimal plus an additive term
that depends on the facility opening cost. Their results are not
directly applicable to our problem, however, because we consider
each server to be tied to a particular location, while in their model
an agent is able to open facilities in multiple locations. Note that in
that paper the increase of the price of anarchy comes from
oversupply problems due to the fact that competing corporations can open
facilities at the same location. On the other hand, the significant
problems in our game are undersupply and misplacement.
In a recent paper, Goemans et al. analyze content distribution on
ad-hoc wireless networks using a game-theoretic approach [12]. As
in our work, they provide monetary incentives to mobile users for
caching data items, and provide tight bounds on the price of
anarchy and speed of convergence to (approximate) Nash equilibria.
However, their results are incomparable to ours because their
payoff functions neglect network latencies between users, they
consider multiple data items (markets), and each node has a limited
budget to cache items.
Cost sharing in the facility location problem has been studied
using cooperative game theory [7, 13, 29]. Goemans and Skutella
show strong connections between fair cost allocations and linear
programming relaxations for facility location problems [13]. Pál
and Tardos develop a method for cost-sharing that is approximately
budget-balanced and group strategyproof and show that the method
recovers 1/3 of the total cost for the facility location game [29].
Devanur, Mihail, and Vazirani give a strategyproof cost allocation
for the facility location problem, but cannot achieve group
strategyproofness [7].
3. BASIC GAME
The caching problem we study is to find a configuration that
meets certain objectives (e.g., minimum total cost). Figure 1 shows
examples of caching among four servers. In network (a), A stores
an object. Suppose B wants to access the object. If it is cheaper
to access the remote replica than to cache it, B accesses the remote
replica as shown in network (b). In network (c), C wants to access
the object. If C is far from A, C caches the object instead of
accessing the object from A. It is possible that in an optimal configuration
it would be better to place replicas in A and B. Understanding the
placement of replicas by selfish servers is the focus of our study.
The caching problem is abstracted as follows. There is a set N of
n servers and a set M of m objects. The distance between servers
can be represented as a distance matrix D (i.e., dij is the distance
Figure 1: Caching. There are four servers labeled A, B, C, and D. The rectangles are object replicas. In (a), A stores an object. If B incurs less cost
accessing A's replica than it would caching the object itself, it accesses the object from A as in (b). If the distance cost is too high, the server caches
the object itself, as C does in (c). This figure is an example of our caching game model.
from server i to server j). D models an underlying network
topology. For our analysis we assume that the distances are symmetric
and the triangle inequality holds on the distances (for all servers
i, j, k: dij + djk ≥ dik). Each server has demand from clients
that is represented by a demand matrix W (i.e., wij is the demand
of server i for object j). When a server caches objects, the server
incurs some placement cost that is represented by a matrix α (i.e.,
αij is a placement cost of server i for object j).
In this study, we assume that servers have no capacity limit. As
we discuss in the next section, this fact means that the caching
behavior with respect to each object can be examined separately.
Consequently, we can talk about configurations of the system with
respect to a given object:
DEFINITION 1. A configuration X for some object O is the set
of servers replicating this object.
The goal of the basic game is to find configurations that are achieved
when servers optimize their cost functions locally.
3.1 Game Model
We take a game-theoretic approach to analyzing the
uncapacitated caching problem among networked selfish servers. We model
the selfish caching problem as a non-cooperative game with n
players (servers/nodes) whose strategies are sets of objects to cache. In
the game, each server chooses a pure strategy that minimizes its
cost. Our focus is to investigate the resulting configuration, which
is the Nash equilibrium of the game. It should be emphasized that
we consider only pure strategy Nash equilibria in this paper.
The cost model is an important part of the game. Let Ai be the
set of feasible strategies for server i, and let Si ∈ Ai be the strategy
chosen by server i. Given a strategy profile S = (S1, S2, ..., Sn),
the cost incurred by server i is defined as:
Ci(S) = Σj∈Si αij + Σj∉Si wij di,σ(i,j)    (1)
where αij is the placement cost of object j, wij is the demand that
server i has for object j, σ(i, j) is the closest server to i that caches
object j, and dik is the distance between i and k. When no server
caches the object, we define the distance cost di,σ(i,j) to be dM, large
enough that at least one server will choose to cache the object.
The placement cost can be further divided into first-time
installation cost and maintenance cost:
αij = k1i + k2i · (UpdateSizej / ObjectSizej) · (1/T) · Pj · Σk wkj    (2)
where k1i is the installation cost, k2i is the relative weight
between the maintenance cost and the installation cost, Pj is the
ratio of the number of writes over the number of reads and writes,
UpdateSizej is the size of an update, ObjectSizej is the size of
the object, and T is the update period. We see tradeoffs between
different parameters in this equation. For example, placing replicas
becomes more expensive as UpdateSizej increases, Pj increases,
or T decreases. However, note that by varying αij itself we can
capture the full range of behaviors in the game. For our analysis,
we use only αij .
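As a small illustration, the sketch below evaluates this placement cost following the reconstructed form of Equation 2; all parameter values are invented.

```python
def placement_cost(k1, k2, update_size, object_size, p_write, period, demands):
    """alpha_ij = k1_i + k2_i * (UpdateSize_j/ObjectSize_j) * (1/T) * P_j * sum_k w_kj"""
    return k1 + k2 * (update_size / object_size) * (1.0 / period) * p_write * sum(demands)

# Replicas get more expensive with larger updates, a higher write ratio,
# or a shorter update period:
print(placement_cost(k1=1.0, k2=0.5, update_size=10, object_size=100,
                     p_write=0.2, period=3600, demands=[5, 3, 2]))
```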
Since there is no capacity limit on servers, we can look at each
single object as a separate game and combine the pure strategy
equilibria of these games to obtain a pure strategy equilibrium of
the multi-object game. Fabrikant, Papadimitriou, and Talwar
discuss this existence argument: if two games are known to have pure
equilibria, and their cost functions are cross-monotonic, then their
union is also guaranteed to have pure Nash equilibria, by a
continuity argument [9]. A Nash equilibrium for the multi-object game is
the cross product of Nash equilibria for single-object games.
Therefore, we can focus on the single object game in the rest of this paper.
For single object selfish caching, each server i has two strategies
- to cache or not to cache. The object under consideration is j.
We define Si to be 1 when server i caches j and 0 otherwise. The
cost incurred by server i is
Ci(S) = αij Si + wij di,σ(i,j) (1 − Si).    (3)
We refer to this game as the basic game. The extent to which Ci(S)
represents actual cost incurred by server i is beyond the scope of
this paper; we will assume that an appropriate cost function of the
form of Equation 3 can be defined.
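For illustration, the following sketch evaluates the per-node cost of Equation 3 and runs best-response dynamics on a toy instance. The toy distances, the BIG constant standing in for dM, and the best-response procedure itself are our own illustration of the model, not code from the paper.

```python
# Single-object basic game: per-node cost (Equation 3) and best-response dynamics.

BIG = 10**9   # stands in for d_M when nobody caches the object

def cost(i, S, alpha, w, d):
    if S[i]:
        return alpha[i]
    dist = min((d[i][j] for j in range(len(S)) if S[j]), default=BIG)
    return w[i] * dist

def best_response_equilibrium(alpha, w, d):
    n = len(alpha)
    S = [False] * n
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for choice in (True, False):
                trial = S[:]
                trial[i] = choice
                if cost(i, trial, alpha, w, d) < cost(i, S, alpha, w, d):
                    S, changed = trial, True
    return S

# Two clusters of two nodes each, placement cost 3, inter-cluster distance 2:
d = [[0, 0, 2, 2], [0, 0, 2, 2], [2, 2, 0, 0], [2, 2, 0, 0]]
S = best_response_equilibrium(alpha=[3, 3, 3, 3], w=[1, 1, 1, 1], d=d)
print(S, sum(cost(i, S, [3] * 4, [1] * 4, d) for i in range(4)))
# -> [True, False, False, False] 7 : a single replica, as in Figure 2(a)
```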
3.2 Nash Equilibrium Solutions
In principle, we can start with a random configuration and let
this configuration evolve as each server alters its strategy and
attempts to minimize its cost. Game theory is interested in stable
solutions called Nash equilibria. A pure strategy Nash equilibrium
is reached when no server can benefit by unilaterally changing its
strategy. A Nash equilibrium3 (S*i, S*−i) for the basic game
specifies a configuration X such that ∀i ∈ N, i ∈ X ⇔ S*i = 1.
Thus, we can consider a set E of all pure strategy Nash equilibrium
configurations:
X ∈ E ⇔ ∀i ∈ N, ∀Si ∈ Ai: Ci(S*i, S*−i) ≤ Ci(Si, S*−i)    (4)
By this definition, no server has incentive to deviate in the
configurations since it cannot reduce its cost.
For the basic game, we can easily see that:
X ∈ E ⇔ ∀i ∈ N, ∃j ∈ X s.t. dji ≤ α
and ∀j ∈ X, ¬∃k ∈ X s.t. dkj < α
(5)
The first condition guarantees that there is a server that places the
replica within distance α of each server i. If the replica is not placed
3 The notation for strategy profile (S*i, S*−i) separates node i's strategy (S*i) from the strategies of other nodes (S*−i).
Figure 2: Potential inefficiency of Nash equilibria illustrated by two clusters of n/2 servers. The intra-cluster distances are all zero and the distance
between clusters is α − 1, where α is the placement cost. The dark nodes replicate the object. Network (a) shows a Nash equilibrium in the basic
game, where one server in a cluster caches the object. Network (b) shows the social optimum where two replicas, one for each cluster, are placed. The
price of anarchy is O(n) and even the optimistic price of anarchy is O(n). This high price of anarchy comes from the undersupply of replicas due to
the selfish nature of servers. Network (c) shows a Nash equilibrium in the payment game, where two replicas, one for each cluster, are placed. Each
light node in each cluster pays 2/n to the dark node, and the dark node replicates the object. Here, the optimistic price of anarchy is one.
at i, then it is placed at another server within distance α of i, so i has
no incentive to cache. If the replica is placed at i, then the second
condition ensures there is no incentive to drop the replica because
no two servers separated by distance less than α both place replicas.
3.3 Social Optimum
The social cost of a given strategy profile is defined as the total
cost incurred by all servers, namely:
C(S) = Σi=0,…,n−1 Ci(S)    (6)
where Ci(S) is the cost incurred by server i given by Equation 1.
The social optimum cost, referred to as C(SO) for the remainder
of the paper, is the minimum social cost. The social optimum cost
will serve as an important base case against which to measure the
cost of selfish caching. We define C(SO) as:
C(SO) = minS C(S)    (7)
where S varies over all possible strategy profiles. Note that in the
basic game, this means varying configuration X over all possible
configurations. In some sense, C(SO) represents the best possible
caching behavior - if only nodes could be convinced to cooperate
with one another.
The social optimum configuration is a solution of a mini-sum
facility location problem, which is NP-hard [11]. To find such
configurations, we formulate an integer programming problem:
minimize    Σi Σj [ αij xij + Σk wij dik yijk ]
subject to  Σk yijk = I(wij)     ∀i, j
            xij − ykji ≥ 0       ∀i, j, k
            xij ∈ {0, 1}         ∀i, j
            yijk ∈ {0, 1}        ∀i, j, k        (8)
Here, xij is 1 if server i replicates object j and 0 otherwise; yijk
is 1 if server i accesses object j from server k and 0 otherwise;
I(w) returns 1 if w is nonzero and 0 otherwise. The first constraint
specifies that if server i has demand for object j, then it must access
j from exactly one server. The second constraint ensures that server
i replicates object j if any other server accesses j from i.
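For the single-object case, the social optimum can also be found by brute-force enumeration of configurations, which mirrors the objective of program (8) but is exponential in n and therefore only feasible for very small instances. The sketch below is this specialization, not the ILP used in the paper.

```python
# Brute-force computation of C(SO) for the single-object game.

from itertools import combinations

def social_cost(X, alpha, w, d):
    n = len(alpha)
    total = sum(alpha[i] for i in X)
    for i in range(n):
        if i not in X and w[i] > 0:
            total += w[i] * min(d[i][j] for j in X)
    return total

def social_optimum(alpha, w, d):
    n = len(alpha)
    best = (float("inf"), None)
    for k in range(1, n + 1):
        for X in combinations(range(n), k):
            best = min(best, (social_cost(set(X), alpha, w, d), X))
    return best

d = [[0, 0, 2, 2], [0, 0, 2, 2], [2, 2, 0, 0], [2, 2, 0, 0]]
print(social_optimum(alpha=[3, 3, 3, 3], w=[1, 1, 1, 1], d=d))  # -> (6, (0, 2))
```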
3.4 Analysis
To analyze the basic game, we first give a proof of the existence
of pure strategy Nash equilibria. We discuss the price of anarchy in
general and then on specific underlying topologies. In this analysis
we use simply α in place of αij , since we deal with a single object
and we assume placement cost is the same for all servers. In
addition, when we compute the price of anarchy, we assume that all
nodes have the same demand (i.e., ∀i ∈ N wij = 1).
THEOREM 1. Pure strategy Nash equilibria exist in the basic
game.
PROOF. We show a constructive proof. First, initialize the set
V to N. Then, remove all nodes with zero demand from V . Each
node x defines βx, where βx = α / wxj. Furthermore, let Z(y) =
{z : dzy ≤ βz, z ∈ V }; Z(y) represents all nodes z for which y
lies within βz of z.
Pick a node y ∈ V such that βy ≤ βx for all x ∈ V . Place a
replica at y and then remove y and all z ∈ Z(y) from V . No such z
can have incentive to replicate the object because it can access y's
replica at lower (or equal) cost. Iterate this process of placing
replicas until V is empty. Because at each iteration y is the remaining
node with minimum β, no replica will be placed within distance
βy of any such y by this process. The resulting configuration is a
pure-strategy Nash equilibrium of the basic game.
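The constructive argument translates directly into a simple procedure; the sketch below is our rendering of it for the single-object game with positive demands.

```python
# Constructive computation of a pure strategy Nash equilibrium: repeatedly place
# a replica at the remaining node with the smallest beta = alpha / w, then remove
# every node that can reach that replica within its own beta.

def constructive_equilibrium(alpha, w, d):
    V = {i for i in range(len(w)) if w[i] > 0}
    beta = {i: alpha / w[i] for i in V}
    replicas = set()
    while V:
        y = min(V, key=lambda i: beta[i])               # node with minimum beta
        replicas.add(y)                                 # place a replica at y
        covered = {z for z in V if d[z][y] <= beta[z]}  # Z(y), including y itself
        V -= covered
    return replicas

d = [[0, 0, 2, 2], [0, 0, 2, 2], [2, 2, 0, 0], [2, 2, 0, 0]]
print(constructive_equilibrium(alpha=3, w=[1, 1, 1, 1], d=d))  # e.g. {0}
```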
The Price of Anarchy (POA): To quantify the cost of lack of
coordination, we use the price of anarchy [21] and the optimistic
price of anarchy [3]. The price of anarchy is the ratio of the social
costs of the worst-case Nash equilibrium and the social optimum,
and the optimistic price of anarchy is the ratio of the social costs of
the best-case Nash equilibrium and the social optimum.
We show general bounds on the price of anarchy. Throughout
our discussion, we use C(SW ) to represent the cost of worst case
Nash equilibrium, C(SO) to represent the cost of social optimum,
and PoA to represent the price of anarchy, which is C(SW)/C(SO).
The worst case Nash equilibrium maximizes the total cost
under the constraint that the configuration meets the Nash condition.
Formally, we can define C(SW ) as follows.
C(SW) = maxX∈E ( α|X| + Σi minj∈X dij )    (9)
where minj∈X dij is the distance to the closest replica (including i
itself) from node i and X varies through Nash equilibrium
configurations.
Bounds on the Price of Anarchy: We show bounds of the price
of anarchy varying α. Let dmin = min(i,j)∈N×N, i≠j dij and
dmax = max(i,j)∈N×N dij. We see that if α ≤ dmin, PoA = 1
trivially, since every server caches the object for both Nash
equilibrium and social optimum.
Topology              PoA
Complete graph        1
Star                  ≤ 2
Line                  O(√n)
D-dimensional grid    O(n^(D/(D+1)))
Table 1: PoA in the basic game for specific topologies
When α > dmax, there is a transition
in Nash equilibria: since the placement cost is greater than any
distance cost, only one server caches the object and other servers
access it remotely. However, the social optimum may still place
multiple replicas. Since α ≤ C(SO) ≤ α + minj∈N Σi dij when α > dmax, we obtain
(α + maxj∈N Σi dij) / (α + minj∈N Σi dij) ≤ PoA ≤ (α + maxj∈N Σi dij) / α.
Note that depending on the underlying topology, even the lower
bound of PoA can be O(n). Finally, there is a transition when
α > maxj∈N Σi dij. In this case, PoA = (α + maxj∈N Σi dij) / (α + minj∈N Σi dij),
and it is upper bounded by 2.
Figure 2 shows an example of the inefficiency of a Nash
equilibrium. In the network there are two clusters of servers whose
size is n/2. The distance between the two clusters is α − 1, where α is
the placement cost. Figure 2(a) shows a Nash equilibrium where
one server in a cluster caches the object. In this case, C(SW) =
α + (α − 1)·(n/2), since all servers in the other cluster access the
remote replica. However, the social optimum places two replicas, one
for each cluster, as shown in Figure 2(b). Therefore, C(SO) = 2α.
PoA = (α + (α − 1)·(n/2)) / (2α), which is O(n). This bad price of anarchy
comes from an undersupply of replicas due to the selfish nature of
the servers. Note that all Nash equilibria have the same cost; thus
even the optimistic price of anarchy is O(n).
In Appendix A, we analyze the price of anarchy with specific
underlying topologies and show that PoA can have tighter bounds
than O(n) for the complete graph, star, line, and D-dimensional
grid. In these topologies, we set the distance between directly
connected nodes to one. We describe the case where α > 1, since
PoA = 1 trivially when α ≤ 1. A summary of the results is
shown in Table 1.
4. PAYMENT GAME
In this section, we present an extension to the basic game with
payments and analyze the price of anarchy and the optimistic price
of anarchy of the game.
4.1 Game Model
The new game, which we refer to as the payment game, allows
each player to offer a payment to another player to give the latter
incentive to replicate the object. The cost of replication is shared
among the nodes paying the server that replicates the object.
The strategy for each player i is specified by a triplet (vi, bi, ti) ∈
N × R+ × R+. vi specifies the player to whom i makes a bid,
bi ≥ 0 is the value of the bid, and ti ≥ 0 denotes a threshold
for payments beyond which i will replicate the object. In addition,
we use Ri to denote the total amount of bids received by a node i
(Ri =
Èj:vj =i bj).
A node i replicates the object if and only if Ri ≥ ti, that is, the
amount of bids it receives is greater than or equal to its threshold.
Let Ii denote the corresponding indicator variable, that is, Ii equals
1 if i replicates the object, and 0 otherwise. We make the rule that
if a node i makes a bid to another node j and j replicates the object,
then i must pay j the amount bi. If j does not replicate the object,
i does not pay j.
Given a strategy profile, the outcome of the game is the set of
tuples {(Ii, vi, bi, Ri)}. Ii tells us whether player i replicates the
object or not, bi is the payment player i makes to player vi, and
Ri is the total amount of bids received by player i. To compute
the payoffs given the outcome, we must now take into account the
payments a node makes, in addition to the placement costs and
access costs of the basic game.
By our rules, a server node i pays bi to node vi if vi replicates
the object, and receives a payment of Ri if it replicates the object
itself. Its net payment is biIvi − RiIi. The total cost incurred by
each node is the sum of its placement cost, access cost, and net
payment. It is defined as
Ci(S) = αij Ii + wij ( min_{k:Ik=1} dik ) (1 − Ii) + bi Ivi − Ri Ii.    (10)

The cost of the social optimum for the payment game is the same as that for the basic game, since the net payments cancel out.
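The sketch below (ours; it assumes unit demand unless w is supplied, d[i][i] = 0, and that at least one node replicates in the outcome) derives the outcome {(Ii, vi, bi, Ri)} from a strategy profile and evaluates the per-node cost of Eq. (10):

```python
def payment_outcome(strategy, d):
    """strategy[i] = (v_i, b_i, t_i); returns (I, R) where I[i] = 1 iff R_i >= t_i."""
    n = len(d)
    R = [sum(b for (v, b, _) in strategy if v == i) for i in range(n)]
    I = [1 if R[i] >= strategy[i][2] else 0 for i in range(n)]
    return I, R

def payment_cost(i, strategy, I, R, d, alpha, w=None):
    """Per-node cost: placement + access to the nearest replica + net payment (Eq. 10)."""
    n = len(d)
    w = w if w is not None else [1.0] * n            # unit demand by default
    v_i, b_i, _ = strategy[i]
    replicas = [k for k in range(n) if I[k] == 1]    # assumes at least one replica exists
    placement = alpha * I[i]
    access = w[i] * min(d[i][k] for k in replicas) * (1 - I[i])
    net_payment = b_i * I[v_i] - R[i] * I[i]         # pay only if v_i replicates; collect only if i does
    return placement + access + net_payment
```

Summing the net-payment terms over all nodes gives zero, which is why the cost of the social optimum is unchanged by payments, as noted above.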
4.2 Analysis
In analyzing the payment model, we first show that a Nash
equilibrium in the basic game is also a Nash equilibrium in the payment
game. We then present an important positive result: in the payment game, the socially optimal configuration can always be implemented by a Nash equilibrium. We know from the counterexample in Figure 2 that this is not guaranteed in the basic game. In this
analysis we use α to represent αij .
THEOREM 2. Any configuration that is a pure strategy Nash
equilibrium in the basic game is also a pure strategy Nash
equilibrium in the payment game. Therefore, the price of anarchy of the
payment game is at least that of the basic game.
PROOF. Consider any Nash equilibrium configuration in the
basic game. For each node i replicating the object, set its threshold ti
to 0; everyone else has threshold α. Also, for all i, bi = 0.
A node that replicates the object does not have incentive to change
its strategy: changing the threshold does not decrease its cost, and
it would have to pay at least α to access a remote replica or
incentivize a nearby node to cache. Therefore it is better off keeping its
threshold and bid at 0 and replicating the object.
A node that is not replicating the object can access the object
remotely at a cost less than or equal to α. Lowering its threshold does
not decrease its cost, since all bi are zero. The payment necessary
for another server to place a replica is at least α.
No player has incentive to deviate, so the current configuration
is a Nash equilibrium.
In fact, Appendix B shows that the PoA of the payment game
can be more than that of the basic game in a given topology.
Now let us look at what happens to the example shown in
Figure 2 in the best case. Suppose node B's neighbors each decide
to pay node B an amount 2/n. B does not have an incentive to
deviate, since accessing the remote replica does not decrease its
cost. The same argument holds for A because of symmetry in the
graph. Since no one has an incentive to deviate, the configuration is
a Nash equilibrium. Its total cost is 2α, the same as in the socially
optimal configuration shown in Figure 2(b). Next we prove that
indeed the payment game always has a strategy profile that
implements the socially optimal configuration as a Nash equilibrium. We
first present the following observation, which is used in the proof,
about thresholds in the payment game.
OBSERVATION 1. If node i replicates the object, j is the
nearest node to i among the other nodes that replicate the object, and
dij < α in a Nash equilibrium, then i should have a threshold at
least (α − dij). Otherwise, it cannot collect enough payment to
compensate for the cost of replicating the object and is better off
accessing the replica at j.
THEOREM 3. In the payment game, there is always a pure
strategy Nash equilibrium that implements the social optimum
configuration. The optimistic price of anarchy in the payment game is
therefore always one.
PROOF. Consider the socially optimal configuration φopt. Let
No be the set of nodes that replicate the object and Nc = N − No
be the rest of the nodes. Also, for each i in No, let Qi denote the
set of nodes that access the object from i, not including i itself. In
the socially optimal configuration, dij ≤ α for all j in Qi.
We want to find a set of payments and thresholds that makes this
configuration implementable. The idea is to look at each node i in
No and distribute the minimum payment needed to make i replicate
the object among the nodes that access the object from i. For each
i in No, and for each j in Qi, we define
δj = min{α, min_{k∈No−{i}} djk} − dji    (11)

Note that δj is the difference between j's next best option (replicating the object or accessing some replica other than i) and j's cost for accessing the replica at i. It is clear that δj ≥ 0.
CLAIM 1. For each i ∈ No, let ℓ be the nearest node to i among the other nodes in No. Then Σ_{j∈Qi} δj ≥ α − diℓ.

PROOF. (of claim) Assume the contrary, that is, Σ_{j∈Qi} δj < α − diℓ. Consider the new configuration φnew wherein i does not replicate and each node in Qi chooses its next best strategy (either replicating or accessing the replica at some node in No − {i}). In addition, we still place replicas at each node in No − {i}. It is easy to see that the cost of φopt minus the cost of φnew is at least

(α + Σ_{j∈Qi} dij) − (diℓ + Σ_{j∈Qi} min{α, min_{k∈No−{i}} djk}) = α − diℓ − Σ_{j∈Qi} δj > 0,
which contradicts the optimality of φopt.
We set bids as follows. For each i in No, bi = 0, and for each j in Qi, j bids to i (i.e., vj = i) the amount:

bj = max{0, δj − εi/(|Qi| + 1)},  j ∈ Qi,    (12)

where εi = Σ_{j∈Qi} δj − α + diℓ ≥ 0 and |Qi| is the cardinality of Qi. For the thresholds, we have:

ti = α if i ∈ Nc;  ti = Σ_{j∈Qi} bj if i ∈ No.    (13)
This fully specifies the strategy profile of the nodes, and it is easy
to see that the outcome is indeed the socially optimal configuration.
Next, we verify that the strategies stipulated constitute a Nash
equilibrium. Having set ti to α for i in Nc means that any node
in N is at least as well off lowering its threshold and replicating
as bidding α to some node in Nc to make it replicate, so we may
disregard the latter as a profitable strategy. By Observation 1, to ensure that each i in No does not deviate, we require that if ℓ is the nearest node to i in No, then Σ_{j∈Qi} bj is at least (α − diℓ). Otherwise, i will raise ti above Σ_{j∈Qi} bj so that it does not replicate and instead accesses the replica at ℓ. We can easily check that

Σ_{j∈Qi} bj ≥ Σ_{j∈Qi} δj − |Qi| εi/(|Qi| + 1) = α − diℓ + εi/(|Qi| + 1) ≥ α − diℓ.
Figure 3: We present PoA, Ratio, and OPoA results for the basic game, varying α on a 100-node line topology, and we show the number of replicas placed by the Nash equilibria and by the optimal solution. We see large peaks in PoA and OPoA at α = 100, where a phase transition causes an abrupt transition in the lines.
Therefore, each node i ∈ No has no incentive to change ti, since it would either lose the payments it receives or see no change, and i has no incentive to change bi since it replicates the object. Each node j in Nc has no incentive to change tj, since changing tj does not reduce its cost. It also has no incentive to reduce bj: if it did, the node from which j accesses the object would no longer replicate, and j would have to replicate the object itself or access the next closest replica, which costs at least as much by the definition of bj. No player has an incentive to deviate, so this strategy profile is a Nash equilibrium.
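The construction in the proof can be transcribed directly; the sketch below (ours) assumes unit demand, symmetric distances with d[i][i] = 0, and |No| ≥ 2 so that every replica has a nearest other replica, and returns the bids and thresholds of Eqs. (11)-(13):

```python
def implement_social_optimum(No, d, alpha):
    """Bids and thresholds from the Theorem 3 construction (Eqs. 11-13).

    No: nodes replicating in the social optimum; assumes |No| >= 2, unit demand,
    and d[i][i] = 0.  Returns per-node (bid target v, bid b, threshold t).
    """
    n = len(d)
    nearest_rep = {j: min(No, key=lambda k: d[j][k]) for j in range(n)}
    Q = {i: [j for j in range(n) if j != i and nearest_rep[j] == i] for i in No}

    v = list(range(n))            # default: bid of 0 "to self"
    b = [0.0] * n
    t = [alpha] * n               # Eq. 13: threshold alpha for every node in Nc
    for i in No:
        delta = {j: min(alpha, min(d[j][k] for k in No if k != i)) - d[j][i]
                 for j in Q[i]}                                   # Eq. 11
        dist_l = min(d[i][k] for k in No if k != i)               # distance to the nearest other replica
        eps = sum(delta.values()) - alpha + dist_l                # >= 0 by Claim 1
        for j in Q[i]:
            v[j] = i
            b[j] = max(0.0, delta[j] - eps / (len(Q[i]) + 1))     # Eq. 12
        t[i] = sum(b[j] for j in Q[i])                            # Eq. 13: exactly the bids i receives
    return v, b, t
```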
5. SIMULATION
We run simulations to compare Nash equilibria for the
single-object caching game with the social optimum computed by solving
the integer linear program described in Equation 8 using Mosek [1].
We examine price of anarchy (PoA), optimistic price of anarchy
(OPoA), and the average ratio of the costs of Nash equilibria and
social optima (Ratio), and when relevant we also show the average
numbers of replicas placed by the Nash equilibrium (Replica(NE))
and the social optimum (Replica(SO)). The PoA and OPoA are
taken from the worst and best Nash equilibria, respectively, that we
observe over the runs. Each data point in our figures is based on
1000 runs, randomly varying the initial strategy profile and player
order. The details of the simulations including protocols and a
discussion of convergence are presented in Appendix C.
In our evaluation, we study the effects of variation in four
categories: placement cost, underlying topology, demand distribution,
and payments. As we vary the placement cost α, we directly
influence the tradeoff between caching and not caching. In order to get
a clear picture of the dependency of PoA on α in a simple case, we
first analyze the basic game with a 100-node line topology whose
edge distance is one.
We also explore transit-stub topologies generated using the
GT-ITM library [36] and power-law topologies (router-level Barabási-Albert model) generated using the BRITE topology generator [25].
For these topologies, we generate an underlying physical graph of
3050 physical nodes. Both topologies have similar minimum,
average, and maximum physical node distances. The average distance
is 0.42. We create an overlay of 100 server nodes and use the same
overlay for all experiments with the given topology.
In the game, each server has a demand whose distribution is
Bernoulli(p), where p is the probability of having demand for the
object; the default unless otherwise specified is p = 1.0.
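For reference, the three summary statistics are computed per instance roughly as in the sketch below (function and variable names are ours):

```python
def summarize_runs(nash_costs, optimal_cost):
    """PoA, OPoA, and Ratio for one instance, given the Nash equilibrium costs
    observed over many randomized runs and the ILP's optimal social cost."""
    poa = max(nash_costs) / optimal_cost    # worst observed equilibrium
    opoa = min(nash_costs) / optimal_cost   # best observed equilibrium
    ratio = sum(nash_costs) / (len(nash_costs) * optimal_cost)
    return poa, opoa, ratio
```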
Figure 4: Transit-stub topology: (a) basic game, (b) payment game. We show the PoA, Ratio, OPoA, and the number of replicas placed while varying α between 0 and 2 with 100 servers on a 3050-physical-node transit-stub topology.
Figure 5: Power-law topology: (a) basic game, (b) payment game. We show the PoA, Ratio, OPoA, and the number of replicas placed while varying α between 0 and 2 with 100 servers on a 3050-physical-node power-law topology.
5.1 Varying Placement Cost
Figure 3 shows PoA, OPoA, and Ratio, as well as number
of replicas placed, for the line topology as α varies. We observe
two phases. As α increases the PoA rises quickly to a peak at
100. After 100, there is a gradual decline. OPoA and Ratio show
behavior similar to PoA.
These behaviors can be explained by examining the number of
replicas placed by Nash equilibria and by optimal solutions. We see
that when α is above one, Nash equilibrium solutions place fewer
replicas than optimal on average. For example, when α is 100,
the social optimum places four replicas, but the Nash equilibrium
places only one. The peak in PoA at α = 100 occurs at the point
for a 100-node line where the worst-case cost of accessing a remote
replica is slightly less than the cost of placing a new replica, so
selfish servers will never place a second replica. The optimal solution,
however, places multiple replicas to decrease the high global cost
of access. As α continues to increase, the undersupply problem
lessens as the optimal solution places fewer replicas.
5.2 Different Underlying Topologies
In Figure 4(a) we examine an overlay graph on the more realistic
transit-stub topology. The trends for the PoA, OPoA, and Ratio
are similar to the results for the line topology, with a peak in PoA
at α = 0.8 due to maximal undersupply.
In Figure 5(a) we examine an overlay graph on the power-law
topology. We observe several interesting differences between the
power-law and transit-stub results. First, the PoA peaks at a lower
level in the power-law graph, around 2.3 (at α = 0.9) while the
peak PoA in the transit-stub topology is almost 3.0 (at α = 0.8).
After the peak, PoA and Ratio decrease more slowly as α
increases. OPoA is close to one for the whole range of α values.
This can be explained by the observation in Figure 5(a) that there
is no significant undersupply problem here like there was in the
transit-stub graph. Indeed the high PoA is due mostly to
misplacement problems when α is between 0.7 and 2.0, since there is little decrease in PoA when the number of replicas in the social optimum changes from two to one. The OPoA is equal to one in the figure when the same number of replicas is placed.
5.3 Varying Demand Distribution
Now we examine the effects of varying the demand distribution.
The set of servers with demand is random for p < 1, so we
calculate the expected PoA by averaging over 5 trials (each data point
is based on 5000 runs). We run simulations for demand levels of
p ∈ {0.2, 0.6, 1.0} as α is varied on the 100 servers on top of
the transit-stub graph. We observe that as demand falls, so does
expected PoA. As p decreases, the number of replicas placed in
the social optimum decreases, but the number in Nash equilibria
changes little. Furthermore, when α exceeds the overlay diameter,
the number in Nash equilibria stays constant when p varies.
Therefore, lower p leads to a lesser undersupply problem, agreeing with
intuition. We do not present the graph due to space limitations and
redundancy; the PoA for p = 1.0 is identical to PoA in Figure 4(a),
and the lines for p = 0.6 and p = 0.2 are similar but lower and flatter.
5.4 Effects of Payment
Finally, we discuss the effects of payments on the efficiency of
Nash equilibria. The results are presented in Figure 4(b) and
Figure 5(b). As shown in the analysis, the simulations achieve OPoA
close to one (it is not exactly one because of randomness in the
simulations). The Ratio for the payment game is much lower than
the Ratio for the basic game, since the protocol for the payment
game tends to explore good regions in the space of Nash
equilibria. We observe in Figure 4 that for α ≥ 0.4, the average number of replicas in Nash equilibria is closer to that of the social optimum with payments than without. We observe in Figure 5
that more replicas are placed with payments than without when α
is between 0.7 and 1.3, the only range of significant undersupply in
the power-law case. The results confirm that payments give servers
incentive to replicate the object and this leads to better equilibria.
6. DISCUSSION AND FUTURE WORK
We suggest several interesting extensions and directions. One
extension is to consider multiple objects in the capacitated caching
game, in which servers have capacity limits when placing objects.
Since caching one object affects the ability to cache another, there
is no separability of a multi-object game into multiple single object
games. As studied in [12], one way to formulate this problem is to
find the best response of a server by solving a knapsack problem
and to compute Nash equilibria.
In our analyses, we assume that all nodes have the same demand.
However, nodes could have different demand depending on objects.
We intend to examine the effects of heterogeneous demands (or
heterogeneous placement costs) analytically. We also want to look
at the following aggregation effect. Suppose there are n − 1
clustered nodes with distance of α−1 from a node hosting a replica.
All nodes have demands of one. In that case, the price of anarchy
is O(n). However, if we aggregate n − 1 nodes into one node with
demand n − 1, the price of anarchy becomes O(1), since α should
be greater than (n − 1)(α − 1) to replicate only one object. Such
aggregation can reduce the inefficiency of Nash equilibria.
We intend to compute the bounds of the price of anarchy under
different underlying topologies such as random graphs or
growth-restricted metrics. We want to investigate whether there are certain
distance constraints that guarantee O(1) price of anarchy. In
addition, we want to run large-scale simulations to observe the change
in the price of anarchy as the network size increases.
Another extension is to consider server congestion. Suppose the
distance is the network distance plus γ × (number of accesses)
where γ is an extra delay when an additional server accesses the
replica. Then, when α > γ, it can be shown that PoA is bounded by α/γ. As γ increases, the price of anarchy bound decreases, since
the load of accesses is balanced across servers.
While exploring the caching problem, we made several
observations that seem counterintuitive. First, the PoA in the payment
game can be worse than the PoA in the basic game. Another
observation we made was that the number of replicas in a Nash
equilibrium can be more than the number of replicas in the social
optimum even without payments. For example, a graph with diameter
slightly more than α may have a Nash equilibrium configuration
with two replicas at the two ends. However, the social optimum
may place one replica at the center. We leave the investigation of
more examples as an open issue.
7. CONCLUSIONS
In this work we introduce a novel non-cooperative game model
to characterize the caching problem among selfish servers without
any central coordination. We show that pure strategy Nash
equilibria exist in the game and that the price of anarchy can be O(n) in
general, where n is the number of servers, due to undersupply
problems. With specific topologies, we show that the price of anarchy
can have tighter bounds. More importantly, with payments, servers
are incentivized to replicate and the optimistic price of anarchy is
always one. Non-cooperative caching is a more realistic model than
cooperative caching in the competitive Internet, hence this work is
an important step toward viable federated caching systems.
8. ACKNOWLEDGMENTS
We thank Kunal Talwar for enlightening discussions regarding
this work.
9. REFERENCES
[1] http://www.mosek.com.
[2] A. Adya et al. FARSITE: Federated, Available, and Reliable
Storage for an Incompletely Trusted Environment. In Proc.
of USENIX OSDI, 2002.
[3] E. Anshelevich, A. Dasgupta, E. Tardos, and T. Wexler.
Near-optimal Network Design with Selfish Agents. In Proc.
of ACM STOC, 2003.
[4] Y. Chen, R. H. Katz, and J. D. Kubiatowicz. SCAN: A
Dynamic, Scalable, and Efficient Content Distribution
Network. In Proc. of Intl. Conf. on Pervasive Computing,
2002.
[5] F. Dabek et al. Wide-area Cooperative Storage with CFS. In
Proc. of ACM SOSP, Oct. 2001.
[6] P. B. Danzig. NetCache Architecture and Deployment. In
Computer Networks and ISDN Systems, 1998.
[7] N. Devanur, M. Mihail, and V. Vazirani. Strategyproof
cost-sharing Mechanisms for Set Cover and Facility
Location Games. In Proc. of ACM EC, 2003.
[8] J. R. Douceur and R. P. Wattenhofer. Large-Scale Simulation
of Replica Placement Algorithms for a Serverless Distributed
File System. In Proc. of MASCOTS, 2001.
[9] A. Fabrikant, C. H. Papadimitriou, and K. Talwar. The
Complexity of Pure Nash Equilibria. In Proc. of ACM STOC,
2004.
[10] L. Fan, P. Cao, J. Almeida, and A. Z. Broder. Summary
Cache: A Scalable Wide-area Web Cache Sharing Protocol.
IEEE/ACM Trans. on Networking, 8(3):281-293, 2000.
[11] M. R. Garey and D. S. Johnson. Computers and
Intractability: A Guide to the Theory of NP-Completeness.
W. H. Freeman and Co., 1979.
[12] M. X. Goemans, L. Li, V. S. Mirrokni, and M. Thottan.
Market Sharing Games Applied to Content Distribution in
ad-hoc Networks. In Proc. of ACM MOBIHOC, 2004.
[13] M. X. Goemans and M. Skutella. Cooperative Facility
Location Games. In Proc. of ACM-SIAM SODA, 2000.
[14] S. Gribble et al. What Can Databases Do for Peer-to-Peer? In
WebDB Workshop on Databases and the Web, June 2001.
[15] K. P. Gummadi et al. Measurement, Modeling, and Analysis
of a Peer-to-Peer File-Sharing Workload. In Proc. of ACM
SOSP, October 2003.
[16] S. Iyer, A. Rowstron, and P. Druschel. Squirrel: A
Decentralized Peer-to-Peer Web Cache. In Proc. of ACM
PODC, 2002.
[17] K. Jain and V. V. Vazirani. Primal-Dual Approximation
Algorithms for Metric Facility Location and k-Median
Problems. In Proc. of IEEE FOCS, 1999.
[18] S. Jamin et al. On the Placement of Internet Instrumentation.
In Proc. of IEEE INFOCOM, pages 295-304, 2000.
[19] S. Jamin et al. Constrained Mirror Placement on the Internet.
In Proc. of IEEE INFOCOM, pages 31-40, 2001.
[20] B.-J. Ko and D. Rubenstein. A Distributed, Self-stabilizing
Protocol for Placement of Replicated Resources in Emerging
Networks. In Proc. of IEEE ICNP, 2003.
[21] E. Koutsoupias and C. Papadimitriou. Worst-Case Equilibria.
In STACS, 1999.
[22] J. Kubiatowicz et al. OceanStore: An Architecture for
Global-scale Persistent Storage. In Proc. of ACM ASPLOS.
ACM, November 2000.
[23] B. Li, M. J. Golin, G. F. Italiano, and X. Deng. On the
Optimal Placement of Web Proxies in the Internet. In Proc.
of IEEE INFOCOM, 1999.
[24] M. Mahdian, Y. Ye, and J. Zhang. Improved Approximation
Algorithms for Metric Facility Location Problems. In Proc.
of Intl. Workshop on Approximation Algorithms for
Combinatorial Optimization Problems, 2002.
[25] A. Medina, A. Lakhina, I. Matta, and J. Byers. BRITE:
Universal Topology Generation from a User's Perspective. Technical Report 2001-003, January 2001.
[26] R. R. Mettu and C. G. Plaxton. The Online Median Problem.
In Proc. of IEEE FOCS, 2000.
[27] P. B. Mirchandani and R. L. Francis. Discrete Location
Theory. Wiley-Interscience Series in Discrete Mathematics
and Optimization, 1990.
[28] M. J. Osborne and A. Rubinstein. A Course in Game Theory.
MIT Press, 1994.
[29] M. Pal and E. Tardos. Group Strategyproof Mechanisms via
Primal-Dual Algorithms. In Proc. of IEEE FOCS, 2003.
[30] L. Qiu, V. N. Padmanabhan, and G. M. Voelker. On the
Placement of Web Server Replicas. In Proc. of IEEE
INFOCOM, 2001.
[31] M. Rabinovich, I. Rabinovich, R. Rajaraman, and
A. Aggarwal. A Dynamic Object Replication and Migration
Protocol for an Internet Hosting Service. In Proc. of IEEE
ICDCS, 1999.
[32] A. Rowstron and P. Druschel. Storage Management and
Caching in PAST, A Large-scale, Persistent Peer-to-peer
Storage Utility. In Proc. of ACM SOSP, October 2001.
[33] Y. Saito, C. Karamanolis, M. Karlsson, and M. Mahalingam.
Taming Aggressive Replication in the Pangaea Wide-Area
File System. In Proc. of USENIX OSDI, 2002.
[34] X. Tang and S. T. Chanson. Coordinated En-route Web
Caching. In IEEE Trans. Computers, 2002.
[35] A. Vetta. Nash Equilibria in Competitive Societies, with
Applications to Facility Location, Traffic Routing, and
Auctions. In Proc. of IEEE FOCS, 2002.
[36] E. W. Zegura, K. L. Calvert, and S. Bhattacharjee. How to
Model an Internetwork. In Proc. of IEEE INFOCOM, 1996.
APPENDIX
A. ANALYZING SPECIFIC TOPOLOGIES
We now analyze the price of anarchy (PoA) for the basic game
with specific underlying topologies and show that PoA can have
better bounds. We look at the complete graph, star, line, and D-dimensional grid. In all these topologies, we set the distance
between two directly connected nodes to one. We describe the case
where α > 1, since PoA = 1 trivially when α ≤ 1.
Figure 6: Example where the payment game has a Nash equilibrium which is worse than any Nash equilibrium in the basic game. The unlabeled distances between the nodes in the cluster are all 1. The thresholds of the white nodes are all α and the thresholds of the dark nodes are all α/4. The two dark nodes replicate the object in this payment game Nash equilibrium.
For a complete graph, PoA = 1, and for a star, PoA ≤ 2. For a complete graph, when α > 1, both Nash equilibria and social optima place one replica at one server, so PoA = 1. For a star, when 1 < α < 2, the worst-case Nash equilibrium places replicas at all leaf nodes, whereas the social optimum places one replica at the center node. Therefore, PoA = ((n − 1)α + 1)/(α + (n − 1)) ≤ (2(n − 1) + 1)/(1 + (n − 1)) ≤ 2. When α > 2, the worst-case Nash equilibrium places one replica at a leaf node and the other nodes access the remote replica, while the social optimum places one replica at the center. PoA = (α + 1 + 2(n − 2))/(α + (n − 1)) = 1 + (n − 2)/(α + (n − 1)) ≤ 2.
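A quick numeric check of the star expressions above (the n and α values in the example call are arbitrary):

```python
def star_poa(n, alpha):
    """PoA on an n-node star with unit edges, mirroring the two regimes above."""
    assert alpha > 1
    c_so = alpha + (n - 1)                      # social optimum: one replica at the center
    if alpha < 2:
        c_sw = (n - 1) * alpha + 1              # worst Nash: every leaf replicates
    else:
        c_sw = alpha + 1 + 2 * (n - 2)          # worst Nash: a single replica at a leaf
    return c_sw / c_so

print(star_poa(100, 1.5), star_poa(100, 10.0))  # both stay below 2
```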
For a line, the price of anarchy is O(√n). When 1 < α < n, the worst-case Nash equilibrium places replicas every 2α so that there is no overlap between the areas covered by two adjacent servers that cache the object. The social optimum places replicas at least every √(2α). The placement of replicas for the social optimum is as follows. Suppose there are two replicas separated by distance d. By placing an additional replica in the middle, we want the reduction of distance to be at least α. The distance reduction is d/2 + 2{((d/2 − 1) − 1) + ((d/2 − 2) − 2) + ... + ((d/2 − d/4) − d/4)} ≥ d²/8, so d should be at most 2√(2α). Therefore, the distance between replicas in the social optimum is at most √(2α). We have C(SW) = α · (n−1)/(2α) + (α(α+1)/2) · (n−1)/(2α) = Θ(αn) and C(SO) ≥ α · (n−1)/√(2α) + (2 · (√(2α)/2) · (√(2α)/2 + 1)/2) · (n−1)/√(2α), so C(SO) = Ω(√α · n). Therefore, PoA = O(√α). When α > n − 1, the worst-case Nash equilibrium places one replica at a leaf node and C(SW) = α + (n−1)n/2. However, the social optimum still places replicas every √(2α). If we view PoA as a continuous function of α and compute its derivative, the derivative becomes 0 when α is Θ(n²), which means the function decreases as α increases from n. Therefore, PoA is maximized when α is n, and PoA = Θ(n²)/Ω(n√n) = O(√n). When α > (n−1)n/2, the social optimum also places only one replica, and PoA is trivially bounded by 2. This result holds for the ring, and it can be generalized to the D-dimensional grid. As the dimension of the grid increases, the distance reduction from an additional replica placement becomes Ω(d^{D+1}), where d is the distance between two adjacent replicas. Therefore, PoA = Θ(n²)/Ω(n^{1/(D+1)} · n) = O(n^{D/(D+1)}).
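The gap on the line can also be checked numerically. The sketch below (ours; it only compares uniform placements, which is enough to see the trend) contrasts the every-2α placement described above with the best uniform spacing:

```python
def line_cost(n, positions, alpha):
    """Social cost on an n-node line with unit spacing and replicas at `positions`."""
    return alpha * len(positions) + sum(min(abs(i - p) for p in positions) for i in range(n))

def best_uniform(n, alpha):
    return min(line_cost(n, list(range(0, n, s)), alpha) for s in range(1, n + 1))

def nash_like(n, alpha):
    spacing = max(1, int(2 * alpha))            # the every-2alpha equilibrium placement
    return line_cost(n, list(range(0, n, spacing)), alpha)

# nash_like(1000, 50.0) / best_uniform(1000, 50.0) is noticeably larger than the
# same ratio at alpha = 5.0, consistent with the O(sqrt(alpha)) growth argued above.
```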
B. PAYMENT CAN DO WORSE
Consider the network in Figure 6 where α > 1+α/3. Any Nash
equilibrium in the basic game model would have exactly two
replicas: one in the left cluster and one in the right. It is easy to verify
that the worst placement (in terms of social cost) of two replicas
occurs when they are placed at nodes A and B. This placement can
be achieved as a Nash equilibrium in the payment game, but not in
the basic game since A and B are a distance 3α/4 apart.
Algorithm 1 Initialization for the Basic Game
  L1 = a random subset of servers
  for each node i in N do
    if i ∈ L1 then
      Si = 1    ; replicate the object
    else
      Si = 0

Algorithm 2 Move Selection of i for the Basic Game
  Cost1 = α
  Cost2 = min_{j∈X−{i}} dij    ; X is the current configuration
  Costmin = min{Cost1, Cost2}
  if Costnow > Costmin then
    if Costmin == Cost1 then
      Si = 1
    else
      Si = 0
C. NASH DYNAMICS PROTOCOLS
The simulator initializes the game according to the given
parameters and a random initial strategy profile and then iterates through
rounds. Initially the order of player actions is chosen randomly. In
each round, each server performs the Nash dynamics protocol that
adjusts its strategies greedily in the chosen order. When a round
passes without any server changing its strategy, the simulation ends
and a Nash equilibrium is reached.
In the basic game, we pick a random initial subset of servers to
replicate the object as shown in Algorithm 1. After the
initialization, each player runs the move selection procedure described in
Algorithm 2 (in algorithms 2 and 4, Costnow represents the
current cost for node i). This procedure chooses greedily between
replication and non-replication. It is not hard to see that this Nash
dynamics protocol converges in two rounds.
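A runnable rendering of Algorithms 1 and 2 is sketched below (ours; it assumes unit demand and a distance matrix with d[i][i] = 0, and the probability 0.5 for the initial subset is an arbitrary choice):

```python
import random

def basic_game_dynamics(d, alpha, seed=0):
    """Greedy Nash dynamics for the basic game; returns the replica set at convergence."""
    rng = random.Random(seed)
    n = len(d)
    S = {i for i in range(n) if rng.random() < 0.5}   # Algorithm 1: random initial replica set
    order = list(range(n))
    rng.shuffle(order)                                # random player order, as in the simulator
    changed = True
    while changed:                                    # a full round with no change ends the run
        changed = False
        for i in order:                               # Algorithm 2: move selection for node i
            cost_replicate = alpha
            cost_access = min((d[i][j] for j in S if j != i), default=float("inf"))
            cost_now = cost_replicate if i in S else cost_access
            if cost_now > min(cost_replicate, cost_access):
                if cost_replicate <= cost_access:
                    S.add(i)
                else:
                    S.discard(i)
                changed = True
    return S
```

As noted above, this protocol converges within two rounds; the while loop simply stops at the first round in which no node changes its strategy.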
In the payment game, we pick a random initial subset of servers
to replicate the object by setting their thresholds to 0. In addition,
we initialize a second random subset of servers to replicate the
object with payments from other servers. The details are shown in
Algorithm 3. After the initialization, each player runs the move
selection procedure described in Algorithm 4. This procedure chooses
greedily between replication and accessing a remote replica, with
the possibilities of receiving and making payments, respectively.
In the protocol, each node increases its threshold value by incr if it
does not replicate the object. Through this ramp-up procedure, the cost of replicating an object is shared fairly among the nodes that access a replica from a server that does cache. If incr is small, the cost is shared more fairly and the game tends to reach equilibria that encourage more servers to store replicas, though convergence takes longer. If incr is large, the protocol converges quickly, but it may miss efficient equilibria. In the simulations we set incr to 0.1.
[Figure 7 shows a six-node network with nodes A, B, C and a, b, c; the edge lengths marked in the figure are α/3 + 1, 2α/3 − 1, and 2α/3.]
Figure 7: An example where the Nash dynamics protocol does not converge in the payment game.
Algorithm 3 Initialization for the Payment Game
  L1 = a random subset of servers
  for each node i in N do
    bi = 0
    if i ∈ L1 then
      ti = 0    ; replicate the object
    else
      ti = α
  L2 = {}
  for each node i in N do
    if coin toss == head then
      Mi = {j : d(j, i) < min_{k∈L1∪L2} d(j, k)}
      if Mi != ∅ then
        for each node j ∈ Mi do
          bj = max{ (α + Σ_{k∈Mi} d(i, k)) / |Mi| − d(i, j), 0 }
        L2 = L2 ∪ {i}
Algorithm 4 Move Selection of i for the Payment Game
  Cost1 = α − Ri
  Cost2 = min_{j∈N−{i}} {tj − Rj + dij}
  Costmin = min{Cost1, Cost2}
  if Costnow > Costmin then
    if Costmin == Cost1 then
      ti = Ri
    else
      ti = Ri + incr
      vi = argmin_j {tj − Rj + dij}
      bi = t_{vi} − R_{vi}
Most of our simulation runs converged, but there were a few cases where the simulation did not converge due to cycles in the dynamics. The
protocol does not guarantee convergence within a certain number
of rounds like the protocol for the basic game.
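For completeness, one move-selection step of Algorithm 4 can be rendered as below (a sketch: cost_now stands for node i's current total cost, which the caller is assumed to track, and the state arrays t, b, v, R mirror the thresholds, bids, bid targets, and received bids):

```python
def payment_move(i, cost_now, t, b, v, R, d, alpha, incr=0.1):
    """One greedy move for node i in the payment game (Algorithm 4); mutates t, b, v."""
    n = len(d)
    cost_replicate = alpha - R[i]                              # Cost1: replicate, offset by bids received
    best_j = min((j for j in range(n) if j != i),
                 key=lambda j: t[j] - R[j] + d[i][j])
    cost_access = t[best_j] - R[best_j] + d[i][best_j]         # Cost2: top up best_j's threshold and access it
    if cost_now > min(cost_replicate, cost_access):
        if cost_replicate <= cost_access:
            t[i] = R[i]                                        # bids received now meet the threshold
        else:
            t[i] = R[i] + incr                                 # ramp up own threshold (do not replicate)
            v[i] = best_j
            b[i] = t[best_j] - R[best_j]                       # bid exactly what best_j still needs
    return t, b, v
```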
We provide an example graph and an initial condition such that
the Nash dynamics protocol does not converge in the payment game
if started from this initial condition. The graph is represented by
a shortest path metric on the network shown in Figure 7. In the
starting configuration, only A replicates the object, and a pays it
an amount α/3 to do so. The thresholds for A, B and C are α/3
each, and the thresholds for a, b and c are 2α/3. It is not hard to
verify that the Nash dynamics protocol will never converge if we
start with this condition.
The Nash dynamics protocol for the payment game needs
further investigation. The dynamics protocol for the payment game
should avoid cycles of actions to achieve stabilization of the
protocol. Finding a self-stabilizing dynamics protocol is an interesting
problem. In addition, a fixed value of incr cannot adapt to changing
environments. A small value of incr can lead to efficient equilibria,
but it can take a long time to converge. An important area for future
research is looking at adaptively changing incr.
train_H-35 | AdaRank: A Boosting Algorithm for Information Retrieval | In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs ‘weak rankers" on the basis of re-weighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost. | 1. INTRODUCTION
Recently ‘learning to rank" has gained increasing attention in
both the fields of information retrieval and machine learning. When
applied to document retrieval, learning to rank becomes a task as
follows. In training, a ranking model is constructed with data
consisting of queries, their corresponding retrieved documents, and
relevance levels given by humans. In ranking, given a new query, the
corresponding retrieved documents are sorted by using the trained
ranking model. In document retrieval, usually ranking results are
evaluated in terms of performance measures such as MAP (Mean
Average Precision) [1] and NDCG (Normalized Discounted
Cumulative Gain) [15]. Ideally, the ranking function is created so that the
accuracy of ranking in terms of one of the measures with respect to
the training data is maximized.
Several methods for learning to rank have been developed and
applied to document retrieval. For example, Herbrich et al. [13]
propose a learning algorithm for ranking on the basis of Support
Vector Machines, called Ranking SVM. Freund et al. [8] take a
similar approach and perform the learning by using boosting,
referred to as RankBoost. All the existing methods used for
document retrieval [2, 3, 8, 13, 16, 20] are designed to optimize loss
functions loosely related to the IR performance measures, not loss
functions directly based on the measures. For example, Ranking
SVM and RankBoost train ranking models by minimizing
classification errors on instance pairs.
In this paper, we aim to develop a new learning algorithm that
can directly optimize any performance measure used in document
retrieval. Inspired by the work of AdaBoost for classification [9],
we propose to develop a boosting algorithm for information
retrieval, referred to as AdaRank. AdaRank utilizes a linear
combination of ‘weak rankers" as its model. In learning, it repeats the
process of re-weighting the training sample, creating a weak ranker,
and calculating a weight for the ranker.
We show that AdaRank algorithm can iteratively optimize an
exponential loss function based on any of IR performance measures.
A lower bound of the performance on training data is given, which
indicates that the ranking accuracy in terms of the performance
measure can be continuously improved during the training process.
AdaRank offers several advantages: ease in implementation,
theoretical soundness, efficiency in training, and high accuracy in ranking.
Experimental results indicate that AdaRank can outperform the
baseline methods of BM25, Ranking SVM, and RankBoost, on four
benchmark datasets including OHSUMED, WSJ, AP, and .Gov.
Tuning ranking models using certain training data and a
performance measure is a common practice in IR [1]. As the number of
features in the ranking model gets larger and the amount of
training data gets larger, the tuning becomes harder. From the viewpoint
of IR, AdaRank can be viewed as a machine learning method for
ranking model tuning.
Recently, direct optimization of performance measures in
learning has become a hot research topic. Several methods for
classification [17] and ranking [5, 19] have been proposed. AdaRank can
be viewed as a machine learning method for direct optimization of
performance measures, based on a different approach.
The rest of the paper is organized as follows. After a summary
of related work in Section 2, we describe the proposed AdaRank
algorithm in details in Section 3. Experimental results and
discussions are given in Section 4. Section 5 concludes this paper and
gives future work.
2. RELATED WORK
2.1 Information Retrieval
The key problem for document retrieval is ranking, specifically,
how to create the ranking model (function) that can sort documents
based on their relevance to the given query. It is a common practice
in IR to tune the parameters of a ranking model using some labeled
data and one performance measure [1]. For example, the
state-ofthe-art methods of BM25 [24] and LMIR (Language Models for
Information Retrieval) [18, 22] all have parameters to tune. As
the ranking models become more sophisticated (more features are
used) and more labeled data become available, how to tune or train
ranking models turns out to be a challenging issue.
Recently methods of ‘learning to rank" have been applied to
ranking model construction and some promising results have been
obtained. For example, Joachims [16] applies Ranking SVM to
document retrieval. He utilizes click-through data to deduce
training data for the model creation. Cao et al. [4] adapt Ranking
SVM to document retrieval by modifying the Hinge Loss function
to better meet the requirements of IR. Specifically, they introduce
a Hinge Loss function that heavily penalizes errors on the tops of
ranking lists and errors from queries with fewer retrieved
documents. Burges et al. [3] employ Relative Entropy as a loss function
and Gradient Descent as an algorithm to train a Neural Network
model for ranking in document retrieval. The method is referred to
as ‘RankNet".
2.2 Machine Learning
There are three topics in machine learning which are related to
our current work. They are ‘learning to rank", boosting, and direct
optimization of performance measures.
Learning to rank is to automatically create a ranking function
that assigns scores to instances and then rank the instances by
using the scores. Several approaches have been proposed to tackle
the problem. One major approach to learning to rank is that of
transforming it into binary classification on instance pairs. This
‘pair-wise" approach fits well with information retrieval and thus is
widely used in IR. Typical methods of the approach include
Ranking SVM [13], RankBoost [8], and RankNet [3]. For other
approaches to learning to rank, refer to [2, 11, 31].
In the pair-wise approach to ranking, the learning task is
formalized as a problem of classifying instance pairs into two categories
(correctly ranked and incorrectly ranked). Actually, it is known
that reducing classification errors on instance pairs is equivalent to
maximizing a lower bound of MAP [16]. In that sense, the
existing methods of Ranking SVM, RankBoost, and RankNet are only
able to minimize loss functions that are loosely related to the IR
performance measures.
Boosting is a general technique for improving the accuracies of
machine learning algorithms. The basic idea of boosting is to
repeatedly construct ‘weak learners" by re-weighting training data
and form an ensemble of weak learners such that the total
performance of the ensemble is ‘boosted". Freund and Schapire have
proposed the first well-known boosting algorithm called AdaBoost
(Adaptive Boosting) [9], which is designed for binary
classification (0-1 prediction). Later, Schapire & Singer have introduced a
generalized version of AdaBoost in which weak learners can give
confidence scores in their predictions rather than make 0-1
decisions [26]. Extensions have been made to deal with the problems
of multi-class classification [10, 26], regression [7], and ranking
[8]. In fact, AdaBoost is an algorithm that ingeniously constructs
a linear model by minimizing the ‘exponential loss function" with
respect to the training data [26]. Our work in this paper can be
viewed as a boosting method developed for ranking, particularly
for ranking in IR.
Recently, a number of authors have proposed conducting direct
optimization of multivariate performance measures in learning. For
instance, Joachims [17] presents an SVM method to directly
optimize nonlinear multivariate performance measures like the F1
measure for classification. Cossock & Zhang [5] find a way to
approximately optimize the ranking performance measure DCG [15].
Metzler et al. [19] also propose a method of directly maximizing
rank-based metrics for ranking on the basis of manifold learning.
AdaRank is also one that tries to directly optimize multivariate
performance measures, but is based on a different approach. AdaRank
is unique in that it employs an exponential loss function based on
IR performance measures and a boosting technique.
3. OUR METHOD: ADARANK
3.1 General Framework
We first describe the general framework of learning to rank for
document retrieval. In retrieval (testing), given a query the system
returns a ranking list of documents in descending order of the
relevance scores. The relevance scores are calculated with a ranking
function (model). In learning (training), a number of queries and
their corresponding retrieved documents are given. Furthermore,
the relevance levels of the documents with respect to the queries are
also provided. The relevance levels are represented as ranks (i.e.,
categories in a total order). The objective of learning is to construct
a ranking function which achieves the best results in ranking of the
training data in the sense of minimization of a loss function. Ideally
the loss function is defined on the basis of the performance measure
used in testing.
Suppose that Y = {r1, r2, · · · , r } is a set of ranks, where denotes
the number of ranks. There exists a total order between the ranks
r r −1 · · · r1, where ‘ " denotes a preference relationship.
In training, a set of queries Q = {q1, q2, · · · , qm} is given. Each
query qi is associated with a list of retrieved documents di = {di1, di2,
· · · , di,n(qi)} and a list of labels yi = {yi1, yi2, · · · , yi,n(qi)}, where n(qi)
denotes the sizes of lists di and yi, dij denotes the jth
document in
di, and yij ∈ Y denotes the rank of document di j. A feature
vector xij = Ψ(qi, di j) ∈ X is created from each query-document pair
(qi, di j), i = 1, 2, · · · , m; j = 1, 2, · · · , n(qi). Thus, the training set
can be represented as S = {(qi, di, yi)}m
i=1.
The objective of learning is to create a ranking function f : X →
, such that for each query the elements in its corresponding
document list can be assigned relevance scores using the function and
then be ranked according to the scores. Specifically, we create a
permutation of integers π(qi, di, f) for query qi, the
corresponding list of documents di, and the ranking function f. Let di =
{di1, di2, · · · , di,n(qi)} be identified by the list of integers {1, 2, · · · , n(qi)},
then permutation π(qi, di, f) is defined as a bijection from {1, 2, · · · ,
n(qi)} to itself. We use π( j) to denote the position of item j (i.e.,
di j). The learning process turns out to be that of minimizing the
loss function which represents the disagreement between the
permutation π(qi, di, f) and the list of ranks yi, for all of the queries.
Table 1: Notations and explanations.
Notations Explanations
qi ∈ Q ith
query
di = {di1, di2, · · · , di,n(qi)} List of documents for qi
yi j ∈ {r1, r2, · · · , r } Rank of di j w.r.t. qi
yi = {yi1, yi2, · · · , yi,n(qi)} List of ranks for qi
S = {(qi, di, yi)}m
i=1 Training set
xij = Ψ(qi, dij) ∈ X Feature vector for (qi, di j)
f(xij) ∈ Ranking model
π(qi, di, f) Permutation for qi, di, and f
ht(xi j) ∈ tth
weak ranker
E(π(qi, di, f), yi) ∈ [−1, +1] Performance measure function
In the paper, we define the rank model as a linear combination of
weak rankers: f(x) = T
t=1 αtht(x), where ht(x) is a weak ranker, αt
is its weight, and T is the number of weak rankers.
In information retrieval, query-based performance measures are
used to evaluate the ‘goodness" of a ranking function. By query
based measure, we mean a measure defined over a ranking list
of documents with respect to a query. These measures include
MAP, NDCG, MRR (Mean Reciprocal Rank), WTA (Winners Take
ALL), and Precision@n [1, 15]. We utilize a general function
E(π(qi, di, f), yi) ∈ [−1, +1] to represent the performance
measures. The first argument of E is the permutation π created using
the ranking function f on di. The second argument is the list of
ranks yi given by humans. E measures the agreement between π
and yi. Table 1 gives a summary of notations described above.
Next, as examples of performance measures, we present the
definitions of MAP and NDCG. Given a query qi, the corresponding
list of ranks yi, and a permutation πi on di, average precision for qi
is defined as:
AvgPi =
n(qi)
j=1 Pi( j) · yij
n(qi)
j=1 yij
, (1)
where yij takes on 1 and 0 as values, representing being relevant or
irrelevant and Pi( j) is defined as precision at the position of dij:
Pi( j) =
k:πi(k)≤πi(j) yik
πi(j)
, (2)
where πi( j) denotes the position of di j.
Given a query qi, the list of ranks yi, and a permutation πi on di,
NDCG at position m for qi is defined as:
Ni = ni ·
j:πi(j)≤m
2yi j − 1
log(1 + πi( j))
, (3)
where yij takes on ranks as values and ni is a normalization
constant. ni is chosen so that a perfect ranking π∗
i "s NDCG score at
position m is 1.
3.2 Algorithm
Inspired by the AdaBoost algorithm for classification, we have
devised a novel algorithm which can optimize a loss function based
on the IR performance measures. The algorithm is referred to as
‘AdaRank" and is shown in Figure 1.
AdaRank takes a training set S = {(qi, di, yi)}m
i=1 as input and
takes the performance measure function E and the number of
iterations T as parameters. AdaRank runs T rounds and at each round it
creates a weak ranker ht(t = 1, · · · , T). Finally, it outputs a ranking
model f by linearly combining the weak rankers.
At each round, AdaRank maintains a distribution of weights over
the queries in the training data. We denote the distribution of weights
Input: S = {(qi, di, yi)}m
i=1, and parameters E and T
Initialize P1(i) = 1/m.
For t = 1, · · · , T
• Create weak ranker ht with weighted distribution Pt on
training data S .
• Choose αt
αt =
1
2
· ln
m
i=1 Pt(i){1 + E(π(qi, di, ht), yi)}
m
i=1 Pt(i){1 − E(π(qi, di, ht), yi)}
.
• Create ft
ft(x) =
t
k=1
αkhk(x).
• Update Pt+1
Pt+1(i) =
exp{−E(π(qi, di, ft), yi)}
m
j=1 exp{−E(π(qj, dj, ft), yj)}
.
End For
Output ranking model: f(x) = fT (x).
Figure 1: The AdaRank algorithm.
at round t as Pt and the weight on the ith
training query qi at round
t as Pt(i). Initially, AdaRank sets equal weights to the queries. At
each round, it increases the weights of those queries that are not
ranked well by ft, the model created so far. As a result, the learning
at the next round will be focused on the creation of a weak ranker
that can work on the ranking of those ‘hard" queries.
At each round, a weak ranker ht is constructed based on training
data with weight distribution Pt. The goodness of a weak ranker is
measured by the performance measure E weighted by Pt:
m
i=1
Pt(i)E(π(qi, di, ht), yi).
Several methods for weak ranker construction can be considered.
For example, a weak ranker can be created by using a subset of
queries (together with their document list and label list) sampled
according to the distribution Pt. In this paper, we use single features
as weak rankers, as will be explained in Section 3.6.
Once a weak ranker ht is built, AdaRank chooses a weight αt > 0
for the weak ranker. Intuitively, αt measures the importance of ht.
A ranking model ft is created at each round by linearly
combining the weak rankers constructed so far h1, · · · , ht with weights
α1, · · · , αt. ft is then used for updating the distribution Pt+1.
3.3 Theoretical Analysis
The existing learning algorithms for ranking attempt to minimize
a loss function based on instance pairs (document pairs). In
contrast, AdaRank tries to optimize a loss function based on queries.
Furthermore, the loss function in AdaRank is defined on the basis
of general IR performance measures. The measures can be MAP,
NDCG, WTA, MRR, or any other measures whose range is within
[−1, +1]. We next explain why this is the case.
Ideally we want to maximize the ranking accuracy in terms of a
performance measure on the training data:
max
f∈F
m
i=1
E(π(qi, di, f), yi), (4)
where F is the set of possible ranking functions. This is equivalent
to minimizing the loss on the training data
min
f∈F
m
i=1
(1 − E(π(qi, di, f), yi)). (5)
It is difficult to directly optimize the loss, because E is a
noncontinuous function and thus may be difficult to handle. We instead
attempt to minimize an upper bound of the loss in (5)
min
f∈F
m
i=1
exp{−E(π(qi, di, f), yi)}, (6)
because e−x
≥ 1 − x holds for any x ∈ . We consider the use of a
linear combination of weak rankers as our ranking model:
f(x) =
T
t=1
αtht(x). (7)
The minimization in (6) then turns out to be
min
ht∈H,αt∈ +
L(ht, αt) =
m
i=1
exp{−E(π(qi, di, ft−1 + αtht), yi)}, (8)
where H is the set of possible weak rankers, αt is a positive weight,
and ( ft−1 + αtht)(x) = ft−1(x) + αtht(x). Several ways of computing
coefficients αt and weak rankers ht may be considered. Following
the idea of AdaBoost, in AdaRank we take the approach of ‘forward
stage-wise additive modeling" [12] and get the algorithm in Figure
1. It can be proved that there exists a lower bound on the ranking
accuracy for AdaRank on training data, as presented in Theorem 1.
T 1. The following bound holds on the ranking
accuracy of the AdaRank algorithm on training data:
1
m
m
i=1
E(π(qi, di, fT ), yi) ≥ 1 −
T
t=1
e−δt
min 1 − ϕ(t)2,
where ϕ(t) = m
i=1 Pt(i)E(π(qi, di, ht), yi), δt
min = mini=1,··· ,m δt
i, and
δt
i = E(π(qi, di, ft−1 + αtht), yi) − E(π(qi, di, ft−1), yi)
−αtE(π(qi, di, ht), yi),
for all i = 1, 2, · · · , m and t = 1, 2, · · · , T.
A proof of the theorem can be found in appendix. The theorem
implies that the ranking accuracy in terms of the performance
measure can be continuously improved, as long as e−δt
min 1 − ϕ(t)2 < 1
holds.
3.4 Advantages
AdaRank is a simple yet powerful method. More importantly, it
is a method that can be justified from the theoretical viewpoint, as
discussed above. In addition AdaRank has several other advantages
when compared with the existing learning to rank methods such as
Ranking SVM, RankBoost, and RankNet.
First, AdaRank can incorporate any performance measure,
provided that the measure is query based and in the range of [−1, +1].
Notice that the major IR measures meet this requirement. In
contrast the existing methods only minimize loss functions that are
loosely related to the IR measures [16].
Second, the learning process of AdaRank is more efficient than
those of the existing learning algorithms. The time complexity of
AdaRank is of order O((k+T)·m·n log n), where k denotes the
number of features, T the number of rounds, m the number of queries
in training data, and n is the maximum number of documents for
queries in training data. The time complexity of RankBoost, for
example, is of order O(T · m · n2
) [8].
Third, AdaRank employs a more reasonable framework for
performing the ranking task than the existing methods. Specifically in
AdaRank the instances correspond to queries, while in the existing
methods the instances correspond to document pairs. As a result,
AdaRank does not have the following shortcomings that plague the
existing methods. (a) The existing methods have to make a strong
assumption that the document pairs from the same query are
independently distributed. In reality, this is clearly not the case and this
problem does not exist for AdaRank. (b) Ranking the most relevant
documents on the tops of document lists is crucial for document
retrieval. The existing methods cannot focus on the training on the
tops, as indicated in [4]. Several methods for rectifying the problem
have been proposed (e.g., [4]), however, they do not seem to
fundamentally solve the problem. In contrast, AdaRank can naturally
focus on training on the tops of document lists, because the
performance measures used favor rankings for which relevant documents
are on the tops. (c) In the existing methods, the numbers of
document pairs vary from query to query, resulting in creating models
biased toward queries with more document pairs, as pointed out in
[4]. AdaRank does not have this drawback, because it treats queries
rather than document pairs as basic units in learning.
3.5 Differences from AdaBoost
AdaRank is a boosting algorithm. In that sense, it is similar to
AdaBoost, but it also has several striking differences from AdaBoost.
First, the types of instances are different. AdaRank makes use of
queries and their corresponding document lists as instances. The
labels in training data are lists of ranks (relevance levels). AdaBoost
makes use of feature vectors as instances. The labels in training
data are simply +1 and −1.
Second, the performance measures are different. In AdaRank,
the performance measure is a generic measure, defined on the
document list and the rank list of a query. In AdaBoost the
corresponding performance measure is a specific measure for binary
classification, also referred to as ‘margin" [25].
Third, the ways of updating weights are also different. In
AdaBoost, the distribution of weights on training instances is
calculated according to the current distribution and the performance of
the current weak learner. In AdaRank, in contrast, it is calculated
according to the performance of the ranking model created so far,
as shown in Figure 1. Note that AdaBoost can also adopt the weight
updating method used in AdaRank. For AdaBoost they are
equivalent (cf., [12] page 305). However, this is not true for AdaRank.
3.6 Construction of Weak Ranker
We consider an efficient implementation for weak ranker
construction, which is also used in our experiments. In the
implementation, as weak ranker we choose the feature that has the optimal
weighted performance among all of the features:
max
k
m
i=1
Pt(i)E(π(qi, di, xk), yi).
Creating weak rankers in this way, the learning process turns out
to be that of repeatedly selecting features and linearly combining
the selected features. Note that features which are not selected in
the training phase will have a weight of zero.
4. EXPERIMENTAL RESULTS
We conducted experiments to test the performances of AdaRank
using four benchmark datasets: OHSUMED, WSJ, AP, and .Gov.
Table 2: Features used in the experiments on OHSUMED,
WSJ, and AP datasets. C(w, d) represents frequency of word
w in document d; C represents the entire collection; n denotes
number of terms in query; | · | denotes the size function; and
id f(·) denotes inverse document frequency.
1 wi∈q d ln(c(wi, d) + 1) 2 wi∈q d ln( |C|
c(wi,C)
+ 1)
3 wi∈q d ln(id f(wi)) 4 wi∈q d ln(c(wi,d)
|d|
+ 1)
5 wi∈q d ln(c(wi,d)
|d|
· id f(wi) + 1) 6 wi∈q d ln(c(wi,d)·|C|
|d|·c(wi,C)
+ 1)
7 ln(BM25 score)
0.2
0.3
0.4
0.5
0.6
MAP NDCG@1 NDCG@3 NDCG@5 NDCG@10
BM25
Ranking SVM
RarnkBoost
AdaRank.MAP
AdaRank.NDCG
Figure 2: Ranking accuracies on OHSUMED data.
4.1 Experiment Setting
Ranking SVM [13, 16] and RankBoost [8] were selected as
baselines in the experiments, because they are the state-of-the-art
learning to rank methods. Furthermore, BM25 [24] was used as a
baseline, representing the state-of-the-arts IR method (we actually used
the tool Lemur1
).
For AdaRank, the parameter T was determined automatically
during each experiment. Specifically, when there is no
improvement in ranking accuracy in terms of the performance measure, the
iteration stops (and T is determined). As the measure E, MAP and
NDCG@5 were utilized. The results for AdaRank using MAP and
NDCG@5 as measures in training are represented as AdaRank.MAP
and AdaRank.NDCG, respectively.
4.2 Experiment with OHSUMED Data
In this experiment, we made use of the OHSUMED dataset [14]
to test the performances of AdaRank. The OHSUMED dataset
consists of 348,566 documents and 106 queries. There are in total
16,140 query-document pairs upon which relevance judgments are
made. The relevance judgments are either 'd' (definitely relevant),
'p' (possibly relevant), or 'n' (not relevant). The data have been
used in many experiments in IR, for example [4, 29].
As features, we adopted those used in document retrieval [4].
Table 2 shows the features. For example, tf (term frequency), idf
(inverse document frequency), dl (document length), and
combinations of them are defined as features. BM25 score itself is also a
feature. Stop words were removed and stemming was conducted in
the data.
We randomly divided queries into four even subsets and
conducted 4-fold cross-validation experiments. We tuned the
parameters for BM25 during one of the trials and applied them to the other
trials. The results reported in Figure 2 are those averaged over four
trials. In MAP calculation, we define the rank ‘d" as relevant and
1
http://www.lemurproject.com
Table 3: Statistics on WSJ and AP datasets.
Dataset # queries # retrieved docs # docs per query
AP 116 24,727 213.16
WSJ 126 40,230 319.29
Figure 3: Ranking accuracies on WSJ dataset (MAP and NDCG@1/3/5/10 for BM25, Ranking SVM, RankBoost, AdaRank.MAP, and AdaRank.NDCG).
the other two ranks as irrelevant. From Figure 2, we see that both
AdaRank.MAP and AdaRank.NDCG outperform BM25, Ranking
SVM, and RankBoost in terms of all measures. We conducted
significant tests (t-test) on the improvements of AdaRank.MAP over
BM25, Ranking SVM, and RankBoost in terms of MAP. The
results indicate that all the improvements are statistically significant
(p-value < 0.05). We also conducted t-test on the improvements
of AdaRank.NDCG over BM25, Ranking SVM, and RankBoost
in terms of NDCG@5. The improvements are also statistically
significant.
4.3 Experiment with WSJ and AP Data
In this experiment, we made use of the WSJ and AP datasets
from the TREC ad-hoc retrieval track, to test the performances of
AdaRank. WSJ contains 74,520 articles of Wall Street Journals
from 1990 to 1992, and AP contains 158,240 articles of
Associated Press in 1988 and 1990. 200 queries are selected from the
TREC topics (No.101 ∼ No.300). Each query has a number of
documents associated and they are labeled as ‘relevant" or ‘irrelevant"
(to the query). Following the practice in [28], the queries that have
less than 10 relevant documents were discarded. Table 3 shows the
statistics on the two datasets.
In the same way as in section 4.2, we adopted the features listed
in Table 2 for ranking. We also conducted 4-fold cross-validation
experiments. The results reported in Figure 3 and 4 are those
averaged over four trials on WSJ and AP datasets, respectively. From
Figure 3 and 4, we can see that AdaRank.MAP and AdaRank.NDCG
outperform BM25, Ranking SVM, and RankBoost in terms of all
measures on both WSJ and AP. We conducted t-tests on the
improvements of AdaRank.MAP and AdaRank.NDCG over BM25,
Ranking SVM, and RankBoost on WSJ and AP. The results
indicate that all the improvements in terms of MAP are statistically
significant (p-value < 0.05). However only some of the improvements
in terms of NDCG@5 are statistically significant, although overall
the improvements on NDCG scores are quite high (1-2 points).
4.4 Experiment with .Gov Data
In this experiment, we further made use of the TREC .Gov data
to test the performance of AdaRank for the task of web retrieval.
The corpus is a crawl from the .gov domain in early 2002, and
has been used at TREC Web Track since 2002. There are a total
Figure 4: Ranking accuracies on AP dataset (MAP and NDCG@1/3/5/10 for BM25, Ranking SVM, RankBoost, AdaRank.MAP, and AdaRank.NDCG).
Figure 5: Ranking accuracies on .Gov dataset (MAP and NDCG@1/3/5/10 for BM25, Ranking SVM, RankBoost, AdaRank.MAP, and AdaRank.NDCG).
Table 4: Features used in the experiments on .Gov dataset.
1 BM25 [24] 2 MSRA1000 [27]
3 PageRank [21] 4 HostRank [30]
5 Relevance Propagation [23] (10 features)
of 1,053,110 web pages with 11,164,829 hyperlinks in the data.
The 50 queries in the topic distillation task in the Web Track of
TREC 2003 [6] were used. The ground truths for the queries are
provided by the TREC committee with binary judgment: relevant
or irrelevant. The number of relevant pages vary from query to
query (from 1 to 86).
We extracted 14 features from each query-document pair.
Table 4 gives a list of the features. They are the outputs of some
well-known algorithms (systems). These features are different from
those in Table 2, because the task is different.
Again, we conducted 4-fold cross-validation experiments. The
results averaged over four trials are reported in Figure 5. From the
results, we can see that AdaRank.MAP and AdaRank.NDCG
outperform all the baselines in terms of all measures. We conducted
ttests on the improvements of AdaRank.MAP and AdaRank.NDCG
over BM25, Ranking SVM, and RankBoost. Some of the
improvements are not statistically significant. This is because we have only
50 queries used in the experiments, and the number of queries is
too small.
4.5 Discussions
We investigated the reasons that AdaRank outperforms the
baseline methods, using the results of the OHSUMED dataset as examples.
First, we examined the reason that AdaRank has higher
performances than Ranking SVM and RankBoost.
Figure 6: Accuracy on ranking document pairs (d-n, d-p, and p-n) with the OHSUMED dataset, for Ranking SVM, RankBoost, AdaRank.MAP, and AdaRank.NDCG.
Figure 7: Distribution of queries with different numbers of document pairs per query in the training data of trial 1.
Specifically, we compared the error rates between different rank pairs made by
Ranking SVM, RankBoost, AdaRank.MAP, and AdaRank.NDCG on the
test data. The results averaged over four trials in the 4-fold cross
validation are shown in Figure 6. We use ‘d-n" to stand for the pairs
between ‘definitely relevant" and ‘not relevant", ‘d-p" the pairs
between ‘definitely relevant" and ‘partially relevant", and ‘p-n" the
pairs between ‘partially relevant" and ‘not relevant". From
Figure 6, we can see that AdaRank.MAP and AdaRank.NDCG make
fewer errors for ‘d-n" and ‘d-p", which are related to the tops of
rankings and are important. This is because AdaRank.MAP and
AdaRank.NDCG can naturally focus upon the training on the tops
by optimizing MAP and NDCG@5, respectively.
We also made statistics on the number of document pairs per
query in the training data (for trial 1). The queries are clustered into
different groups based on the number of their associated
document pairs. Figure 7 shows the distribution of the query groups. In
the figure, for example, ‘0-1k" is the group of queries whose
number of document pairs are between 0 and 999. We can see that the
numbers of document pairs really vary from query to query. Next
we evaluated the accuracies of AdaRank.MAP and RankBoost in
terms of MAP for each of the query group. The results are reported
in Figure 8. We found that the average MAP of AdaRank.MAP
over the groups is two points higher than RankBoost. Furthermore,
it is interesting to see that AdaRank.MAP performs particularly
better than RankBoost for queries with small numbers of document
pairs (e.g., ‘0-1k", ‘1k-2k", and ‘2k-3k"). The results indicate that
AdaRank.MAP can effectively avoid creating a model biased
towards queries with more document pairs. For AdaRank.NDCG,
similar results can be observed.
Figure 8: Differences in MAP for different query groups (RankBoost vs. AdaRank.MAP).
Figure 9: MAP on the training set when the model is trained with MAP or NDCG@5 (trials 1-4).
We further conducted an experiment to see whether AdaRank has
the ability to improve the ranking accuracy in terms of a measure
by using the measure in training. Specifically, we trained ranking
models using AdaRank.MAP and AdaRank.NDCG and evaluated
their accuracies on the training dataset in terms of both MAP and
NDCG@5. The experiment was conducted for each trial. Figure
9 and Figure 10 show the results in terms of MAP and NDCG@5,
respectively. We can see that, AdaRank.MAP trained with MAP
performs better in terms of MAP while AdaRank.NDCG trained
with NDCG@5 performs better in terms of NDCG@5. The results
indicate that AdaRank can indeed enhance ranking performance in
terms of a measure by using the measure in training.
Finally, we tried to verify the correctness of Theorem 1. That is,
the ranking accuracy in terms of the performance measure can be
continuously improved, as long as $e^{-\delta^{t}_{\min}}\sqrt{1-\varphi(t)^{2}} < 1$ holds. As
an example, Figure 11 shows the learning curve of AdaRank.MAP
in terms of MAP during the training phase in one trial of the cross
validation. From the figure, we can see that the ranking accuracy
of AdaRank.MAP steadily improves, as the training goes on, until
it reaches the peak. The result agrees well with Theorem 1.
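The quantity being checked can be scripted directly; the sketch below (Python, our illustration) evaluates the right-hand side of Theorem 1, assuming the per-round values φ(t) and δ^t_min were recorded during training.

```python
import numpy as np

def theorem1_lower_bound(varphi, delta_min):
    """Lower bound 1 - prod_t exp(-delta_min[t]) * sqrt(1 - varphi[t]**2)
    on the average training performance after T boosting rounds."""
    varphi = np.asarray(varphi, dtype=float)
    delta_min = np.asarray(delta_min, dtype=float)
    factors = np.exp(-delta_min) * np.sqrt(1.0 - varphi ** 2)
    return 1.0 - float(np.prod(factors))
```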
5. CONCLUSION AND FUTURE WORK
In this paper we have proposed a novel algorithm for learning
ranking models in document retrieval, referred to as AdaRank. In
contrast to existing methods, AdaRank optimizes a loss function
that is directly defined on the performance measures. It employs
a boosting technique in ranking model learning. AdaRank offers
several advantages: ease of implementation, theoretical soundness,
efficiency in training, and high accuracy in ranking. Experimental
results based on four benchmark datasets show that AdaRank can
significantly outperform the baseline methods of BM25, Ranking
SVM, and RankBoost.
Figure 10: NDCG@5 on the training set when the model is trained with MAP or NDCG@5 (trials 1-4).
Figure 11: Learning curve of AdaRank (MAP versus the number of rounds).
Future work includes theoretical analysis on the generalization
error and other properties of the AdaRank algorithm, and further
empirical evaluations of the algorithm including comparisons with
other algorithms that can directly optimize performance measures.
6. ACKNOWLEDGMENTS
We thank Harry Shum, Wei-Ying Ma, Tie-Yan Liu, Gu Xu, Bin
Gao, Robert Schapire, and Andrew Arnold for their valuable
comments and suggestions to this paper.
7. REFERENCES
[1] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information
Retrieval. Addison Wesley, May 1999.
[2] C. Burges, R. Ragno, and Q. Le. Learning to rank with
nonsmooth cost functions. In Advances in Neural
Information Processing Systems 18, pages 395-402. MIT
Press, Cambridge, MA, 2006.
[3] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds,
N. Hamilton, and G. Hullender. Learning to rank using
gradient descent. In ICML 22, pages 89-96, 2005.
[4] Y. Cao, J. Xu, T.-Y. Liu, H. Li, Y. Huang, and H.-W. Hon.
Adapting ranking SVM to document retrieval. In SIGIR 29,
pages 186-193, 2006.
[5] D. Cossock and T. Zhang. Subset ranking using regression.
In COLT, pages 605-619, 2006.
[6] N. Craswell, D. Hawking, R. Wilkinson, and M. Wu.
Overview of the TREC 2003 web track. In TREC, pages
78-92, 2003.
[7] N. Duffy and D. Helmbold. Boosting methods for regression.
Mach. Learn., 47(2-3):153-200, 2002.
[8] Y. Freund, R. D. Iyer, R. E. Schapire, and Y. Singer. An
efficient boosting algorithm for combining preferences.
Journal of Machine Learning Research, 4:933-969, 2003.
[9] Y. Freund and R. E. Schapire. A decision-theoretic
generalization of on-line learning and an application to
boosting. J. Comput. Syst. Sci., 55(1):119-139, 1997.
[10] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic
regression: A statistical view of boosting. The Annals of
Statistics, 28(2):337-374, 2000.
[11] G. Fung, R. Rosales, and B. Krishnapuram. Learning
rankings via convex hull separation. In Advances in Neural
Information Processing Systems 18, pages 395-402. MIT
Press, Cambridge, MA, 2006.
[12] T. Hastie, R. Tibshirani, and J. H. Friedman. The Elements of
Statistical Learning. Springer, August 2001.
[13] R. Herbrich, T. Graepel, and K. Obermayer. Large Margin
rank boundaries for ordinal regression. MIT Press,
Cambridge, MA, 2000.
[14] W. Hersh, C. Buckley, T. J. Leone, and D. Hickam.
Ohsumed: an interactive retrieval evaluation and new large
test collection for research. In SIGIR, pages 192-201, 1994.
[15] K. Jarvelin and J. Kekalainen. IR evaluation methods for
retrieving highly relevant documents. In SIGIR 23, pages
41-48, 2000.
[16] T. Joachims. Optimizing search engines using clickthrough
data. In SIGKDD 8, pages 133-142, 2002.
[17] T. Joachims. A support vector method for multivariate
performance measures. In ICML 22, pages 377-384, 2005.
[18] J. Lafferty and C. Zhai. Document language models, query
models, and risk minimization for information retrieval. In
SIGIR 24, pages 111-119, 2001.
[19] D. A. Metzler, W. B. Croft, and A. McCallum. Direct
maximization of rank-based metrics for information
retrieval. Technical report, CIIR, 2005.
[20] R. Nallapati. Discriminative models for information retrieval.
In SIGIR 27, pages 64-71, 2004.
[21] L. Page, S. Brin, R. Motwani, and T. Winograd. The
pagerank citation ranking: Bringing order to the web.
Technical report, Stanford Digital Library Technologies
Project, 1998.
[22] J. M. Ponte and W. B. Croft. A language modeling approach
to information retrieval. In SIGIR 21, pages 275-281, 1998.
[23] T. Qin, T.-Y. Liu, X.-D. Zhang, Z. Chen, and W.-Y. Ma. A
study of relevance propagation for web search. In SIGIR 28,
pages 408-415, 2005.
[24] S. E. Robertson and D. A. Hull. The TREC-9 filtering track
final report. In TREC, pages 25-40, 2000.
[25] R. E. Schapire, Y. Freund, P. Barlett, and W. S. Lee. Boosting
the margin: A new explanation for the effectiveness of voting
methods. In ICML 14, pages 322-330, 1997.
[26] R. E. Schapire and Y. Singer. Improved boosting algorithms
using confidence-rated predictions. Mach. Learn.,
37(3):297-336, 1999.
[27] R. Song, J. Wen, S. Shi, G. Xin, T. yan Liu, T. Qin, X. Zheng,
J. Zhang, G. Xue, and W.-Y. Ma. Microsoft Research Asia at
web track and terabyte track of TREC 2004. In TREC, 2004.
[28] A. Trotman. Learning to rank. Inf. Retr., 8(3):359-381, 2005.
[29] J. Xu, Y. Cao, H. Li, and Y. Huang. Cost-sensitive learning
of SVM for ranking. In ECML, pages 833-840, 2006.
[30] G.-R. Xue, Q. Yang, H.-J. Zeng, Y. Yu, and Z. Chen.
Exploiting the hierarchical structure for link analysis. In
SIGIR 28, pages 186-193, 2005.
[31] H. Yu. SVM selective sampling for ranking with application
to data retrieval. In SIGKDD 11, pages 354-363, 2005.
APPENDIX
Here we give the proof of Theorem 1.
Proof. Set $Z_T = \sum_{i=1}^{m} \exp\{-E(\pi(q_i, d_i, f_T), y_i)\}$ and $\phi(t) = \frac{1}{2}(1 + \varphi(t))$. According to the definition of $\alpha_t$, we know that $e^{\alpha_t} = \sqrt{\phi(t)/(1-\phi(t))}$. Then
$$Z_T = \sum_{i=1}^{m} \exp\{-E(\pi(q_i, d_i, f_{T-1} + \alpha_T h_T), y_i)\} = \sum_{i=1}^{m} \exp\{-E(\pi(q_i, d_i, f_{T-1}), y_i) - \alpha_T E(\pi(q_i, d_i, h_T), y_i) - \delta_i^T\}$$
$$\leq \sum_{i=1}^{m} \exp\{-E(\pi(q_i, d_i, f_{T-1}), y_i)\}\exp\{-\alpha_T E(\pi(q_i, d_i, h_T), y_i)\}\, e^{-\delta_{\min}^T}$$
$$= e^{-\delta_{\min}^T} Z_{T-1} \sum_{i=1}^{m} \frac{\exp\{-E(\pi(q_i, d_i, f_{T-1}), y_i)\}}{Z_{T-1}}\exp\{-\alpha_T E(\pi(q_i, d_i, h_T), y_i)\} = e^{-\delta_{\min}^T} Z_{T-1} \sum_{i=1}^{m} P_T(i)\exp\{-\alpha_T E(\pi(q_i, d_i, h_T), y_i)\}.$$
Moreover, if $E(\pi(q_i, d_i, h_T), y_i) \in [-1, +1]$, then
$$Z_T \leq e^{-\delta_{\min}^T} Z_{T-1} \sum_{i=1}^{m} P_T(i)\left[\frac{1 + E(\pi(q_i, d_i, h_T), y_i)}{2}\, e^{-\alpha_T} + \frac{1 - E(\pi(q_i, d_i, h_T), y_i)}{2}\, e^{\alpha_T}\right]$$
$$= e^{-\delta_{\min}^T} Z_{T-1}\left[\phi(T)\sqrt{\frac{1-\phi(T)}{\phi(T)}} + (1-\phi(T))\sqrt{\frac{\phi(T)}{1-\phi(T)}}\right] = Z_{T-1}\, e^{-\delta_{\min}^T}\sqrt{4\phi(T)(1-\phi(T))}$$
$$\leq Z_{T-2}\prod_{t=T-1}^{T} e^{-\delta_{\min}^t}\sqrt{4\phi(t)(1-\phi(t))} \;\leq\; \cdots \;\leq\; Z_1 \prod_{t=2}^{T} e^{-\delta_{\min}^t}\sqrt{4\phi(t)(1-\phi(t))}$$
$$= m\sum_{i=1}^{m}\frac{1}{m}\exp\{-E(\pi(q_i, d_i, \alpha_1 h_1), y_i)\}\prod_{t=2}^{T} e^{-\delta_{\min}^t}\sqrt{4\phi(t)(1-\phi(t))}$$
$$= m\sum_{i=1}^{m}\frac{1}{m}\exp\{-\alpha_1 E(\pi(q_i, d_i, h_1), y_i) - \delta_i^1\}\prod_{t=2}^{T} e^{-\delta_{\min}^t}\sqrt{4\phi(t)(1-\phi(t))}$$
$$\leq m\, e^{-\delta_{\min}^1}\sum_{i=1}^{m}\frac{1}{m}\exp\{-\alpha_1 E(\pi(q_i, d_i, h_1), y_i)\}\prod_{t=2}^{T} e^{-\delta_{\min}^t}\sqrt{4\phi(t)(1-\phi(t))}$$
$$\leq m\, e^{-\delta_{\min}^1}\sqrt{4\phi(1)(1-\phi(1))}\prod_{t=2}^{T} e^{-\delta_{\min}^t}\sqrt{4\phi(t)(1-\phi(t))} = m\prod_{t=1}^{T} e^{-\delta_{\min}^t}\sqrt{1-\varphi(t)^{2}}.$$
Therefore,
$$\frac{1}{m}\sum_{i=1}^{m} E(\pi(q_i, d_i, f_T), y_i) \geq \frac{1}{m}\sum_{i=1}^{m}\left\{1 - \exp\left(-E(\pi(q_i, d_i, f_T), y_i)\right)\right\} \geq 1 - \prod_{t=1}^{T} e^{-\delta_{\min}^t}\sqrt{1-\varphi(t)^{2}}.$$ | support vector machine;trained ranking model;rankboost;weak ranker;ranking model tuning;ranking model;boost;novel learning algorithm;learn to rank;training process;machine learning;new learning algorithm;document retrieval;information retrieval;re-weighted training datum |
train_H-37 | Relaxed Online SVMs for Spam Filtering | Spam is a key problem in electronic communication, including large-scale email systems and the growing number of blogs. Content-based filtering is one reliable method of combating this threat in its various forms, but some academic researchers and industrial practitioners disagree on how best to filter spam. The former have advocated the use of Support Vector Machines (SVMs) for content-based filtering, as this machine learning methodology gives state-of-the-art performance for text classification. However, similar performance gains have yet to be demonstrated for online spam filtering. Additionally, practitioners cite the high cost of SVMs as reason to prefer faster (if less statistically robust) Bayesian methods. In this paper, we offer a resolution to this controversy. First, we show that online SVMs indeed give state-of-the-art classification performance on online spam filtering on large benchmark data sets. Second, we show that nearly equivalent performance may be achieved by a Relaxed Online SVM (ROSVM) at greatly reduced computational cost. Our results are experimentally verified on email spam, blog spam, and splog detection tasks. | 1. INTRODUCTION
Electronic communication is increasingly plagued by
unwanted or harmful content known as spam. The most well
known form of spam is email spam, which remains a major
problem for large email systems. Other forms of spam are
also becoming problematic, including blog spam, in which
spammers post unwanted comments in blogs [21], and splogs,
which are fake blogs constructed to enable link spam with
the hope of boosting the measured importance of a given
webpage in the eyes of automated search engines [17]. There
are a variety of methods for identifying these many forms
of spam, including compiling blacklists of known spammers,
and conducting link analysis.
The approach of content analysis has shown particular
promise and generality for combating spam. In content
analysis, the actual message text (often including hyper-text and
meta-text, such as HTML and headers) is analyzed using
machine learning techniques for text classification to
determine if the given content is spam. Content analysis has
been widely applied in detecting email spam [11], and has
also been used for identifying blog spam [21] and splogs [17].
In this paper, we do not explore the related problem of link
spam, which is currently best combated by link analysis [13].
1.1 An Anti-Spam Controversy
The anti-spam community has been divided on the choice
of the best machine learning method for content-based spam
detection. Academic researchers have tended to favor the
use of Support Vector Machines (SVMs), a statistically
robust machine learning method [7] which yields
state-of-theart performance on general text classification [14]. However,
SVMs typically require training time that is quadratic in the
number of training examples, and are impractical for
largescale email systems. Practitioners requiring content-based
spam filtering have typically chosen to use the faster (if
less statistically robust) machine learning method of Naive
Bayes text classification [11, 12, 20]. This Bayesian method
requires only linear training time, and is easily implemented
in an online setting with incremental updates. This allows a
deployed system to easily adapt to a changing environment
over time. Other fast methods for spam filtering include
compression models [1] and logistic regression [10]. It has
not yet been empirically demonstrated that SVMs give
improved performance over these methods in an online spam
detection setting [4].
1.2 Contributions
In this paper, we address the anti-spam controversy and
offer a potential resolution. We first demonstrate that
online SVMs do indeed provide state-of-the-art spam detection
through empirical tests on several large benchmark data sets
of email spam. We then analyze the effect of the tradeoff
parameter in the SVM objective function, which shows that
the expensive SVM methodology may, in fact, be overkill for
spam detection. We reduce the computational cost of SVM
learning by relaxing this requirement on the maximum
margin in online settings, and create a Relaxed Online SVM,
ROSVM, appropriate for high performance content-based
spam filtering in large-scale settings.
2. SPAM AND ONLINE SVMS
The controversy between academics and practitioners in
spam filtering centers on the use of SVMs. The former
advocate their use, but have yet to demonstrate strong
performance with SVMs on online spam filtering. Indeed, the
results of [4] show that, when used with default parameters,
SVMs actually perform worse than other methods. In this
section, we review the basic workings of SVMs and describe
a simple Online SVM algorithm. We then show that Online
SVMs indeed achieve state-of-the-art performance on
filtering email spam, blog comment spam, and splogs, so long as
the tradeoff parameter C is set to a high value. However, the
cost of Online SVMs turns out to be prohibitive for
largescale applications. These findings motivate our proposal of
Relaxed Online SVMs in the following section.
2.1 Background: SVMs
SVMs are a robust machine learning methodology which
has been shown to yield state-of-the-art performance on text
classification [14], by finding a hyperplane that separates
two classes of data in data space while maximizing the
margin between them.
We use the following notation to describe SVMs, which
draws from [23]. A data set X contains n labeled example
vectors {(x1, y1) . . . (xn, yn)}, where each xi is a vector
containing features describing example i, and each yi is the class
label for that example. In spam detection, the classes spam
and ham (i.e., not spam) are assigned the numerical class
labels +1 and −1, respectively. The linear SVMs we employ
in this paper use a hypothesis vector w and bias term b to
classify a new example x, by generating a predicted class
label f(x):
f(x) = sign(< w, x > +b)
SVMs find the hypothesis w, which defines the separating
hyperplane, by minimizing the following objective function
over all n training examples:
$$\tau(w, \xi) = \frac{1}{2}\|w\|^{2} + C\sum_{i=1}^{n} \xi_i$$
under the constraints that
$$\forall i \in \{1, \ldots, n\}:\ y_i(\langle w, x_i \rangle + b) \geq 1 - \xi_i,\quad \xi_i \geq 0.$$
In this objective function, each slack variable ξi shows the
amount of error that the classifier makes on a given example
xi. Minimizing the sum of the slack variables corresponds
to minimizing the loss function on the training data, while
minimizing the term $\frac{1}{2}\|w\|^{2}$ corresponds to maximizing the
margin between the two classes [23]. These two optimization
goals are often in conflict; the tradeoff parameter C
determines how much importance to give each of these tasks.
Linear SVMs exploit data sparsity to classify a new
instance in O(s) time, where s is the number of non-zero
features. This is the same classification time as other linear
Given: data set X = (x1, y1), . . . , (xn, yn), C, m:
Initialize w := 0, b := 0, seenData := { }
For Each xi ∈ X do:
    Classify xi using f(xi) = sign(<w, xi> + b)
    If yi f(xi) < 1:
        Find w', b' using SMO on seenData, using w, b as seed hypothesis.
    Add xi to seenData
done
Figure 1: Pseudo code for Online SVM.
classifiers, and as Naive Bayesian classification. Training
SVMs, however, typically takes $O(n^2)$ time, for n training
examples. A variant for linear SVMs was recently proposed
which trains in O(ns) time [15], but because this method
has a high constant, we do not explore it here.
2.2 Online SVMs
In many traditional machine learning applications, SVMs
are applied in batch mode. That is, an SVM is trained on
an entire set of training data, and is then tested on a
separate set of testing data. Spam filtering is typically tested
and deployed in an online setting, which proceeds
incrementally. Here, the learner classifies a new example, is told if
its prediction is correct, updates its hypothesis accordingly,
and then awaits a new example. Online learning allows a
deployed system to adapt itself in a changing environment.
Re-training an SVM from scratch on the entire set of
previously seen data for each new example is cost prohibitive.
However, using an old hypothesis as the starting point for
re-training reduces this cost considerably. One method of
incremental and decremental SVM learning was proposed in
[2]. Because we are only concerned with incremental
learning, we apply a simpler algorithm for converting a batch
SVM learner into an online SVM (see Figure 1 for
pseudocode), which is similar to the approach of [16].
Each time the Online SVM encounters an example that
was poorly classified, it retrains using the old hypothesis as
a starting point. Note that due to the Karush-Kuhn-Tucker
(KKT) conditions, it is not necessary to re-train on
well-classified examples that are outside the margins [23].
We used Platt's SMO algorithm [22] as a core SVM solver,
because it is an iterative method that is well suited to
converge quickly from a good initial hypothesis. Because
previous work (and our own initial testing) indicates that binary
feature values give the best results for spam filtering [20,
9], we optimized our implementation of the Online SMO to
exploit fast inner-products with binary vectors. 1
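The prediction step itself is the inexpensive part. With binary, sparse feature vectors, computing f(x) only touches the s non-zero features of x, as in the small sketch below (Python; the dictionary representation is our own illustration, not the authors' implementation).

```python
def classify(w, b, x):
    """f(x) = sign(<w, x> + b) for a sparse example x given as a
    {feature: value} dict; runs in O(s) time for s non-zero features."""
    score = sum(w.get(f, 0.0) * v for f, v in x.items()) + b
    return (1 if score >= 0 else -1), score
```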
2.3 Feature Mapping Spam Content
Extracting machine learning features from text may be
done in a variety of ways, especially when that text may
include hyper-content and meta-content such as HTML and
header information. However, previous research has shown
that simple methods from text classification, such as bag
of words vectors, and overlapping character-level n-grams,
can achieve strong results [9]. Formally, a bag of words
vector is a vector x with a unique dimension for each possible
1 Our source code is freely available at www.cs.tufts.edu/∼dsculley/onlineSMO.
Figure 2: Tuning the Tradeoff Parameter C. Tests were conducted with Online SMO, using binary feature vectors (words and 2-, 3-, and 4-grams), on the spamassassin data set of 6034 examples. Graph plots C versus Area under the ROC curve.
word, defined as a contiguous substring of non-whitespace
characters. An n-gram vector is a vector x with a unique
dimension for each possible substring of n total characters.
Note that n-grams may include whitespace, and are
overlapping. We use binary feature scoring, which has been shown
to be most effective for a variety of spam detection
methods [20, 9]. We normalize the vectors with the Euclidean
norm. Furthermore, with email data, we reduce the impact
of long messages (for example, with attachments) by
considering only the first 3,000 characters of each string. For blog
comments and splogs, we consider the whole text,
including any meta-data such as HTML tags, as given. No other
feature selection or domain knowledge was used.
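A compact sketch of this feature mapping is shown below (Python; storing the vector as a sparse {feature: weight} dictionary is our choice and is not prescribed by the paper).

```python
import math

def binary_char_ngrams(text, n=4, max_chars=3000):
    """Binary, Euclidean-normalized character n-gram features.

    Only the first max_chars characters are kept, as done for email above;
    for blog comments and splogs the whole text would be passed instead.
    """
    text = text[:max_chars]
    grams = {text[i:i + n] for i in range(max(len(text) - n + 1, 0))}
    if not grams:
        return {}
    w = 1.0 / math.sqrt(len(grams))       # binary scoring, then unit norm
    return {g: w for g in grams}

def binary_bag_of_words(text, max_chars=3000):
    """Binary bag-of-words variant: whitespace-delimited substrings."""
    words = set(text[:max_chars].split())
    if not words:
        return {}
    w = 1.0 / math.sqrt(len(words))
    return {t: w for t in words}
```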
2.4 Tuning the Tradeoff Parameter, C
The SVM tradeoff parameter C must be tuned to balance
the (potentially conflicting) goals of maximizing the
margin and minimizing the training error. Early work on SVM
based spam detection [9] showed that high values of C give
best performance with binary features. Later work has not
always followed this lead: a (low) default setting of C was
used on splog detection [17], and also on email spam [4].
Following standard machine learning practice, we tuned C
on separate tuning data not used for later testing. We used
the publicly available spamassassin email spam data set,
and created an online learning task by randomly interleaving
all 6034 labeled messages to create a single ordered set.
For tuning, we performed a coarse parameter search for C
using powers of ten from .0001 to 10000. We used the Online
SVM described above, and tested both binary bag of words
vectors and n-gram vectors with n = {2, 3, 4}. We used the
first 3000 characters of each message, which included header
information, body of the email, and possibly attachments.
Following the recommendation of [6], we use Area under
the ROC curve as our evaluation measure. The results (see
Figure 2) agree with [9]: there is a plateau of high
performance achieved with all values of C ≥ 10, and performance
degrades sharply with C < 1. For the remainder of our
experiments with SVMs in this paper, we set C = 100. We
will return to the observation that very high values of C do
not degrade performance as support for the intuition that
relaxed SVMs should perform well on spam.
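The tuning procedure is a coarse grid search over powers of ten, scored by area under the ROC curve. The sketch below shows the shape of such a search using scikit-learn as a stand-in batch solver; it scores a single held-out split for brevity, whereas the experiment above runs the full online protocol over the interleaved spamassassin messages.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC

def coarse_search_C(X, y, n_train, cs=10.0 ** np.arange(-4, 5)):
    """Return (best C, its ROC area) over a coarse grid of C values.

    X : (n, d) feature matrix in chronological order; y : labels in {+1, -1}.
    """
    results = {}
    for C in cs:
        clf = SVC(kernel="linear", C=C).fit(X[:n_train], y[:n_train])
        results[C] = roc_auc_score(y[n_train:], clf.decision_function(X[n_train:]))
    best = max(results, key=results.get)
    return best, results[best]
```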
Table 1: Results for Email Spam filtering with
Online SVM on benchmark data sets. Score reported
is (1-ROCA)%, where 0 is optimal.
trec05p-1 trec06p
OnSVM: words 0.015 (.011-.022) 0.034 (.025-.046)
3-grams 0.011 (.009-.015) 0.025 (.017-.035)
4-grams 0.008 (.007-.011) 0.023 (.017-.032)
SpamProbe 0.059 (.049-.071) 0.092 (.078-.110)
BogoFilter 0.048 (.038-.062) 0.077 (.056-.105)
TREC Winners 0.019 (.015-.023) 0.054 (.034-.085)
53-Ensemble 0.007 (.005-.008) 0.020 (.007-.050)
Table 2: Results for Blog Comment Spam Detection
using SVMs and Leave One Out Cross Validation.
We report the same performance measures as in the
prior work for meaningful comparison.
accuracy precision recall
SVM C = 100: words 0.931 0.946 0.954
3-grams 0.951 0.963 0.965
4-grams 0.949 0.967 0.956
Prior best method 0.83 0.874 0.874
2.5 Email Spam and Online SVMs
With C tuned on a separate tuning set, we then tested the
performance of Online SVMs in spam detection. We used
two large benchmark data sets of email spam as our test
corpora. These data sets are the 2005 TREC public data set
trec05p-1 of 92,189 messages, and the 2006 TREC public
data sets, trec06p, containing 37,822 messages in English.
(We do not report our strong results on the trec06c corpus
of Chinese messages as there have been questions raised over
the validity of this test set.) We used the canonical ordering
provided with each of these data sets for fair comparison.
Results for these experiments, with bag of words vectors
and n-gram vectors, appear in Table 1. To compare our
results with previous scores on these data sets, we use the
same (1-ROCA)% measure described in [6], which is one
minus the area under the ROC curve, expressed as a percent.
This measure shows the percent chance of error made by
a classifier asserting that one message is more likely to be
spam than another. These results show that Online SVMs
do give state of the art performance on email spam. The
only known system that out-performs the Online SVMs on
the trec05p-1 data set is a recent ensemble classifier which
combines the results of 53 unique spam filters [19]. To
our knowledge, the Online SVM has out-performed every
other single filter on these data sets, including those using
Bayesian methods [5, 3], compression models [5, 3], logistic
regression [10], and perceptron variants [3], the TREC
competition winners [5, 3], and open source email spam filters
BogoFilter v1.1.5 and SpamProbe v1.4d.
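Given a filter's raw scores, the reported measure is a one-liner; the sketch below uses scikit-learn (our choice) and omits the bootstrapped confidence intervals shown in parentheses in Table 1.

```python
from sklearn.metrics import roc_auc_score

def one_minus_roca_percent(y_true, spam_scores):
    """(1-ROCA)%: one minus the area under the ROC curve, as a percent,
    where 0 is optimal. y_true uses +1 for spam and -1 for ham; higher
    spam_scores mean the filter considers the message more likely spam."""
    return 100.0 * (1.0 - roc_auc_score(y_true, spam_scores))
```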
2.6 Blog Comment Spam and SVMs
Blog comment spam is similar to email spam in many
regards, and content-based methods have been proposed for
detecting these spam comments [21]. However, large
benchmark data sets of labeled blog comment spam do not yet
exist. Thus, we run experiments on the only publicly available
data set we know of, which was used in content-based blog
Table 3: Results for Splog vs. Blog Detection using
SVMs and Leave One Out Cross Validation. We
report the same evaluation measures as in the prior
work for meaningful comparison.
features precision recall F1
SVM C = 100: words 0.921 0.870 0.895
3-grams 0.904 0.866 0.885
4-grams 0.928 0.876 0.901
Prior SVM with: words 0.887 0.864 0.875
4-grams 0.867 0.844 0.855
words+urls 0.893 0.869 0.881
comment spam detection experiments by [21]. Because of
the small size of the data set, and because prior researchers
did not conduct their experiments in an on-line setting, we
test the performance of linear SVMs using leave-one-out
cross validation, with SVM-Light, a standard open-source
SVM implementation [14]. We use the parameter setting
C = 100, with the same feature space mappings as above.
We report accuracy, precision, and recall to compare these to
the results given on the same data set by [21]. These results
(see Table 2) show that SVMs give superior performance on
this data set to the prior methodology.
2.7 Splogs and SVMs
As with blog comment spam, there is not yet a large,
publicly available benchmark corpus of labeled splog detection
test data. However, the authors of [17] kindly provided us
with the labeled data set of 1,389 blogs and splogs that they
used to test content-based splog detection using SVMs. The
only difference between our methodology and that of [17] is
that they used default parameters for C, which SVM-Light
sets to $\frac{1}{\mathrm{avg}\,\|x\|^{2}}$. (For normalized vectors, this default value
sets C = 1.) They also tested several domain-informed
feature mappings, such as giving special features to url tags.
For our experiments, we used the same feature mappings
as above, and tested the effect of setting C = 100. As with
the methodology of [17], we performed leave one out cross
validation for apples-to-apples comparison on this data. The
results (see Table 3) show that a high value of C produces
higher performance for the same feature space mappings,
and even enables the simple 4-gram mapping to out-perform
the previous best mapping which incorporated domain
knowledge by using words and urls.
2.8 Computational Cost
features trec06p trec05p-1
words 12196s 66478s
3-grams 44605s 128924s
4-grams 87519s 242160s
corpus size 32822 92189
Table 4: Execution time for Online SVMs with email
spam detection, in CPU seconds. These times do
not include the time spent mapping strings to
feature vectors. The number of examples in each data
set is given in the last row as corpus size.
Figure 3: Visualizing the effect of C.
Hyperplane A maximizes the margin while accepting a
small amount of training error. This corresponds
to setting C to a low value. Hyperplane B
accepts a smaller margin in order to reduce
training error. This corresponds to setting C to a high
value. Content-based spam filtering appears to do
best with high values of C.
The results presented in this section demonstrate that linear SVMs give state of the art performance on content-based
spam filtering. However, this performance comes at a price.
Although the blog comment spam and splog data sets are
too small for the quadratic training time of SVMs to
appear problematic, the email data sets are large enough to
illustrate the problems of quadratic training cost.
Table 4 shows computation time versus data set size for
each of the online learning tasks (on the same system). The
training cost of SVMs is prohibitive for large-scale content-based
spam detection, or for a large blog host. In the
following section, we reduce this cost by relaxing the expensive
requirements of SVMs.
3. RELAXED ONLINE SVMS (ROSVM)
One of the main benefits of SVMs is that they find a
decision hyperplane that maximizes the margin between classes
in the data space. Maximizing the margin is expensive,
typically requiring quadratic training time in the number
of training examples. However, as we saw in the previous
section, the task of content-based spam detection is best
achieved by SVMs with a high value of C. Setting C to a
high value for this domain implies that minimizing
training loss is more important than maximizing the margin (see
Figure 3).
Thus, while SVMs do create high performance spam
filters, applying them in practice is overkill. The full margin
maximization feature that they provide is unnecessary, and
relaxing this requirement can reduce computational cost.
We propose three ways to relax Online SVMs:
• Reduce the size of the optimization problem by only
optimizing over the last p examples.
• Reduce the number of training updates by only
training on actual errors.
• Reduce the number of iterations in the iterative SVM
Given: dataset X = (x1, y1), . . . , (xn, yn), C, m, p:
Initialize w := 0, b := 0, seenData := { }
For Each xi ∈ X do:
    Classify xi using f(xi) = sign(<w, xi> + b)
    If yi f(xi) < m:
        Find w', b' with SMO on seenData, using w, b as seed hypothesis.
        Set (w, b) := (w', b')
    If size(seenData) > p:
        Remove oldest example from seenData
    Add xi to seenData
done
Figure 4: Pseudo-code for Relaxed Online SVM.
solver by allowing an approximate solution to the
optimization problem.
As we describe in the remainder of this subsection, all of
these methods trade statistical robustness for reduced
computational cost. Experimental results reported in the
following section show that they equal or approach the
performance of full Online SVMs on content-based spam detection.
3.1 Reducing Problem Size
In the full Online SVMs, we re-optimize over the full set
of seen data on every update, which becomes expensive as
the number of seen data points grows. We can bound this
expense by only considering the p most recent examples for
optimization (see Figure 4 for pseudo-code).
Note that this is not equivalent to training a new SVM
classifier from scratch on the p most recent examples,
because each successive optimization problem is seeded with
the previous hypothesis w [8]. This hypothesis may contain
values for features that do not occur anywhere in the p most
recent examples, and these will not be changed. This allows
the hypothesis to remember rare (but informative) features
that were learned further than p examples in the past.
Formally, the optimization problem is now defined most
clearly in the dual form [23]. In this case, the original
soft-margin SVM is computed by maximizing, at example n:
$$W(\alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2}\sum_{i,j=1}^{n} \alpha_i \alpha_j y_i y_j \langle x_i, x_j \rangle,$$
subject to the previous constraints [23]:
$$\forall i \in \{1, \ldots, n\}:\ 0 \leq \alpha_i \leq C \quad\text{and}\quad \sum_{i=1}^{n} \alpha_i y_i = 0.$$
To this, we add the additional lookback buffer constraint
∀j ∈ {1, . . . , (n − p)} : αj = cj
where cj is a constant, fixed as the last value found for αj
while j > (n − p). Thus, the margin found by an
optimization is not guaranteed to be one that maximizes the margin
for the global data set of examples {x1, . . . , xn)}, but rather
one that satisfies a relaxed requirement that the margin be
maximized over the examples { x(n−p+1), . . . , xn}, subject
to the fixed constraints on the hyperplane that were found
in previous optimizations over examples {x1, . . . , x(n−p)}.
(For completeness, when p ≥ n, define (n − p) = 1.) This
set of constraints reduces the number of free variables in the
optimization problem, reducing computational cost.
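For concreteness, the objective being maximized can be evaluated in a few lines (Python/NumPy, our illustration); under the lookback relaxation, the entries of α with index at most n − p are simply held fixed at their previously found values rather than treated as free variables.

```python
import numpy as np

def dual_objective(alpha, y, K):
    """Soft-margin SVM dual objective W(alpha).

    alpha : (n,) dual variables, each in [0, C]
    y     : (n,) labels in {+1, -1}
    K     : (n, n) Gram matrix with K[i, j] = <x_i, x_j>
    """
    a_y = alpha * y
    return float(alpha.sum() - 0.5 * a_y @ K @ a_y)
```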
3.2 Reducing Number of Updates
As noted before, the KKT conditions show that a well
classified example will not change the hypothesis; thus it is
not necessary to re-train when we encounter such an
example. Under the KKT conditions, an example xi is considered
well-classified when yif(xi) > 1. If we re-train on every
example that is not well-classified, our hyperplane will be
guaranteed to be optimal at every step.
The number of re-training updates can be reduced by
relaxing the definition of well classified. An example xi is
now considered well classified when yif(xi) > M, for some
0 ≤ M ≤ 1. Here, each update still produces an optimal
hyperplane. The learner may encounter an example that lies
within the margins, but farther from the margins than M.
Such an example means the hypothesis is no longer globally
optimal for the data set, but it is considered good enough
for continued use without immediate retraining.
This update procedure is similar to that used by
variants of the Perceptron algorithm [18]. In the extreme case,
we can set M = 0, which creates a mistake driven Online
SVM. In the experimental section, we show that this
version of Online SVMs, which updates only on actual errors,
does not significantly degrade performance on content-based
spam detection, but does significantly reduce cost.
3.3 Reducing Iterations
As an iterative solver, SMO makes repeated passes over
the data set to optimize the objective function. SMO has
one main loop, which can alternate between passing over
the entire data set, or the smaller active set of current
support vectors [22]. Successive iterations of this loop bring
the hyperplane closer to an optimal value. However, it is
possible that these iterations provide less benefit than their
expense justifies. That is, a close first approximation may
be good enough. We introduce a parameter T to control the
maximum number of iterations we allow. As we will see in
the experimental section, this parameter can be set as low
as 1 with little impact on the quality of results, providing
computational savings.
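Putting the three relaxations together, the online protocol has roughly the shape sketched below. This is an illustration only: scikit-learn's SVC is used as a stand-in solver that refits on the lookback buffer from scratch, whereas the ROSVM described above seeds SMO with the previous hypothesis and caps its iterations at T, neither of which the stand-in exposes; real feature vectors would also be sparse rather than dense.

```python
from collections import deque
import numpy as np
from sklearn.svm import SVC

def rosvm_stream(stream, C=100.0, M=0.8, p=10000):
    """Relaxed online protocol over an iterable of (x, y) pairs.

    x : dense feature vector (numpy array); y : +1 for spam, -1 for ham.
    Returns the score assigned to each example before its label was seen.
    Setting M = 1 and an unbounded p recovers the Online SVM update rule.
    """
    buffer = deque(maxlen=p)            # only the p most recent examples
    model, scores = None, []
    for x, y in stream:
        s = float(model.decision_function([x])[0]) if model is not None else 0.0
        scores.append(s)
        buffer.append((x, y))
        if y * s < M:                   # relaxed "poorly classified" test
            X = np.array([bx for bx, _ in buffer])
            Y = np.array([by for _, by in buffer])
            if len(np.unique(Y)) == 2:  # the solver needs both classes present
                model = SVC(kernel="linear", C=C).fit(X, Y)
    return scores
```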
4. EXPERIMENTS
In Section 2, we argued that the strong performance on
content-based spam detection with SVMs with a high value
of C shows that the maximum margin criterion is overkill,
incurring unnecessary computational cost. In Section 3, we
proposed ROSVM to address this issue, as both of these
methods trade away guarantees on the maximum margin
hyperplane in return for reduced computational cost. In this
section, we test these methods on the same benchmark data
sets to see if state of the art performance may be achieved by
these less costly methods. We find that ROSVM is capable
of achieving these high levels of performance with greatly
reduced cost. Our main tests on content-based spam
detection are performed on large benchmark sets of email data.
We then apply these methods on the smaller data sets of
blog comment spam and blogs, with similar performance.
4.1 ROSVM Tests
In Section 3, we proposed three approaches for reducing
the computational cost of Online SMO: reducing the
problem size, reducing the number of optimization iterations,
and reducing the number of training updates. Each of these
approaches relaxes the maximum margin criterion on the global
set of previously seen data. Here we test the effect that each
of these methods has on both effectiveness and efficiency. In
each of these tests, we use the large benchmark email data
sets, trec05p-1 and trec06p.
Figure 5: Reduced Size Tests. Buffer size p versus (1-ROCA)% (top) and CPU seconds (bottom) on trec05p-1 and trec06p.
4.1.1 Testing Reduced Size
For our first ROSVM test, we experiment on the effect
of reducing the size of the optimization problem by only
considering the p most recent examples, as described in the
previous section. For this test, we use the same 4-gram
mappings as for the reference experiments in Section 2, with the
same value C = 100. We test a range of values p in a coarse
grid search. Figure 5 reports the effect of the buffer size p in
relationship to the (1-ROCA)% performance measure (top),
and the number of CPU seconds required (bottom).
The results show that values of p < 100 do result in
degraded performance, although they evaluate very quickly.
However, p values from 500 to 10,000 perform almost as
well as the original Online SMO (represented here as p =
100, 000), at dramatically reduced computational cost.
These results are important for making state of the art
performance on large-scale content-based spam detection
practical with online SVMs. Ordinarily, the training time
would grow quadratically with the number of seen examples.
However, fixing a value of p ensures that the training time
is independent of the size of the data set. Furthermore, a
lookback buffer allows the filter to adjust to concept drift.
Figure 6: Reduced Iterations Tests. Maximum SMO iterations per update versus (1-ROCA)% (top) and CPU seconds (bottom) on trec05p-1 and trec06p.
4.1.2 Testing Reduced Iterations
In the second ROSVM test, we experiment with reducing
the number of iterations. Our initial tests showed that the
maximum number of iterations used by Online SMO was
rarely much larger than 10 on content-based spam detection;
thus we tested values of T = {1, 2, 5, ∞}. Other parameters
were identical to the original Online SVM tests.
The results on this test were surprisingly stable (see
Figure 6). Reducing the maximum number of SMO iterations
per update had essentially no impact on classification
performance, but did result in a moderate increase in speed. This
suggests that any additional iterations are spent attempting
to find improvements to a hyperplane that is already very
close to optimal. These results show that for content-based
spam detection, we can reduce computational cost by
allowing only a single SMO iteration (that is, T = 1) with
effectively equivalent performance.
4.1.3 Testing Reduced Updates
For our third ROSVM experiment, we evaluate the impact
of adjusting the parameter M to reduce the total number of
updates. As noted before, when M = 1, the hyperplane is
globally optimal at every step. Reducing M allows a slightly
inconsistent hyperplane to persist until it encounters an
example for which it is too inconsistent. We tested values of
M from 0 to 1, at increments of 0.1. (Note that we used
p = 10000 to decrease the cost of evaluating these tests.)
The results for these tests appear in Figure 7, and
show that there is a slight degradation in performance with
reduced values of M, and that this degradation in
performance is accompanied by an increase in efficiency. Values of
Figure 7: Reduced Updates Tests. Update threshold M versus (1-ROCA)% (top) and CPU seconds (bottom) on trec05p-1 and trec06p.
M > 0.7 give effectively equivalent performance as M = 1,
and still reduce cost.
4.2 Online SVMs and ROSVM
We now compare ROSVM against Online SVMs on the
email spam, blog comment spam, and splog detection tasks.
These experiments show comparable performance on these
tasks, at radically different costs. In the previous section,
the effect of the different relaxation methods was tested
separately. Here, we tested these methods together to
create a full implementation of ROSVM. We chose the values
p = 10000, T = 1, M = 0.8 for the email spam detection
tasks. Note that these parameter values were selected as
those allowing ROSVM to achieve comparable performance
results with Online SVMs, in order to test total difference
in computational cost. The splog and blog data sets were
much smaller, so we set p = 100 for these tasks to allow
meaningful comparisons between the reduced size and full
size optimization problems. Because these values were not
hand-tuned, both generalization performance and runtime
results are meaningful in these experiments.
4.2.1 Experimental Setup
We compared Online SVMs and ROSVM on email spam,
blog comment spam, and splog detection. For the email
spam, we used the two large benchmark corpora, trec05p-1
and trec06p, in the standard online ordering. We randomly
ordered both the blog comment spam corpus and the splog
corpus to create online learning tasks. Note that this is a
different setting than the leave-one-out cross validation task
presented on these corpora in Section 2 - the results are
not directly comparable. However, this experimental design
Table 5: Email Spam Benchmark Data. These
results compare Online SVM and ROSVM on email
spam detection, using binary 4-gram feature space.
Score reported is (1-ROCA)%, where 0 is optimal.
trec05p-1 trec05p-1 trec06p trec06p
(1-ROC)% CPUs (1-ROC)% CPUs
OnSVM 0.0084 242,160 0.0232 87,519
ROSVM 0.0090 24,720 0.0240 18,541
Table 6: Blog Comment Spam. These results
comparing Online SVM and ROSVM on blog comment
spam detection using binary 4-gram feature space.
Acc. Prec. Recall F1 CPUs
OnSVM 0.926 0.930 0.962 0.946 139
ROSVM 0.923 0.925 0.965 0.945 11
does allow meaningful comparison between our two online
methods on these content-based spam detection tasks.
We ran each method on each task, and report the results
in Tables 5, 6, and 7. Note that the CPU time reported for
each method was generated on the same computing system.
This time reflects only the time needed to complete online
learning on tokenized data. We do not report the time taken
to tokenize the data into binary 4-grams, as this is the same
additive constant for all methods on each task. In all cases,
ROSVM was significantly less expensive computationally.
4.3 Discussion
The comparison results shown in Tables 5, 6, and 7 are
striking in two ways. First, they show that the performance
of Online SVMs can be matched and even exceeded by
relaxed margin methods. Second, they show a dramatic
disparity in computational cost. ROSVM is an order of
magnitude more efficient than the normal Online SVM, and gives
comparable results. Furthermore, the fixed lookback buffer
ensures that the cost of each update does not depend on the
size of the data set already seen, unlike Online SVMs. Note
the blog and splog data sets are relatively small, and results
on these data sets must be considered preliminary. Overall,
these results show that there is no need to pay the high cost
of SVMs to achieve this level of performance on
content-based detection of spam. ROSVMs offer a far cheaper
alternative with little or no performance loss.
5. CONCLUSIONS
In the past, academic researchers and industrial
practitioners have disagreed on the best method for online
content-based detection of spam on the web. We have presented one
resolution to this debate.
Table 7: Splog Data Set. These results compare
Online SVM and ROSVM on splog detection using
binary 4-gram feature space.
Acc. Prec. Recall F1 CPUs
OnSVM 0.880 0.910 0.842 0.874 29353
ROSVM 0.878 0.902 0.849 0.875 1251
Online SVMs do, indeed, produce state-of-the-art performance on this task with proper
adjustment of the tradeoff parameter C, but with cost that
grows quadratically with the size of the data set. The high
values of C required for best performance with SVMs show
that the margin maximization of Online SVMs is overkill for
this task. Thus, we have proposed a less expensive
alternative, ROSVM, that relaxes this maximum margin
requirement, and produces nearly equivalent results. These
methods are efficient enough for large-scale filtering of
content-based spam in its many forms.
It is natural to ask why the task of content-based spam
detection gets strong performance from ROSVM. After all, not
all data allows the relaxation of SVM requirements. We
conjecture that email spam, blog comment spam, and splogs all
share the characteristic that a subset of features are
particularly indicative of content being either spam or not spam.
These indicative features may be sparsely represented in the
data set, because of spam methods such as word obfuscation,
in which common spam words are intentionally misspelled in
an attempt to reduce the effectiveness of word-based spam
detection. Maximizing the margin may cause these sparsely
represented features to be ignored, creating an overall
reduction in performance. It appears that spam data is highly
separable, allowing ROSVM to be successful with high
values of C and little effort given to maximizing the margin.
Future work will determine how applicable relaxed SVMs
are to the general problem of text classification.
Finally, we note that the success of relaxed SVM methods
for content-based spam detection is a result that depends
on the nature of spam data, which is potentially subject to
change. Although it is currently true that ham and spam
are linearly separable given an appropriate feature space,
this assumption may be subject to attack. While our current
methods appear robust against primitive attacks along these
lines, such as the good word attack [24], we must explore the
feasibility of more sophisticated attacks.
6. REFERENCES
[1] A. Bratko and B. Filipic. Spam filtering using
compression models. Technical Report IJS-DP-9227,
Department of Intelligent Systems, Jozef Stefan
Institute, L jubljana, Slovenia, 2005.
[2] G. Cauwenberghs and T. Poggio. Incremental and
decremental support vector machine learning. In
NIPS, pages 409-415, 2000.
[3] G. V. Cormack. TREC 2006 spam track overview. In
To appear in: The Fifteenth Text REtrieval
Conference (TREC 2006) Proceedings, 2006.
[4] G. V. Cormack and A. Bratko. Batch and on-line
spam filter comparison. In Proceedings of the Third
Conference on Email and Anti-Spam (CEAS), 2006.
[5] G. V. Cormack and T. R. Lynam. TREC 2005 spam
track overview. In The Fourteenth Text REtrieval
Conference (TREC 2005) Proceedings, 2005.
[6] G. V. Cormack and T. R. Lynam. On-line supervised
spam filter evaluation. Technical report, David R.
Cheriton School of Computer Science, University of
Waterloo, Canada, February 2006.
[7] N. Cristianini and J. Shawe-Taylor. An introduction to
support vector machines. Cambridge University Press,
2000.
[8] D. DeCoste and K. Wagstaff. Alpha seeding for
support vector machines. In KDD "00: Proceedings of
the sixth ACM SIGKDD international conference on
Knowledge discovery and data mining, pages 345-349,
2000.
[9] H. Drucker, V. Vapnik, and D. Wu. Support vector
machines for spam categorization. IEEE Transactions
on Neural Networks, 10(5):1048-1054, 1999.
[10] J. Goodman and W. Yin. Online discriminative spam
filter training. In Proceedings of the Third Conference
on Email and Anti-Spam (CEAS), 2006.
[11] P. Graham. A plan for spam. 2002.
[12] P. Graham. Better bayesian filtering. 2003.
[13] Z. Gyongi and H. Garcia-Molina. Spam: It"s not just
for inboxes anymore. Computer, 38(10):28-34, 2005.
[14] T. Joachims. Text categorization with suport vector
machines: Learning with many relevant features. In
ECML "98: Proceedings of the 10th European
Conference on Machine Learning, pages 137-142,
1998.
[15] T. Joachims. Training linear svms in linear time. In
KDD "06: Proceedings of the 12th ACM SIGKDD
international conference on Knowledge discovery and
data mining, pages 217-226, 2006.
[16] J. Kivinen, A. Smola, and R. Williamson. Online
learning with kernels. In Advances in Neural
Information Processing Systems 14, pages 785-793.
MIT Press, 2002.
[17] P. Kolari, T. Finin, and A. Joshi. SVMs for the
blogosphere: Blog identification and splog detection.
AAAI Spring Symposium on Computational
Approaches to Analyzing Weblogs, 2006.
[18] W. Krauth and M. M´ezard. Learning algorithms with
optimal stability in neural networks. Journal of
Physics A, 20(11):745-752, 1987.
[19] T. Lynam, G. Cormack, and D. Cheriton. On-line
spam filter fusion. In SIGIR "06: Proceedings of the
29th annual international ACM SIGIR conference on
Research and development in information retrieval,
pages 123-130, 2006.
[20] V. Metsis, I. Androutsopoulos, and G. Paliouras.
Spam filtering with naive bayes - which naive bayes?
Third Conference on Email and Anti-Spam (CEAS),
2006.
[21] G. Mishne, D. Carmel, and R. Lempel. Blocking blog
spam with language model disagreement. Proceedings
of the 1st International Workshop on Adversarial
Information Retrieval on the Web (AIRWeb), May
2005.
[22] J. Platt. Sequenital minimal optimization: A fast
algorithm for training support vector machines. In
B. Scholkopf, C. Burges, and A. Smola, editors,
Advances in Kernel Methods - Support Vector
Learning. MIT Press, 1998.
[23] B. Scholkopf and A. Smola. Learning with Kernels:
Support Vector Machines, Regularization,
Optimization, and Beyond. MIT Press, 2001.
[24] G. L. Wittel and S. F. Wu. On attacking statistical
spam filters. CEAS: First Conference on Email and
Anti-Spam, 2004. | logistic regression;support vector machine;feature mapping;bayesian method;hyperplane;link spam;spam filter;incremental update;link analysis;machine learning technique;blog;spam filtering;content-based spam detection;splog;content-based filtering |
train_H-38 | DiffusionRank: A Possible Penicillin for Web Spamming | While the PageRank algorithm has proven to be very effective for ranking Web pages, the rank scores of Web pages can be manipulated. To handle the manipulation problem and to cast a new insight on the Web structure, we propose a ranking algorithm called DiffusionRank. DiffusionRank is motivated by the heat diffusion phenomena, which can be connected to Web ranking because the activities flow on the Web can be imagined as heat flow, the link from a page to another can be treated as the pipe of an air-conditioner, and heat flow can embody the structure of the underlying Web graph. Theoretically we show that DiffusionRank can serve as a generalization of PageRank when the heat diffusion coefficient γ tends to infinity. In such a case 1/γ = 0, DiffusionRank (PageRank) has low ability of anti-manipulation. When γ = 0, DiffusionRank obtains the highest ability of anti-manipulation, but in such a case, the web structure is completely ignored. Consequently, γ is an interesting factor that can control the balance between the ability of preserving the original Web and the ability of reducing the effect of manipulation. It is found empirically that, when γ = 1, DiffusionRank has a Penicillin-like effect on the link manipulation. Moreover, DiffusionRank can be employed to find group-to-group relations on the Web, to divide the Web graph into several parts, and to find link communities. Experimental results show that the DiffusionRank algorithm achieves the above mentioned advantages as expected. | 1. INTRODUCTION
While the PageRank algorithm [13] has proven to be very
effective for ranking Web pages, inaccurate PageRank
results are induced because of web page manipulations by
people for commercial interests. The manipulation problem is
also called the Web spam, which refers to hyperlinked pages
on the World Wide Web that are created with the intention
of misleading search engines [7]. It is reported that
approximately 70% of all pages in the .biz domain and about 35%
of the pages in the .us domain belong to the spam category
[12]. The reason for the increasing amount of Web spam is
explained in [12]: some web site operators try to influence
the positioning of their pages within search results because
of the large fraction of web traffic originating from searches
and the high potential monetary value of this traffic.
From the viewpoint of the Web site operators who want
to increase the ranking value of a particular page for search
engines, Keyword Stuffing and Link Stuffing are being used
widely [7, 12]. From the viewpoint of the search engine
managers, the Web spam is very harmful to the users' evaluations
and thus their preference in choosing search engines, because
people believe that a good search engine should not return
irrelevant or low-quality results. There are two methods
being employed to combat the Web spam problem. Machine
learning methods are employed to handle the keyword
stuffing. To successfully apply machine learning methods, we
need to dig out some useful textual features for Web pages,
to mark part of the Web pages as either spam or non-spam,
then to apply supervised learning techniques to mark other
pages. For example, see [5, 12]. Link analysis methods are
also employed to handle the link stuffing problem. One
example is the TrustRank [7], a link-based method, in which
the link structure is utilized so that human labelled trusted
pages can propagate their trust scores trough their links.
This paper focuses on the link-based method.
The rest of the materials are organized as follows. In the
next section, we give a brief literature review on various
related ranking techniques. We establish the Heat Diffusion
Model (HDM) on various cases in Section 3, and propose
DiffusionRank in Section 4. In Section 5, we describe the
data sets that we worked on and the experimental results.
Finally, we draw conclusions in Section 6.
2. LITERATURE REVIEW
The importance of a Web page is determined by either
the textual content of pages or the hyperlink structure or
both. As in previous work [7, 13], we focus on ranking
methods solely determined by hyperlink structure of the
Web graph. All the mentioned ranking algorithms are
established on a graph. For our convenience, we first give some
notations. Denote a static graph by G = (V, E), where V =
{v1, v2, . . . , vn}, E = {(vi, vj) | there is an edge from vi to
vj}. Ii and di denote the in-degree and the out-degree of
page i respectively.
2.1 PageRank
The importance of a Web page is an inherently subjective
matter, which depends on the reader's interests, knowledge
and attitudes [13]. However, the average importance of all
readers can be considered as an objective matter. PageRank
tries to find such average importance based on the Web link
structure, which is considered to contain a large amount of
statistical data. The Web is modelled by a directed graph G
in the PageRank algorithms, and the rank or importance
$x_i$ for page $v_i \in V$ is defined recursively in terms of the pages
which point to it: $x_i = \sum_{(j,i)\in E} a_{ij}x_j$, where $a_{ij}$ is assumed
to be $1/d_j$ if there is a link from $j$ to $i$, and 0 otherwise, or,
in matrix terms, $x = Ax$. When the concept of random
jump is introduced, the matrix form is changed to
$$x = [(1-\alpha)g\mathbf{1}^T + \alpha A]x, \qquad (1)$$
where $\alpha$ is the probability of following the actual link from a
page, $(1-\alpha)$ is the probability of taking a random jump,
and $g$ is a stochastic vector, i.e., $\mathbf{1}^T g = 1$. Typically, $\alpha = 0.85$,
and $g = \frac{1}{n}\mathbf{1}$ is one of the standard settings, where $\mathbf{1}$ is
the vector of all ones [6, 13].
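As a reading aid (not part of the original paper), a minimal Python sketch of the power iteration behind Eq. (1) is given below; the damping value α = 0.85 and the uniform g follow the standard setting above, while the small adjacency list is an assumed toy example without dangling nodes.

```python
import numpy as np

def pagerank(out_links, alpha=0.85, iters=100):
    """Power iteration for x = [(1 - alpha) g 1^T + alpha A] x with g = (1/n) 1."""
    n = len(out_links)
    # Column-stochastic A: A[i, j] = 1/d_j if there is a link j -> i.
    A = np.zeros((n, n))
    for j, targets in out_links.items():
        for i in targets:
            A[i, j] = 1.0 / len(targets)
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        x = (1 - alpha) / n + alpha * A.dot(x)
        x /= x.sum()          # keep x a probability vector
    return x

# Assumed toy graph: node j -> list of nodes it links to.
toy = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
print(pagerank(toy))
```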
2.2 TrustRank
TrustRank [7] is composed of two parts. The first part
is the seed selection algorithm, in which the inverse
PageRank was proposed to help an expert of determining a good
node. The second part is to utilize the biased PageRank,
in which the stochastic distribution g is set to be shared by
all the trusted pages found in the first part. Moreover, the
initial input of x is also set to be g. The justification for
the inverse PageRank and the solid experiments support its
advantage in combating the Web spam. Although there are
many variations of PageRank, e.g., a family of link-based
ranking algorithms in [2], TrustRank is especially chosen for
comparisons for three reasons: (1) it is designed for
combatting spamming; (2) its fixed parameters make a
comparison easy; and (3) it has strong theoretical relations with
PageRank and DiffusionRank.
2.3 Manifold Ranking
In [17], the idea of ranking on the data manifolds was
proposed. The data points represented as vectors in Euclidean
space are considered to be drawn from a manifold. From
the data points on such a manifold, an undirected weighted
graph is created, then the weight matrix is given by the
Gaussian Kernel smoothing. While the manifold ranking
algorithm achieves an impressive result on ranking images,
the biased vector g and the parameter k in the general
personalized PageRank in [17] are unknown in the Web graph
setting; therefore we do not include it in the comparisons.
2.4 Heat Diffusion
Heat diffusion is a physical phenomenon. In a medium,
heat always flows from positions with high temperature to
positions with low temperature. The heat kernel is used to
describe the amount of heat that one point receives from
another point. Recently, the idea of heat kernel on a manifold
is borrowed in applications such as dimension reduction [3]
and classification [9, 10, 14]. In these works, the input data
is considered to lie in a special structure.
All the above topics are related to our work. The readers
can find that our model is a generalization of PageRank in
order to resist Web manipulation, that we inherit the first
part of TrustRank, that we borrow the concept of ranking on
the manifold to introduce our model, and that heat diffusion
is a main scheme in this paper.
3. HEAT DIFFUSION MODEL
Heat diffusion provides us with another perspective about
how we can view the Web and also a way to calculate
ranking values. In this paper, the Web pages are considered to
be drawn from an unknown manifold, and the link structure
forms a directed graph, which is considered as an
approximation to the unknown manifold. The heat kernel established
on the Web graph is considered as the representation of the
relationship between Web pages. The temperature
distribution after a fixed time period, induced by a special initial
temperature distribution, is considered as the rank scores on
the Web pages. Before establishing the proposed models, we
first show our motivations.
3.1 Motivations
There are two points to explain that PageRank is
susceptible to web spam.
• Over-democratic. There is a belief behind
PageRank: all pages are born equal. This can be seen from
the equal voting ability of one page: the sum of each
column is equal to one. This equal voting ability of all
pages gives the chance for a Web site operator to
increase a manipulated page by creating a large number
of new pages pointing to this page since all the newly
created pages can obtain an equal voting right.
• Input-independent. For any given non-zero initial
input, the iteration will converge to the same stable
distribution corresponding to the maximum eigenvalue
1 of the transition matrix. This input-independent
property makes it impossible to set a special initial
input (larger values for trusted pages and less values even
negative values for spam pages) to avoid web spam.
The input-independent feature of PageRank can be further
explained as follows. $P = [(1-\alpha)g\mathbf{1}^T + \alpha A]$ is a positive
stochastic matrix if g is set to be a positive stochastic vector
(the uniform distribution is one of such settings), and so the
largest eigenvalue is 1 and there is no other eigenvalue whose
absolute value is equal to 1, which is guaranteed by the Perron
Theorem [11]. Let y be the eigenvector corresponding to 1,
then we have Py = y. Let {xk} be the sequence generated
from the iterations xk+1 = Pxk, and x0 is the initial input.
If {xk} converges to x, then xk+1 = Pxk implies that x
must satisfy Px = x. Since the only maximum eigenvalue
is 1, we have x = cy where c is a constant, and if both x
and y are normalized by their sums, then c = 1. The above
discussions show that PageRank is independent of the initial
input x0.
In our opinion, g and α are objective parameters
determined by the users' behaviors and preferences; A, α and
g together constitute the true Web structure. While A is obtained by a
crawler and the setting α = 0.85 is widely accepted,
we think that g should be determined by a user behavior
investigation, something like [1]. Without any prior
knowledge, g has to be set as $g = \frac{1}{n}\mathbf{1}$.
TrustRank model does not follow the true web structure
by setting a biased g, but the effects of combatting
spamming are achieved in [7]; PageRank is on the contrary in
some ways. We expect a ranking algorithm that has an
effect of anti-manipulation as TrustRank while respecting the
true web structure as PageRank.
We observe that the heat diffusion model is a natural way
to avoid the over-democratic and input-independent feature
of PageRank. Since heat always flows from a position with
higher temperatures to one with lower temperatures, points
are not equal as some points are born with high
temperatures while others are born with low temperatures. On the
other hand, different initial temperature distributions will
give rise to different temperature distributions after a fixed
time period. Based on these considerations, we propose the
novel DiffusionRank. This ranking algorithm is also
motivated by the viewpoint for the Web structure. We view
all the Web pages as points drawn from a highly complex
geometric structure, like a manifold in a high dimensional
space. On a manifold, heat can flow from one point to
another through the underlying geometric structure in a given
time period. Different geometric structures determine
different heat diffusion behaviors, and conversely the diffusion
behavior can reflect the geometric structure. More
specifically, on the manifold, the heat flows from one point to
another point, and in a given time period, if one point x
receives a large amount of heat from another point y, we
can say x and y are well connected, and thus x and y have
a high similarity in the sense of a high mutual connection.
We note that on a point with unit mass, the temperature
and the heat of this point are equivalent, and these two terms
are interchangeable in this paper. In the following, we first
show the HDM on a manifold, which is the origin of HDM,
but cannot be employed to the World Wide Web directly,
and so is considered as the ideal case. To connect the ideal
case and the practical case, we then establish HDM on a
graph as an intermediate case. To model the real world
problem, we further build HDM on a random graph as a
practical case. Finally we demonstrate the DiffusionRank
which is derived from the HDM on a random graph.
3.2 Heat Flow On a Known Manifold
If the underlying manifold is known, the heat flow
throughout a geometric manifold with initial conditions can be
described by the following second order differential equation:
$\frac{\partial f(x,t)}{\partial t} - \Delta f(x,t) = 0$, where $f(x,t)$ is the heat at location $x$
at time t, and ∆f is the Laplace-Beltrami operator on a
function f. The heat diffusion kernel Kt(x, y) is a special
solution to the heat equation with a special initial condition-a
unit heat source at position y when there is no heat in other
positions. Based on this, the heat kernel Kt(x, y) describes
the heat distribution at time t diffusing from the initial unit
heat source at position y, and thus describes the
connectivity (which is considered as a kind of similarity) between x
and y. However, it is very difficult to represent the World
Wide Web as a regular geometry with a known dimension;
even if the underlying manifold is known, it is very difficult to find the
heat kernel Kt(x, y), which involves solving the heat
equation with the delta function as the initial condition. This
motivates us to investigate the heat flow on a graph. The
graph is considered as an approximation to the underlying
manifold, and so the heat flow on the graph is considered as
an approximation to the heat flow on the manifold.
3.3 On an Undirected Graph
On an undirected graph G, the edge (vi, vj) is considered
as a pipe that connects nodes vi and vj. The value fi(t)
describes the heat at node vi at time t, beginning from an
initial distribution of heat given by fi(0) at time zero. f(t)
(f(0)) denotes the vector consisting of fi(t) (fi(0)).
We construct our model as follows. Suppose, at time t,
each node i receives M(i, j, t, ∆t) amount of heat from its
neighbor j during a period of ∆t. The heat M(i, j, t, ∆t)
should be proportional to the time period ∆t and the heat
difference fj(t) − fi(t). Moreover, the heat flows from node
j to node i through the pipe that connects nodes i and j.
Based on this consideration, we assume that M(i, j, t, ∆t) =
γ(fj(t) − fi(t))∆t. As a result, the heat difference at node
i between time t + ∆t and time t will be equal to the sum
of the heat that it receives from all its neighbors. This is
formulated as
$$f_i(t + \Delta t) - f_i(t) = \sum_{j:(j,i)\in E} \gamma\,(f_j(t) - f_i(t))\,\Delta t, \qquad (2)$$
where E is the set of edges. To find a closed form solution
to Eq. (2), we express it in a matrix form: $(f(t+\Delta t) - f(t))/\Delta t = \gamma Hf(t)$,
where $d(v)$ denotes the degree of the node $v$. In the limit $\Delta t \to 0$,
it becomes $\frac{d}{dt}f(t) = \gamma Hf(t)$.
Solving it, we obtain $f(t) = e^{\gamma t H}f(0)$; especially we have
$$f(1) = e^{\gamma H}f(0), \qquad H_{ij} = \begin{cases} -d(v_j), & j = i,\\ 1, & (v_j, v_i) \in E,\\ 0, & \text{otherwise}, \end{cases} \qquad (3)$$
where $e^{\gamma H}$ is defined as $e^{\gamma H} = I + \gamma H + \frac{\gamma^2}{2!}H^2 + \frac{\gamma^3}{3!}H^3 + \cdots$.
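For illustration only, the closed form $f(1) = e^{\gamma H}f(0)$ of Eq. (3) can be computed directly with a matrix exponential on a small undirected graph; the sketch below assumes SciPy is available and uses an arbitrary toy edge list.

```python
import numpy as np
from scipy.linalg import expm

def undirected_heat_kernel(n, edges, gamma=1.0):
    """Build H of Eq. (3) for an undirected graph and return e^{gamma H}."""
    H = np.zeros((n, n))
    for i, j in edges:
        H[i, j] = 1.0
        H[j, i] = 1.0
    for v in range(n):
        H[v, v] = -H[v].sum()      # -d(v) on the diagonal
    return expm(gamma * H)

# Assumed toy graph.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
K = undirected_heat_kernel(4, edges)
f0 = np.array([1.0, 0.0, 0.0, 0.0])   # unit heat on node 0
print(K @ f0)                          # heat distribution f(1)
```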
3.4 On a Directed Graph
The above heat diffusion model must be modified to fit the
situation where the links between Web pages are directed.
On one Web page, when the page-maker creates a link (a, b)
to another page b, he actually forces the energy flow, for
example, people's click-through activities, to that page, and
so there is added energy imposed on the link. As a result,
heat flows in a one-way manner, only from a to b, not from
b to a. Based on such consideration, we modified the heat
diffusion model on an undirected graph as follows.
On a directed graph G, the pipe (vi, vj) is forced by added
energy such that heat flows only from vi to vj. Suppose, at
time t, each node vi receives RH = RH(i, j, t, ∆t) amount of
heat from vj during a period of ∆t. We have three
assumptions: (1) RH should be proportional to the time period ∆t;
(2) RH should be proportional to the heat at node vj;
and (3) RH is zero if there is no link from vj to vi. As a
result, vi will receive $\sum_{j:(v_j,v_i)\in E} \sigma_j f_j(t)\Delta t$ amount of heat
from all its neighbors that point to it.
On the other hand, node vi diffuses DH(i, t, ∆t) amount
of heat to its subsequent nodes. We assume that: (1) The
heat DH(i, t, ∆t) should be proportional to the time period
∆t. (2) The heat DH(i, t, ∆t) should be proportional to the
heat at node vi. (3) Each node has the same ability of
diffusing heat. This fits the intuition that a Web surfer only
has one choice to find the next page that he wants to browse.
(4) The heat DH(i, t, ∆t) should be uniformly distributed
to its subsequent nodes. The real situation is more complex
than what we assume, but we have to make this simple
assumption in order to make our model concise. As a result,
node vi will diffuse γfi(t)∆t/di amount of heat to any of its
subsequent nodes, and each of its subsequent nodes should
receive γfi(t)∆t/di amount of heat. Therefore σj = γ/dj.
To sum up, the heat difference at node vi between time
t+∆t and time t will be equal to the sum of the heat that it
receives, deducted by what it diffuses. This is formulated as
fi(t + ∆t) − fi(t) = −γfi(t)∆t + j:(vj ,vi)∈E γ/djfj(t)∆t.
Similarly, we obtain
$$f(1) = e^{\gamma H}f(0), \qquad H_{ij} = \begin{cases} -1, & j = i,\\ 1/d_j, & (v_j, v_i) \in E,\\ 0, & \text{otherwise}. \end{cases} \qquad (4)$$
3.5 On a Random Directed Graph
For real world applications, we have to consider random
edges. This can be seen from two viewpoints. The first one
is that in Eq. (1), the Web graph is actually modelled as
a random graph: there is an edge from node $v_i$ to node $v_j$
with a probability of $(1-\alpha)g_j$ (see the term $(1-\alpha)g\mathbf{1}^T$),
and that the Web graph is predicted by a random graph
and that the Web graph is predicted by a random graph
[15, 16]. The second one is that the Web structure is a
random graph in essence if we consider the content similarity
between two pages, though this is not done in this paper.
For these reasons, the model would become more flexible if
we extend it to random graphs. The definition of a random
graph is given below.
Definition 1. A random graph RG = (V, P = (pij)) is
defined as a graph with a vertex set V in which the edges are
chosen independently, and for 1 ≤ i, j ≤ |V | the probability
of (vi, vj) being an edge is exactly pij.
The original definition of random graphs in [4], is changed
slightly to consider the situation of directed graphs. Note
that every static graph can be considered as a special
random graph in the sense that pij can only be 0 or 1.
Consider a random graph RG = (V, P), where $p_{ij}$ gives
the probability that the edge $(v_i, v_j)$ exists. In such a random
graph, the expected heat difference at node i between time
t + ∆t and time t will be equal to the sum of the expected
heat that it receives from all its antecedents, deducted by
the expected heat that it diffuses.
Since the probability of the link (vj, vi) is pji, the
expected heat flow from node j to node i should be multiplied
by $p_{ji}$, and so we have $f_i(t+\Delta t) - f_i(t) = -\gamma f_i(t)\Delta t + \sum_{j:(v_j,v_i)\in E} \gamma p_{ji}f_j(t)\Delta t / RD^+(v_j)$,
where $RD^+(v_i)$ is the expected out-degree of node $v_i$; it is defined as $\sum_k p_{ik}$.
Similarly we have
$$f(1) = e^{\gamma R}f(0), \qquad R_{ij} = \begin{cases} -1, & j = i;\\ p_{ji}/RD^+(v_j), & j \neq i. \end{cases} \qquad (5)$$
When the graph is large, a direct computation of $e^{\gamma R}$ is
time-consuming, and we adopt its discrete approximation:
$$f(1) = \left(I + \frac{\gamma}{N}R\right)^N f(0). \qquad (6)$$
The matrix $(I + \frac{\gamma}{N}R)^N$ in Eq. (6) and the matrix $e^{\gamma R}$ in Eq. (5)
are called the Discrete Diffusion Kernel and the Continuous
Diffusion Kernel, respectively.
Models and their solutions, DiffusionRank can be
established on undirected graphs, directed graphs, and random
graphs. In the next section, we mainly focus on
DiffusionRank in the random graph setting.
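A minimal sketch of the Discrete Diffusion Kernel of Eq. (6) is given below (an illustration, not the paper's code). It assumes the random graph is summarized by a column-stochastic matrix P with $P_{ij} = p_{ji}/RD^+(v_j)$, so that $R = -I + P$, and it applies $(I + \frac{\gamma}{N}R)$ to f(0) N times instead of materializing the kernel matrix.

```python
import numpy as np

def discrete_diffusion(P, f0, gamma=1.0, N=100):
    """Apply the Discrete Diffusion Kernel (I + gamma/N * R)^N, R = -I + P, to f0."""
    f = f0.astype(float).copy()
    for _ in range(N):
        f = f + (gamma / N) * (P.dot(f) - f)   # R f = (-I + P) f
    return f

# Assumed 4-node column-stochastic matrix P (random-graph expectation).
P = np.array([[0.1, 0.3, 0.5, 0.25],
              [0.3, 0.1, 0.2, 0.25],
              [0.3, 0.3, 0.1, 0.25],
              [0.3, 0.3, 0.2, 0.25]])
f0 = np.array([1.0, 0.0, 0.0, 0.0])
print(discrete_diffusion(P, f0))
```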
4. DIFFUSIONRANK
For a random graph, the matrix $(I + \frac{\gamma}{N}R)^N$ or $e^{\gamma R}$ can
measure the similarity relationship between nodes. Let $f_i(0) = 1$
and $f_j(0) = 0$ for $j \neq i$; then the vector $f(0)$ represents a unit
heat at node $v_i$ while all other nodes have zero heat. For such
an $f(0)$ in a random graph, we can find the heat distribution
at time 1 by using Eq. (5) or Eq. (6). The heat
distribution is exactly the $i$-th row of the matrix $(I + \frac{\gamma}{N}R)^N$ or
$e^{\gamma R}$. So the $i$th-row $j$th-column element $h_{ij}$ in the matrix
$(I + \frac{\gamma}{N}R)^N$ or $e^{\gamma R}$ means the amount of heat that $v_i$ can
receive from $v_j$ from time 0 to 1. Thus the value $h_{ij}$ can be
used to measure the similarity from $v_j$ to $v_i$. For a static
graph, similarly the matrix $(I + \frac{\gamma}{N}H)^N$ or $e^{\gamma H}$ can measure
the similarity relationship between nodes.
The intuition behind is that the amount h(i, j) of heat
that a page vi receives from a unit heat in a page vj in a
unit time embodies the extent of the link connections from
page vj to page vi. Roughly speaking, when there are more
uncrossed paths from vj to vi, vi will receive more heat from
vj; when the path length from vj to vi is shorter, vi will
receive more heat from vj; and when the pipe connecting
vj and vi is wide, the heat will flow quickly. The final heat
that vi receives will depend on various paths from vj to vi,
their length, and the width of the pipes.
Algorithm 1 DiffusionRank Function
Input: The transition matrix A; the inverse transition
matrix U; the decay factor αI for the inverse PageRank; the
decay factor αB for PageRank; number of iterations MI for
the inverse PageRank; the number of trusted pages L; the
thermal conductivity coefficient γ.
Output: DiffusionRank score vector h.
1: s = 1
2: for i = 1 TO MI do
3: s = αI · U · s + (1 − αI) · (1/n) · 1
4: end for
5: Sort s in a decreasing order: π = Rank({1, . . . , n}, s)
6: d = 0, Count = 0, i = 0
7: while Count ≤ L do
8: if π(i) is evaluated as a trusted page then
9: d(π(i)) = 1, Count + +
10: end if
11: i + +
12: end while
13: d = d/|d|
14: h = d
15: Find the iteration number MB according to γ
16: for i = 1 TO MB do
17: h = (1 − γ/MB) · h + (γ/MB) · (αB · A · h + (1 − αB) · (1/n) · 1)
18: end for
19: RETURN h
4.1 Algorithm
For the ranking task, we adopt the heat kernel on a
random graph. Formally the DiffusionRank is described in
Algorithm 1, in which the element Uij in the inverse transition
matrix U is defined to be 1/Ij if there is a link from i to j,
and 0 otherwise. This trusted pages selection procedure by
inverse PageRank is completely borrowed from TrustRank
[7], except that the size of the trusted set is fixed.
Although the inverse PageRank is not perfect in its
ability of determining the maximum coverage, it is appealing
because of its polynomial execution time and its
reasonable intuition: we actually invert the original links when
we try to build the seed set from those pages that point
to many pages that in turn point to many pages and so
on. In the algorithm, the underlying random graph is set as
$P = \alpha_B \cdot A + (1-\alpha_B) \cdot \frac{1}{n} \cdot \mathbf{1}_{n\times n}$, which is induced by the
Web graph. As a result, $R = -I + P$.
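For illustration, a compact Python transcription of Algorithm 1 is sketched below. It is not the authors' implementation: dense NumPy matrices are assumed, the manual evaluation step is represented by a caller-supplied `is_trusted` predicate, and the values $\alpha_I = \alpha_B = 0.85$, $\gamma = 1$, $M_B = 100$ follow the settings used later in the experiments.

```python
import numpy as np

def diffusion_rank(A, U, is_trusted, L, alpha_i=0.85, alpha_b=0.85,
                   gamma=1.0, M_I=100, M_B=100):
    """Sketch of Algorithm 1: inverse-PageRank seed selection plus heat diffusion."""
    n = A.shape[0]
    # Steps 1-4: inverse PageRank on U to score candidate seed pages.
    s = np.ones(n)
    for _ in range(M_I):
        s = alpha_i * U.dot(s) + (1 - alpha_i) / n
    # Steps 5-12: walk down the sorted scores, keeping L pages the evaluator trusts.
    d = np.zeros(n)
    count = 0
    for i in np.argsort(-s):
        if count >= L:
            break
        if is_trusted(i):
            d[i] = 1.0
            count += 1
    # Steps 13-14: normalize the trusted distribution (the listing writes d = d/|d|).
    d /= d.sum()
    h = d.copy()
    # Steps 16-18: diffuse heat with the random-graph transition matrix A.
    for _ in range(M_B):
        h = (1 - gamma / M_B) * h + (gamma / M_B) * (
            alpha_b * A.dot(h) + (1 - alpha_b) / n)
    return h
```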
In fact, the more general setting for DiffusionRank is
$P = \alpha_B \cdot A + (1-\alpha_B) \cdot \frac{1}{n} \cdot g \cdot \mathbf{1}^T$. By such a setting, DiffusionRank
is a generalization of TrustRank when γ tends to infinity
and when g is set in the same way as TrustRank. However,
the second part of TrustRank is not adopted by us. In our
model, g should be the true teleportation determined by
the user's browsing habits, popularity distribution over all the
Web pages, and so on; P should be the true model of the
random nature of the World Wide Web. Setting g according
to the trusted pages will not be consistent with the basic idea
of Heat Diffusion on a random graph. We simply set $g = \frac{1}{n}\mathbf{1}$
only because we cannot find it without any prior knowledge.
Remark. In a social network interpretation,
DiffusionRank first recognizes a group of trusted people, who may
not be highly ranked, but they know many other people.
The initially trusted people are endowed with the power to
decide who can be further trusted, but cannot decide the
final voting results, and so they are not dictators.
4.2 Advantages
Next we show the four advantages for DiffusionRank.
4.2.1 Two closed forms
First, its solutions have two forms, both of which are
closed form. One takes the discrete form, and has the
advantage of fast computing while the other takes the continuous
form, and has the advantage of being easily analyzed in
theoretical aspects. The theoretical advantage has been shown
in the proof of theorem in the next section.
(a) Group to Group Relations (b) An undirected graph
Figure 1: Two graphs
4.2.2 Group-group relations
Second, it can be naturally employed to detect group-to-group
relations. For example, let G2 and G1 denote two
groups, containing pages $(j_1, j_2, \ldots, j_s)$ and $(i_1, i_2, \ldots, i_t)$,
respectively. Then $\sum_{u,v} h_{i_u,j_v}$ is the total amount of heat
that G1 receives from G2, where $h_{i_u,j_v}$ is the $i_u$-th row
$j_v$-th column element of the heat kernel. More specifically,
we need to first set f(0) for such an application as follows.
In $f(0) = (f_1(0), f_2(0), \ldots, f_n(0))^T$, if $i \in \{j_1, j_2, \ldots, j_s\}$,
then $f_i(0) = 1$, and 0 otherwise. Then we employ Eq. (5)
to calculate $f(1) = (f_1(1), f_2(1), \ldots, f_n(1))^T$; finally we sum
those $f_j(1)$ where $j \in \{i_1, i_2, \ldots, i_t\}$. Fig. 1 (a) shows the
results generated by the DiffusionRank. We consider five
groups-five departments in our Engineering Faculty: CSE,
MAE, EE, IE, and SE. γ is set to be 1, the numbers in
Fig. 1 (a) are the amount of heat that they diffuse to each
other. These results are normalized by the total number of
each group, and the edges are ignored if the values are less
than 0.000001. The group-to-group relations are therefore
detected; for example, we can see that the strongest
overall tie is from EE to IE. While this is a natural application
for DiffusionRank because of the easy interpretation as the
amount of heat flowing from one group to another, it is difficult
to apply other ranking techniques to such an application
because they lack such a physical meaning.
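A short illustrative helper for the group-to-group computation just described: unit heat is placed on the source group, a precomputed heat kernel K (e.g., from the expm-based sketch above, or the discrete kernel) is applied, and the resulting heat is summed over the target group.

```python
import numpy as np

def group_to_group_heat(K, source_group, target_group):
    """Total heat that target_group receives from unit heat placed on source_group."""
    n = K.shape[0]
    f0 = np.zeros(n)
    f0[list(source_group)] = 1.0      # f_i(0) = 1 for i in the source group
    f1 = K.dot(f0)                     # heat distribution at time 1
    return f1[list(target_group)].sum()
```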
4.2.3 Graph cut
Third, it can be used to partition the Web graph into
several parts. A quick example is shown below. The graph
in Fig. 1 (b) is an undirected graph, and so we employ the
Eq. (3). If we know that node 1 belongs to one
community and that node 12 belongs to another community, then
we can put one unit positive heat source on node 1 and
one unit negative heat source on node 12. After time 1, if
we set γ = 0.5, the heat distribution is [0.25, 0.16, 0.17,
0.16, 0.15, 0.09, 0.01, -0.04, -0.18, -0.21, -0.21, -0.34], and if
we set γ = 1, it will be [0.17, 0.16, 0.17, 0.16, 0.16, 0.12,
0.02, -0.07, -0.18, -0.22, -0.24, -0.24]. In both settings, we
can easily divide the graph into two parts: {1, 2, 3, 4, 5, 6, 7}
with positive temperatures and {8, 9, 10, 11, 12} with
negative temperatures. For directed graphs and random graphs,
similarly we can cut them by employing the corresponding heat
solution.
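The two-seed partition above can be reproduced in a few lines; the sketch below (an illustration, not the paper's code) builds the undirected H of Eq. (3), diffuses +1 heat from one seed and -1 from the other, and splits the nodes by the sign of their temperature.

```python
import numpy as np
from scipy.linalg import expm

def heat_cut(n, edges, seed_pos, seed_neg, gamma=1.0):
    """Partition an undirected graph by the sign of the diffused temperature."""
    H = np.zeros((n, n))
    for i, j in edges:
        H[i, j] = H[j, i] = 1.0
    np.fill_diagonal(H, -H.sum(axis=1))        # -d(v) on the diagonal
    f0 = np.zeros(n)
    f0[seed_pos], f0[seed_neg] = 1.0, -1.0
    f1 = expm(gamma * H) @ f0
    positive = [v for v in range(n) if f1[v] >= 0]
    negative = [v for v in range(n) if f1[v] < 0]
    return positive, negative
```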
4.2.4 Anti-manipulation
Fourth, it can be used to combat manipulation. Let G2
contain trusted Web pages $(j_1, j_2, \ldots, j_s)$; then for each page
$i$, $\sum_v h_{i,j_v}$ is the heat that page $i$ receives from G2, and it can
be computed by the discrete approximation of Eq. (4) in
the case of a static graph or Eq. (6) in the case of a random
graph, in which f(0) is set to be a special initial heat
distribution so that the trusted Web pages have unit heat while
all the others have zero heat. In doing so, a manipulated Web
page will get a lower rank unless it has strong in-links from
the trusted Web pages, directly or indirectly. The situation
is quite different for PageRank, because PageRank is
input-independent, as we have shown in Section 3.1. Based on the
fact that the connection from a trusted page to a bad page
should be weak (fewer uncrossed paths, longer distances, and
narrower pipes), we can say DiffusionRank can resist Web spam if
we can select trusted pages. It is fortunate that the trusted
pages selection method in [7]-the first part of TrustRank can
help us to fulfill this task. For such an application of
DiffusionRank, the computation complexity for Discrete
Diffusion Kernel is the same as that for PageRank in cases of
both a static graph and a random graph. This can be seen
in Eq. (6), by which we need N iterations and for each
iteration we need a multiplication operation between a matrix
and a vector, while in Eq. (1) we also need a multiplication
operation between a matrix and a vector for each iteration.
4.3 The Physical Meaning of γ
γ plays an important role in the anti-manipulation effect
of DiffusionRank. γ is the thermal conductivity, i.e., the heat
diffusion coefficient. If it has a high value, heat will
diffuse very quickly. Conversely, if it is small, heat will diffuse
slowly. In the extreme case, if it is infinitely large, then heat
will diffuse from one node to other nodes immediately, and
this is exactly the case corresponding to PageRank. Next,
we will interpret it mathematically.
Theorem 1. When γ tends to infinity and f(0) is not the
zero vector, eγR
f(0) is proportional to the stable distribution
produced by PageRank.
Let $g = \frac{1}{n}\mathbf{1}$. By the Perron Theorem [11], we have shown
that 1 is the largest eigenvalue of $P = [(1-\alpha)g\mathbf{1}^T + \alpha A]$,
and that there is no other eigenvalue whose absolute value is equal
to 1. Let $x$ be the stable distribution, so that $Px = x$; $x$ is
the eigenvector corresponding to the eigenvalue 1. Assuming that
the $n-1$ other eigenvalues of $P$ are $|\lambda_2| < 1, \ldots, |\lambda_n| < 1$,
we can find an invertible matrix $S = (x \; S_1)$ such that
$$S^{-1}PS = \begin{pmatrix} 1 & * & * & * \\ 0 & \lambda_2 & * & * \\ 0 & 0 & \ddots & * \\ 0 & 0 & 0 & \lambda_n \end{pmatrix}. \qquad (7)$$
Since
$$e^{\gamma R} = e^{\gamma(-I+P)} = S^{-1}\begin{pmatrix} 1 & * & * & * \\ 0 & e^{\gamma(\lambda_2-1)} & * & * \\ 0 & 0 & \ddots & * \\ 0 & 0 & 0 & e^{\gamma(\lambda_n-1)} \end{pmatrix} S, \qquad (8)$$
all eigenvalues of the matrix $e^{\gamma R}$ are $1, e^{\gamma(\lambda_2-1)}, \ldots, e^{\gamma(\lambda_n-1)}$.
When $\gamma \to \infty$, they become $1, 0, \ldots, 0$, which means that 1 is
the only nonzero eigenvalue of $e^{\gamma R}$ when $\gamma \to \infty$. We can see
that when $\gamma \to \infty$, $e^{\gamma R}e^{\gamma R}f(0) = e^{\gamma R}f(0)$, and so $e^{\gamma R}f(0)$
is an eigenvector of $e^{\gamma R}$ when $\gamma \to \infty$. On the other hand,
$e^{\gamma R}x = (I + \gamma R + \frac{\gamma^2}{2!}R^2 + \frac{\gamma^3}{3!}R^3 + \cdots)x = Ix + \gamma Rx + \frac{\gamma^2}{2!}R^2x + \frac{\gamma^3}{3!}R^3x + \cdots = x$,
since $Rx = (-I+P)x = -x + x = 0$,
and hence $x$ is an eigenvector of $e^{\gamma R}$ for any $\gamma$. Therefore
both $x$ and $e^{\gamma R}f(0)$ are eigenvectors corresponding to the
unique eigenvalue 1 of $e^{\gamma R}$ when $\gamma \to \infty$, and consequently
$x = c\,e^{\gamma R}f(0)$.
By this theorem, we see that DiffusionRank is a
generalization of PageRank. When γ = 0, the ranking value is
most robust to manipulation since no heat is diffused and
the system is unchangeable, but the Web structure is
completely ignored since $e^{\gamma R}f(0) = e^{0 \cdot R}f(0) = If(0) = f(0)$;
when γ = ∞, DiffusionRank becomes PageRank and can be
manipulated easily. We expect an appropriate setting of
γ that can balance both. For this, we have no theoretical
result, but in practice we find that γ = 1 works well in
Section 5. Next we discuss how to determine the number of
iterations if we employ the discrete heat kernel.
4.4 The Number of Iterations
While we enjoy the advantage of the concise form of the
exponential heat kernel, it is better for us to calculate
DiffusionRank by employing Eq. (6) in an iterative way. Then
the problem of determining N, the number of iterations,
arises:
For a given threshold $\epsilon$, find $N$ such that $\|((I + \frac{\gamma}{N}R)^N - e^{\gamma R})f(0)\| < \epsilon$
for any $f(0)$ whose sum is one.
Since it is difficult to solve this problem, we propose a
heuristic motivated by the following observations. When
$R = -I + P$, by Eq. (7), we have
$$\left(I + \frac{\gamma}{N}R\right)^N = \left(I + \frac{\gamma}{N}(-I+P)\right)^N = S^{-1}\begin{pmatrix} 1 & * & * & * \\ 0 & (1 + \frac{\gamma(\lambda_2-1)}{N})^N & * & * \\ 0 & 0 & \ddots & * \\ 0 & 0 & 0 & (1 + \frac{\gamma(\lambda_n-1)}{N})^N \end{pmatrix} S. \qquad (9)$$
Comparing Eq. (8) and Eq. (9), we observe that the
eigenvalues of $(I + \frac{\gamma}{N}R)^N - e^{\gamma R}$ are of the form $(1 + \frac{\gamma(\lambda-1)}{N})^N - e^{\gamma(\lambda-1)}$.
We propose a heuristic method to determine $N$ so that the
difference between the eigenvalues is less than a threshold
for only positive $\lambda$s.
We also observe that if $\gamma = 1$ and $\lambda < 1$, then
$|(1 + \frac{\gamma(\lambda-1)}{N})^N - e^{\gamma(\lambda-1)}| < 0.005$ if $N \geq 100$, and
$|(1 + \frac{\gamma(\lambda-1)}{N})^N - e^{\gamma(\lambda-1)}| < 0.01$ if $N \geq 30$. So we can set
$N = 30$, $N = 100$, or other values
according to different accuracy requirements. In this paper,
we use the relatively accurate setting $N = 100$ to make the
real eigenvalue differences in $(I + \frac{\gamma}{N}R)^N - e^{\gamma R}$ less than 0.005.
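The quoted bounds can be checked numerically with a tiny script (illustrative only): it evaluates $|(1 + \gamma(\lambda-1)/N)^N - e^{\gamma(\lambda-1)}|$ over a grid of eigenvalues $\lambda \in [0, 1)$ for γ = 1.

```python
import numpy as np

def max_eig_gap(gamma, N, lambdas):
    """Largest |(1 + gamma*(lam-1)/N)^N - exp(gamma*(lam-1))| over the given eigenvalues."""
    lam = np.asarray(lambdas)
    return np.max(np.abs((1 + gamma * (lam - 1) / N) ** N - np.exp(gamma * (lam - 1))))

lams = np.linspace(0.0, 0.999, 1000)       # eigenvalues below 1
print(max_eig_gap(1.0, 30, lams))          # expected to be below 0.01
print(max_eig_gap(1.0, 100, lams))         # expected to be below 0.005
```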
5. EXPERIMENTS
In this section, we show the experimental data, the
methodology, the setting, and the results.
5.1 Data Preparation
Our input data consist of a toy graph, a middle-size
real-world graph, and a large-size real-world graph. The toy
graph is shown in Fig. 2 (a). The graph below it shows node
1 is being manipulated by adding new nodes A, B, C, . . .
such that they all point to node 1, and node 1 points to
them all. The data of the two real Web graphs were obtained
from the domain of our institute in October 2004. The
total numbers of pages found are 18,542 for the middle-size
graph and 607,170 for the large-size graph, respectively. The
middle-size graph is a subgraph of the large-size graph, and
they were obtained by the same crawler: one is recorded
by the crawler in its earlier time, and the other is obtained
when the crawler stopped.
5.2 Methodology
The algorithms we run include PageRank, TrustRank and
DiffusionRank. All the rank values are multiplied by the
number of nodes so that the sum of the rank values is equal
to the number of nodes. By this normalization, we can
compare the results on graphs with different sizes since the
average rank value is one for any graph after such normalization.
We will need value difference and pairwise order difference as
comparison measures. Their definitions are listed as follows.
Value Difference. The value difference between $A = \{A_i\}_{i=1}^{n}$
and $B = \{B_i\}_{i=1}^{n}$ is measured as $\sum_{i=1}^{n} |A_i - B_i|$.
Pairwise Order Difference. The order difference between
A and B is measured as the number of significant order
differences between A and B. The pair (A[i], A[j]) and
(B[i], B[j]) is considered as a significant order difference if
one of the following cases happens: both A[i] > [<] A[j] + 0.1
and B[i] ≤ [≥] A[j]; or both A[i] ≤ [≥] A[j] and B[i] > [<] A[j] + 0.1.
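An illustrative implementation of the two comparison measures; the pairwise test follows one natural reading of the definition above, comparing positions within each ranking and using the 0.1 margin as the significance threshold.

```python
def value_difference(A, B):
    """Sum of absolute differences between two normalized rank-value lists."""
    return sum(abs(a - b) for a, b in zip(A, B))

def pairwise_order_difference(A, B, margin=0.1):
    """Count pairs whose order differs significantly between rankings A and B."""
    n, count = len(A), 0
    for i in range(n):
        for j in range(i + 1, n):
            if (A[i] > A[j] + margin and B[i] <= B[j]) or \
               (A[i] <= A[j] and B[i] > B[j] + margin):
                count += 1
    return count
```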
[Figure 2 appears here: panel (a) depicts the toy graph with nodes 1-6 and the added nodes A, B, C, . . .; panel (b) plots the value difference against γ, with one curve for each trust set {1}, . . . , {6}.]
Figure 2: (a) The toy graph consisting of six nodes,
and node 1 is being manipulated by adding new
nodes A, B, C, . . . (b) The approximation tendency to
PageRank by DiffusionRank
5.3 Experimental Set-up
The experiments on the middle-size and large-size graphs
are conducted on a workstation whose
hardware model is Nix Dual Intel Xeon 2.2GHz with 1GB RAM
and a Linux Kernel 2.4.18-27smp (RedHat7.3). In
calculating DiffusionRank, we employ Eq. (6) and the discrete
approximation of Eq. (4) for such graphs. The related tasks
are implemented using the C language. For the toy graph,
we employ the continuous diffusion kernels in Eq. (4) and
Eq. (5), and implement the related tasks using Matlab.
For nodes that have zero out-degree (dangling nodes), we
employ the method in the modified PageRank algorithm [8],
in which dangling nodes are considered to have random
links uniformly to each node. We set α = αI = αB = 0.85 in
all algorithms. We also set g to be the uniform distribution
in both PageRank and DiffusionRank. For DiffusionRank,
we set γ = 1. According to the discussions in Section 4.3 and
Section 4.4, we set the iteration number to be MB = 100 in
DiffusionRank, and for accuracy consideration, the iteration
number in all the algorithms is set to be 100.
5.4 Approximation of PageRank
We show that when γ tends to infinity, the value
differences between DiffusionRank and PageRank tend to zero.
Fig. 2 (b) shows the approximation property of
DiffusionRank, as proved in Theorem 1, on the toy graph. The
horizontal axis of Fig. 2 (b) marks the γ value, and vertical axis
corresponds to the value difference between DiffusionRank
and PageRank. All the possible trusted sets with L = 1
are considered. For L > 1, the results should be the linear
combination of some of these curves because of the
linearity of the solutions to heat equations. On other graphs, the
situations are similar.
5.5 Results of Anti-manipulation
In this section, we show how the rank values change as the
intensity of manipulation increases. We measure the
intensity of manipulation by the number of newly added points
that point to the manipulated point. The horizontal axes
of Fig. 3 stand for the numbers of newly added points, and
vertical axes show the corresponding rank values of the
manipulated nodes. To be clear, we consider all six situations.
Every node in Fig. 2 (a) is manipulated respectively, and its
[Figure 3 appears here: six panels, one for each manipulated node 1-6; each panel plots the rank of the manipulated node against the number of newly added nodes (0-100) for DiffusionRank, PageRank, and TrustRank with the corresponding trusted set.]
Figure 3: The rank values of the manipulated nodes
on the toy graph
[Figure 4 appears here: the rank of the manipulated node against the number of newly added points (2,000-10,000); panel (a) shows the middle-size graph with curves for PageRank, DiffusionRank-uniform, DiffusionRank0-3, and TrustRank0-3; panel (b) shows the large-size graph with curves for PageRank, DiffusionRank, TrustRank, and DiffusionRank-uniform.]
Figure 4: (a) The rank values of the manipulated
nodes on the middle-size graph; (b) The rank values
of the manipulated nodes on the large-size graph
corresponding values for PageRank, TrustRank (TR),
DiffusionRank (DR) are shown in one of the six sub-figures in
Fig. 3. The vertical-axis labels show which node is being
manipulated. In each sub-figure, the trusted sets are
computed as follows. The inverse PageRank yields the scores
[1.26, 0.85, 1.31, 1.36, 0.51, 0.71]. Let L = 1. If the
manipulated node is not 4, then the trusted set is {4}, and
otherwise {3}. We observe that in all the cases, rank values
of the manipulated node for DiffusionRank grow slowest as
the number of the newly added nodes increases. On the
middle-size graph and the large-size graph, this conclusion
is also true; see Fig. 4. Note that, in Fig. 4 (a), we choose
four trusted sets (L = 1), on which we test DiffusionRank
and TrustRank; the results are denoted by DiffusionRanki
and TrustRanki (i = 0, 1, 2, 3 denotes the four trusted sets);
in Fig. 4 (b), we choose one trusted set (L = 1). Moreover,
in both Fig. 4 (a) and Fig. 4 (b), we show the results for
DiffusionRank when we have no trusted set, and we trust
all the pages before some of them are manipulated.
We also test the order difference between the ranking
order A before the page is manipulated and the ranking order
PA after the page is manipulated. Because after
manipulation, the number of pages changes, we only compare the
common part of A and PA. This experiment is used to test
the stability of all these algorithms. The smaller the order
difference, the more stable the algorithm, in the sense that only a
smaller part of the order relations is affected by the
manipulation. Figure 5 (a) shows that the order difference values
change when we add new nodes that point to the
manipulated node. We give several γ settings. We find that when
γ = 1, the least order difference is achieved by
DiffusionRank. It is interesting to point out that as γ increases, the
order difference will increase first; after reaching a maximum
value, it will decrease, and finally it tends to the PageRank
results. We show this tendency in Fig. 5 (b), in which we
choose three different settings: the numbers of manipulated
nodes are 2,000, 5,000, and 10,000, respectively. From these
figures, we can see that when γ < 2, the values are less than
those for PageRank, and that when γ > 20, the difference
between PageRank and DiffusionRank is very small.
After these investigations, we find that in all the graphs we
tested, DiffusionRank (when γ = 1) is most robust to
manipulation both in value difference and order difference. The
trust set selection algorithm proposed in [7] is effective for
both TrustRank and DiffusionRank.
[Figure 5 appears here: panel (a) plots the pairwise order difference against the number of newly added points for PageRank, TrustRank, and DiffusionRank with γ = 1, 2, 3, 4, 5, 15; panel (b) plots the pairwise order difference against γ when 2,000, 5,000, and 10,000 nodes are added, with PageRank as a reference.]
Figure 5: (a) Pairwise order difference on the
middle-size graph, the least it is, the more stable
the algorithm; (b) The tendency of varying γ
6. CONCLUSIONS
We conclude that DiffusionRank is a generalization of
PageRank, which is interesting in that the heat diffusion
coefficient γ can balance the extent that we want to model the
original Web graph and the extent that we want to reduce
the effect of link manipulations. The experimental results
show that we can actually achieve such a balance by
setting γ = 1, although the best setting including varying γi
is still under further investigation. This anti-manipulation
feature enables DiffusionRank to be a candidate as a
penicillin for Web spamming. Moreover, DiffusionRank can be
employed to find group-group relations and to partition Web
graph into small communities. All these advantages can be
achieved in the same computational complexity as
PageRank. For the special application of anti-manipulation,
DiffusionRank performs best both in reduction effects and in
its stability among all the three algorithms.
7. ACKNOWLEDGMENTS
We thank Patrick Lau, Zhenjiang Lin and Zenglin Xu
for their help. This work is fully supported by two grants
from the Research Grants Council of the Hong Kong Special
Administrative Region, China (Project No. CUHK4205/04E
and Project No. CUHK4235/04E).
8. REFERENCES
[1] E. Agichtein, E. Brill, and S. T. Dumais. Improving web search
ranking by incorporating user behavior information. In E. N.
Efthimiadis, S. T. Dumais, D. Hawking, and K. Järvelin,
editors, Proceedings of the 29th Annual International ACM
SIGIR Conference on Research and Development in
Information Retrieval (SIGIR), pages 19-26, 2006.
[2] R. A. Baeza-Yates, P. Boldi, and C. Castillo. Generalizing
pagerank: damping functions for link-based ranking
algorithms. In E. N. Efthimiadis, S. T. Dumais, D. Hawking,
and K. Järvelin, editors, Proceedings of the 29th Annual
International ACM SIGIR Conference on Research and
Development in Information Retrieval (SIGIR), pages
308-315, 2006.
[3] M. Belkin and P. Niyogi. Laplacian eigenmaps for
dimensionality reduction and data representation. Neural
Computation, 15(6):1373-1396, Jun 2003.
[4] B. Bollobás. Random Graphs. Academic Press Inc. (London),
1985.
[5] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds,
N. Hamilton, and G. Hullender. Learning to rank using
gradient descent. In Proceedings of the 22nd international
conference on Machine learning (ICML), pages 89-96, 2005.
[6] N. Eiron, K. S. McCurley, and J. A. Tomlin. Ranking the web
frontier. In Proceeding of the 13th World Wide Web
Conference (WWW), pages 309-318, 2004.
[7] Z. Gyöngyi, H. Garcia-Molina, and J. Pedersen. Combating
web spam with trustrank. In M. A. Nascimento, M. T. Özsu,
D. Kossmann, R. J. Miller, J. A. Blakeley, and K. B. Schiefer,
editors, Proceedings of the Thirtieth International Conference
on Very Large Data Bases (VLDB), pages 576-587, 2004.
[8] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and G. H.
Golub. Exploiting the block structure of the web for computing
pagerank. Technical report, Stanford University, 2003.
[9] R. I. Kondor and J. D. Lafferty. Diffusion kernels on graphs
and other discrete input spaces. In C. Sammut and A. G.
Hoffmann, editors, Proceedings of the Nineteenth
International Conference on Machine Learning (ICML),
pages 315-322, 2002.
[10] J. Lafferty and G. Lebanon. Diffusion kernels on statistical
manifolds. Journal of Machine Learning Research, 6:129-163,
Jan 2005.
[11] C. R. MacCluer. The many proofs and applications of Perron's
theorem. SIAM Review, 42(3):487-498, 2000.
[12] A. Ntoulas, M. Najork, M. Manasse, and D. Fetterly. Detecting
spam web pages through content analysis. In Proceedings of
the 15th international conference on World Wide Web
(WWW), pages 83-92, 2006.
[13] L. Page, S. Brin, R. Motwani, and T. Winograd. The pagerank
citation ranking: Bringing order to the web. Technical Report
Paper SIDL-WP-1999-0120 (version of 11/11/1999), Stanford
Digital Library Technologies Project, 1999.
[14] H. Yang, I. King, and M. R. Lyu. NHDC and PHDC:
Non-propagating and propagating heat diffusion classifiers. In
Proceedings of the 12th International Conference on Neural
Information Processing (ICONIP), pages 394-399, 2005.
[15] H. Yang, I. King, and M. R. Lyu. Predictive ranking: a novel
page ranking approach by estimating the web structure. In
Proceedings of the 14th international conference on World
Wide Web (WWW) - Special interest tracks and posters,
pages 944-945, 2005.
[16] H. Yang, I. King, and M. R. Lyu. Predictive random graph
ranking on the web. In Proceedings of the IEEE World
Congress on Computational Intelligence (WCCI), pages
3491-3498, 2006.
[17] D. Zhou, J. Weston, A. Gretton, O. Bousquet, and
B. Schölkopf. Ranking on data manifolds. In S. Thrun, L. Saul,
and B. Schölkopf, editors, Advances in Neural Information
Processing Systems 16 (NIPS 2003), 2004. | link stuffing;web graph;group-to-group relation;keyword stuffing;gaussian kernel smoothing;ranking;diffusionrank;pagerank;equal voting ability;random graph;web spam;link community;machine learning;link analysis;seed selection algorithm |
train_H-40 | Cross-Lingual Query Suggestion Using Query Logs of Different Languages | Query suggestion aims to suggest relevant queries for a given query, which help users better specify their information needs. Previously, the suggested terms are mostly in the same language of the input query. In this paper, we extend it to cross-lingual query suggestion (CLQS): for a query in one language, we suggest similar or relevant queries in other languages. This is very important to scenarios of cross-language information retrieval (CLIR) and cross-lingual keyword bidding for search engine advertisement. Instead of relying on existing query translation technologies for CLQS, we present an effective means to map the input query of one language to queries of the other language in the query log. Important monolingual and cross-lingual information such as word translation relations and word co-occurrence statistics, etc. are used to estimate the cross-lingual query similarity with a discriminative model. Benchmarks show that the resulting CLQS system significantly outperforms a baseline system based on dictionary-based query translation. Besides, the resulting CLQS is tested with French to English CLIR tasks on TREC collections. The results demonstrate higher effectiveness than the traditional query translation methods. | 1. INTRODUCTION
Query suggestion is a functionality to help users of a search
engine to better specify their information need, by narrowing
down or expanding the scope of the search with synonymous
queries and relevant queries, or by suggesting related queries that
have been frequently used by other users. Search engines, such as
Google, Yahoo!, MSN, Ask Jeeves, all have implemented query
suggestion functionality as a valuable addition to their core search
method. In addition, the same technology has been leveraged to
recommend bidding terms to online advertisers in the
pay-for-performance search market [12].
Query suggestion is closely related to query expansion which
extends the original query with new search terms to narrow the
scope of the search. But different from query expansion, query
suggestion aims to suggest full queries that have been formulated
by users so that the query integrity and coherence are preserved in
the suggested queries.
Typical methods for query suggestion exploit query logs and
document collections, by assuming that in the same period of
time, many users share the same or similar interests, which can be
expressed in different manners [12, 14, 26]. By suggesting the
related and frequently used formulations, it is hoped that the new
query can cover more relevant documents. However, all of the
existing studies dealt with monolingual query suggestion and to
our knowledge, there is no published study on cross-lingual query
suggestion (CLQS). CLQS aims to suggest related queries but in a
different language. It has wide applications on World Wide Web:
for cross-language search or for suggesting relevant bidding terms
in a different language. 1
CLQS can be approached as a query translation problem, i.e., to
suggest the queries that are translations of the original query.
Dictionaries, large size of parallel corpora and existing
commercial machine translation systems can be used for
translation. However, these kinds of approaches usually rely on
static knowledge and data. It cannot effectively reflect the quickly
shifting interests of Web users. Moreover, there are some
problems with translated queries in target language. For instance,
the translated terms can be reasonable translations, but they are
not popularly used in the target language. For example, the French
query aliment biologique is translated into biologic food by
Google translation tool2
, yet the correct formulation nowadays
should be organic food. Therefore, there exist many mismatch
cases between the translated terms and the really used terms in
target language. This mismatch makes the suggested terms in the
target language ineffective.
A natural way of solving this mismatch is to map the
queries in the source language and the queries in the target
language, by using the query log of a search engine. We exploit
the fact that the users of search engines in the same period of time
have similar interests, and they submit queries on similar topics in
different languages. As a result, a query written in a source
language likely has an equivalent in a query log in the target
language. In particular, if the user intends to perform CLIR, then
original query is even more likely to have its correspondent
included in the target language query log. Therefore, if a
candidate for CLQS appears often in the query log, then it is more
likely the appropriate one to be suggested.
In this paper, we propose a method of calculating the similarity
between source language query and the target language query by
exploiting, in addition to the translation information, a wide
spectrum of bilingual and monolingual information, such as term
co-occurrences, query logs with click-through data, etc. A
discriminative model is used to learn the cross-lingual query
similarity based on a set of manually translated queries. The
model is trained by optimizing the cross-lingual similarity to best
fit the monolingual similarity between one query and the other
query's translation. Besides being benchmarked as an independent
module, the resulting CLQS system is tested as a new means of
query translation in CLIR task on TREC collections. The results
show that this new translation method is more effective than the
traditional query translation method.
The remainder of this paper is organized as follows: Section 2
introduces the related work; Section 3 describes in detail the
discriminative model for estimating cross-lingual query similarity;
Section 4 presents a new CLIR approach using cross-lingual query
suggestion as a bridge across language boundaries. Section 5
discusses the experiments and benchmarks; finally, the paper is
concluded in Section 6.
2. RELATED WORK
Most approaches to CLIR perform a query translation followed by
a monolingual IR. Typically, queries are translated either using a
bilingual dictionary [22], a machine translation software [9] or a
parallel corpus [20].
Despite the various types of resources used, out-of-vocabulary
(OOV) words and translation disambiguation are the two major
bottlenecks for CLIR [20]. In [7, 27], OOV term translations are
mined from the Web using a search engine. In [17], bilingual
knowledge is acquired based on anchor text analysis. In addition,
word co-occurrence statistics in the target language has been
leveraged for translation disambiguation [3, 10, 11, 19].
2
http://www.google.com/language_tools
Nevertheless, it is arguable that accurate query translation may
not be necessary for CLIR. Indeed, in many cases, it is helpful to
introduce words even if they are not direct translations of any
query word, but are closely related to the meaning of the query.
This observation has led to the development of cross-lingual query
expansion (CLQE) techniques [2, 16, 18]. [2] reports the
enhancement on CLIR by post-translation expansion. [16]
develops a cross-lingual relevancy model by leveraging the
cross-lingual co-occurrence statistics in parallel texts. [18] makes
performance comparison on multiple CLQE techniques, including
pre-translation expansion and post-translation expansion.
However, there is lack of a unified framework to combine the
wide spectrum of resources and recent advances of mining
techniques for CLQE.
CLQS is different from CLQE in that it aims to suggest full
queries that have been formulated by users in another language.
As CLQS exploits up-to-date query logs, it is expected that for
most user queries, we can find common formulations on these
topics in the query log in the target language. Therefore, CLQS
also plays a role of adapting the original query formulation to the
common formulations of similar topics in the target language.
Query logs have been successfully used for monolingual IR [8,
12, 15, 26], especially in monolingual query suggestions [12] and
relating the semantically relevant terms for query expansion [8,
15]. In [1], the target language query log has been exploited to
help query translation in CLIR.
3. ESTIMATING CROSS-LINGUAL
QUERY SIMILARITY
A search engine has a query log containing user queries in
different languages within a certain period of time. In addition to
query terms, click-through information is also recorded. Therefore,
we know which documents have been selected by users for each
query. Given a query in the source language, our CLQS task is to
determine one or several similar queries in the target language
from the query log.
The key problem with cross-lingual query suggestion is how to
learn a similarity measure between two queries in different
languages. Although various statistical similarity measures have
been studied for monolingual terms [8, 26], most of them are
based on term co-occurrence statistics, and can hardly be applied
directly in cross-lingual settings.
In order to define a similarity measure across languages, one
has to use at least one translation tool or resource. So the measure
is based on both translation relation and monolingual similarity. In
this paper, as our purpose is to provide up-to-date query similarity
measure, it may not be sufficient to use only a static translation
resource. Therefore, we also integrate a method to mine possible
translations on the Web. This method is particularly useful for
dealing with OOV terms.
Given a set of resources of different natures, the next question
is how to integrate them in a principled manner. In this paper, we
propose a discriminative model to learn the appropriate similarity
measure. The principle is as follows: we assume that we have a
reasonable monolingual query similarity measure. For any training
query example for which a translation exists, its similarity
measure (with any other query) is transposed to its translation.
Therefore, we have the desired cross-language similarity value for
this example. Then we use a discriminative model to learn the
cross-language similarity function which fits the best these
examples.
In the following sections, let us first describe the detail of the
discriminative model for cross-lingual query similarity estimation.
Then we introduce all the features (monolingual and cross-lingual
information) that we will use in the discriminative model.
3.1 Discriminative Model for Estimating
Cross-Lingual Query Similarity
In this section, we propose a discriminative model to learn
cross-lingual query similarities in a principled manner. The principle is
as follows: for a reasonable monolingual query similarity between
two queries, a cross-lingual correspondent can be deduced
between one query and another query's translation. In other
words, for a pair of queries in different languages, their
cross-lingual similarity should fit the monolingual similarity between
one query and the other query's translation. For example, the
similarity between French query pages jaunes (i.e., yellow
page in English) and English query telephone directory should
be equal to the monolingual similarity between the translation of
the French query yellow page and telephone directory. There
are many ways to obtain a monolingual similarity measure
between terms, e.g., term co-occurrence based mutual information
and $\chi^2$. Any of them can be used as the target for the cross-lingual
similarity function to fit. In this way, cross-lingual query
similarity estimation is formulated as a regression task as follows:
Given a source language query $q_f$, a target language query $q_e$,
and a monolingual query similarity $sim_{ML}$, the corresponding
cross-lingual query similarity $sim_{CL}$ is defined as follows:
$$sim_{CL}(q_f, q_e) = sim_{ML}(T_{q_f}, q_e) \qquad (1)$$
where $T_{q_f}$ is the translation of $q_f$ in the target language.
Based on Equation (1), it would be relatively easy to create a
training corpus. All it requires is a list of query translations. Then
an existing monolingual query suggestion system can be used to
automatically produce similar query to each translation, and create
the training corpus for cross-lingual similarity estimation. Another
advantage is that it is fairly easy to make use of arbitrary
information sources within a discriminative modeling framework
to achieve optimal performance.
In this paper, the support vector machine (SVM) regression algorithm [25] is used to learn the cross-lingual query similarity function. Given a vector of feature functions f between $q_f$ and $q_e$, $sim_{CL}(q_f, q_e)$ is represented as an inner product between a weight vector and the feature vector in a kernel space as follows:
$sim_{CL}(q_f, q_e) = w \cdot \phi(f(q_f, q_e))$   (2)
where $\phi$ is the mapping from the input feature space onto the kernel space, and $w$ is the weight vector in the kernel space, which will be learned by the SVM regression training. Once the weight vector is learned, Equation (2) can be used to estimate the similarity between queries of different languages.
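To make the regression step concrete, the following is a minimal sketch of how Equations (1) and (2) could be realized with an off-the-shelf SVM regressor. It uses scikit-learn's SVR rather than the LibSVM toolkit mentioned in Section 3.4, and the feature-extraction interface and training pairs are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation): fit sim_CL by SVM regression.
import numpy as np
from sklearn.svm import SVR

def train_cross_lingual_similarity(feature_vectors, target_similarities):
    """feature_vectors: one row of feature values f(q_f, q_e) per training pair.
    target_similarities: the regression targets sim_ML(T_{q_f}, q_e) from Equation (1)."""
    model = SVR(kernel="rbf", C=1.0, epsilon=0.1)  # kernel and parameters are assumptions
    model.fit(np.asarray(feature_vectors), np.asarray(target_similarities))
    return model

def predict_cross_lingual_similarity(model, feature_vector):
    """Estimate sim_CL(q_f, q_e) = w . phi(f(q_f, q_e)) for one candidate pair."""
    return float(model.predict(np.asarray([feature_vector]))[0])
```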
We want to point out that instead of regression, one can
definitely simplify the task as a binary or ordinal classification, in
which case CLQS can be categorized according to discontinuous
class labels, e.g., relevant and irrelevant, or a series of levels of
relevancies, e.g., strongly relevant, weakly relevant, and
irrelevant. In either case, one can resort to discriminative
classification approaches, such as an SVM or maximum entropy
model, in a straightforward way. However, the regression
formalism enables us to fully rank the suggested queries based on
the similarity score given by Equation (1).
The Equations (1) and (2) construct a regression model for
cross-lingual query similarity estimation. In the following
sections, the monolingual query similarity measure (see Section
3.2) and the feature functions used for SVM regression (see
Section 3.3) will be presented.
3.2 Monolingual Query Similarity Measure
Based on Click-through Information
Any monolingual term similarity measure can be used as the
regression target. In this paper, we select the monolingual query
similarity measure presented in [26] which reports good
performance by using search users' click-through information in
query logs. The reason to choose this monolingual similarity is
that it is defined in a context similar to ours: according to a user log that reflects users' intentions and behavior. Therefore, we can expect that the cross-language term similarity learned from it will also reflect users' intentions and expectations.
Following [26], our monolingual query similarity is defined by
combining both query content-based similarity and click-through
commonality in the query log.
First, the content similarity between two queries p and q is defined as follows:
$similarity_{content}(p, q) = \frac{KN(p, q)}{Max(kn(p), kn(q))}$   (3)
where kn(x) is the number of keywords in a query x, and KN(p, q) is the number of keywords common to the two queries.
Secondly, the click-through based similarity is defined as follows:
$similarity_{click-through}(p, q) = \frac{RD(p, q)}{Max(rd(p), rd(q))}$   (4)
where rd(x) is the number of clicked URLs for a query x, and RD(p, q) is the number of common URLs clicked for the two queries.
Finally, the similarity between two queries is a linear combination of the content-based and click-through-based similarities:
$similarity(p, q) = \alpha \cdot similarity_{content}(p, q) + \beta \cdot similarity_{click-through}(p, q)$   (5)
where α and β are the relative importance of the two similarity measures. In this paper, we set α = 0.4 and β = 0.6 following the
practice in [26]. Queries with similarity measure higher than a
threshold with another query will be regarded as relevant
monolingual query suggestions (MLQS) for the latter. In this
paper, the threshold is set as 0.9 empirically.
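For illustration, a small sketch of the monolingual similarity of Equations (3)-(5) is given below, assuming each query is represented by its set of keywords and its set of clicked URLs; this representation and the helper names are assumptions, not part of [26].

```python
# Sketch of Equations (3)-(5); queries are given as keyword sets and clicked-URL sets.
def content_similarity(p_keywords, q_keywords):
    denom = max(len(set(p_keywords)), len(set(q_keywords)))   # Max(kn(p), kn(q))
    return len(set(p_keywords) & set(q_keywords)) / denom if denom else 0.0

def clickthrough_similarity(p_urls, q_urls):
    denom = max(len(set(p_urls)), len(set(q_urls)))           # Max(rd(p), rd(q))
    return len(set(p_urls) & set(q_urls)) / denom if denom else 0.0

def monolingual_similarity(p_keywords, p_urls, q_keywords, q_urls, alpha=0.4, beta=0.6):
    return (alpha * content_similarity(p_keywords, q_keywords)
            + beta * clickthrough_similarity(p_urls, q_urls))
```

With such a function, two queries would be treated as monolingual suggestions of each other when the returned value exceeds the 0.9 threshold mentioned above.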
3.3 Features Used for Learning Cross-Lingual
Query Similarity Measure
This section presents the extraction of candidate relevant queries
from the log with the assistance of various monolingual and
bilingual resources. Meanwhile, feature functions over the source query and the cross-lingual relevant candidates are defined. Some of the resources used here, such as the bilingual lexicon and parallel corpora, were used for query translation in previous work; note, however, that we employ them here as an auxiliary means for finding relevant candidates in the log rather than for acquiring accurate translations.
3.3.1 Bilingual Dictionary
In this subsection, an in-house bilingual dictionary containing
120,000 unique entries is used to retrieve candidate queries. Since
multiple translations may be associated with each source word,
co-occurrence based translation disambiguation is performed [3,
10]. The process is presented as follows:
Given an input query $q_f = \{w_{f1}, w_{f2}, \ldots, w_{fn}\}$ in the source language, for each query term $w_{fi}$, a set of unique translations is provided by the bilingual dictionary D: $D(w_{fi}) = \{t_{i1}, t_{i2}, \ldots, t_{im}\}$.
Then the cohesion between the translations of two query terms is
measured using mutual information which is computed as follows:
$MI(t_{ij}, t_{kl}) = P(t_{ij}, t_{kl}) \log \frac{P(t_{ij}, t_{kl})}{P(t_{ij})\, P(t_{kl})}$   (6)
where $P(t_{ij}, t_{kl}) = \frac{C(t_{ij}, t_{kl})}{N}$ and $P(t) = \frac{C(t)}{N}$.
Here C(x, y) is the number of queries in the log containing both x and y, C(x) is the number of queries containing term x, and N is the total number of queries in the log.
Based on the term-term cohesion defined in Equation (6), all the
possible query translations are ranked using the summation of the
term-term cohesion: $S_{dict}(T_{q_f}) = \sum_{i,k,\, i \neq k} MI(t_{ij}, t_{kl})$. The set of top-4 query translations is denoted as $TS(q_f)$. For each possible query translation $T \in TS(q_f)$, we retrieve all the queries containing the same keywords as T from the target language log. The retrieved queries are candidate target queries, and are assigned $S_{dict}(T)$ as the value of the feature Dictionary-based Translation Score.
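A sketch of this disambiguation step is shown below. The count and cocount callables and the number of log queries are hypothetical stand-ins for the statistics collected from the target-language query log.

```python
# Sketch of co-occurrence based translation disambiguation (Equation (6) and S_dict).
import math
from itertools import product

def mutual_information(t1, t2, count, cocount, n_queries):
    p1, p2 = count(t1) / n_queries, count(t2) / n_queries
    p12 = cocount(t1, t2) / n_queries
    return p12 * math.log(p12 / (p1 * p2)) if p12 > 0 and p1 > 0 and p2 > 0 else 0.0

def rank_query_translations(per_term_translations, count, cocount, n_queries, top_k=4):
    """per_term_translations: one list of dictionary translations per source query term."""
    scored = []
    for combo in product(*per_term_translations):
        score = sum(mutual_information(combo[i], combo[k], count, cocount, n_queries)
                    for i in range(len(combo)) for k in range(len(combo)) if i != k)
        scored.append((score, combo))                   # summed cohesion of this candidate
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]                               # TS(q_f): the top-4 query translations
```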
3.3.2 Parallel Corpora
Parallel corpora are precious resources for bilingual knowledge
acquisition. Different from the bilingual dictionary, the bilingual
knowledge learned from parallel corpora assigns probability for
each translation candidate which is useful in acquiring dominant
query translations.
In this paper, the Europarl corpus (a set of parallel French and
English texts from the proceedings of the European Parliament) is
used. The corpus is first sentence aligned. Then word alignments
are derived by training an IBM translation model 1 [4] using
GIZA++ [21]. The learned bilingual knowledge is used to extract
candidate queries from the query log. The process is presented as
follows:
Given a pair of queries, $q_f$ in the source language and $q_e$ in the target language, the Bi-Directional Translation Score is defined as follows:
$S_{IBM1}(q_f, q_e) = p_{IBM1}(q_e \mid q_f)\, p_{IBM1}(q_f \mid q_e)$   (7)
where $p_{IBM1}(y \mid x)$ is the word sequence translation probability given by IBM model 1, which has the following form:
$p_{IBM1}(y \mid x) = \frac{1}{(|x|+1)^{|y|}} \prod_{j=1}^{|y|} \sum_{i=0}^{|x|} p(y_j \mid x_i)$   (8)
where $p(y_j \mid x_i)$ is the word-to-word translation probability derived from the word-aligned corpora.
The reason to use the bidirectional translation probability is to deal with the fact that common words can be considered as possible translations of many words. By using bidirectional translation, we test whether the translation words can be translated back to the source words. This helps concentrate the translation probability on the most specific translation candidates.
Now, given an input query $q_f$, the top 10 queries $\{q_e\}$ with the highest bidirectional translation scores with $q_f$ are retrieved from the query log, and $S_{IBM1}(q_f, q_e)$ in Equation (7) is assigned as the value of the feature Bi-Directional Translation Score.
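The following sketch shows one way the bi-directional score of Equations (7) and (8) could be computed, assuming trans_prob functions that return the GIZA++-trained word translation probabilities; the NULL token plays the role of the empty position x_0.

```python
# Sketch of Equations (7) and (8); trans_prob(y, x) ~ p(y|x) from the word-aligned corpus.
def ibm1_probability(target_words, source_words, trans_prob):
    source = ["NULL"] + list(source_words)              # |x| + 1 positions, including NULL
    prob = 1.0
    for y in target_words:
        prob *= sum(trans_prob(y, x) for x in source) / len(source)
    return prob

def bidirectional_translation_score(q_f, q_e, trans_prob_f2e, trans_prob_e2f):
    """S_IBM1(q_f, q_e) = p_IBM1(q_e | q_f) * p_IBM1(q_f | q_e)."""
    return (ibm1_probability(q_e, q_f, trans_prob_f2e)
            * ibm1_probability(q_f, q_e, trans_prob_e2f))
```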
3.3.3 Online Mining for Related Queries
OOV word translation is a major knowledge bottleneck for query
translation and CLIR. To overcome this knowledge bottleneck,
web mining has been exploited in [7, 27] to acquire
English-Chinese term translations based on the observation that Chinese
terms may co-occur with their English translations in the same
web page. In this section, this web mining approach is adapted to
acquire not only translations but semantically related queries in
the target language.
It is assumed that if a query in the target language co-occurs
with the source query in many web pages, they are probably
semantically related. Therefore, a simple method is to send the
source query to a search engine (Google in our case) for Web
pages in the target language in order to find related queries in the
target language. For instance, by sending a French query pages
jaunes to search for English pages, the English snippets
containing the key words yellow pages or telephone directory
will be returned. However, this simple approach may induce a significant amount of noise due to non-relevant returns from the search engine. In order to improve the relevancy of the bilingual snippets, we extend the simple approach by the following query modification: the original query is combined with its dictionary-based keyword translations, unified by the ∧ (AND) and ∨ (OR) operators into a single Boolean query. For example, for a given query $q = abc$, where the set of translation entries in the dictionary for a is $\{a_1, a_2, a_3\}$, for b is $\{b_1, b_2\}$, and for c is $\{c_1\}$, we issue $q \wedge (a_1 \vee a_2 \vee a_3) \wedge (b_1 \vee b_2) \wedge c_1$ as one web query.
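A minimal sketch of how such a modified Boolean web query could be assembled is given below; the dictionary interface and the exact operator syntax expected by the search engine are assumptions.

```python
# Sketch: build q AND (a1 OR a2 OR ...) AND ... from the source query and its translations.
def build_boolean_web_query(source_terms, dictionary):
    clauses = ['"' + " ".join(source_terms) + '"']      # the original query itself
    for term in source_terms:
        translations = dictionary.get(term, [])
        if translations:
            clauses.append("(" + " OR ".join(translations) + ")")
    return " AND ".join(clauses)
```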
From the returned top 700 snippets, the most frequent 10 target
queries are identified, and are associated with the feature
Frequency in the Snippets.
Furthermore, we use Co-Occurrence Double-Check (CODC)
Measure to weight the association between the source and target
queries. CODC Measure is proposed in [6] as an association
measure based on snippet analysis, named Web Search with
Double Checking (WSDC) model. In WSDC model, two objects a
and b are considered to have an association if b can be found by
using a as query (forward process), and a can be found by using b
as query (backward process) by web search. The forward process
counts the frequency of b in the top N snippets of query a, denoted
as freq(b@a). Similarly, the backward process counts the frequency of a in the top N snippets of query b, denoted as freq(a@b). Then the CODC association score is defined as follows:
$$S_{CODC}(q_e, q_f) = \begin{cases} 0, & \text{if } freq(q_e@q_f) \times freq(q_f@q_e) = 0 \\ e^{\left[\log\left(\frac{freq(q_e@q_f)}{freq(q_f)} \times \frac{freq(q_f@q_e)}{freq(q_e)}\right)\right]^{\alpha}}, & \text{otherwise} \end{cases} \quad (9)$$
CODC measures the association of two terms in the range between 0 and 1. Under the two extreme cases, $q_e$ and $q_f$ have no association when $freq(q_e@q_f) = 0$ or $freq(q_f@q_e) = 0$, and have the strongest association when $freq(q_e@q_f) = freq(q_f)$ and $freq(q_f@q_e) = freq(q_e)$. In our experiment, α is set at 0.15 following the practice in [6].
Any query $q_e$ mined from the Web will be associated with a feature CODC Measure with $S_{CODC}(q_f, q_e)$ as its value.
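The sketch below illustrates one reading of Equation (9). Since the bracketed ratio lies in (0, 1], the exponent α is applied to the ratio itself (equivalently, α multiplies the logarithm), which keeps the score within [0, 1] as described; this reading, as well as the snippet-frequency helpers, are assumptions rather than details given in the paper.

```python
# Sketch of the CODC measure; freq_at(a, b) ~ freq(a@b), freq(q) ~ q's own snippet frequency.
import math

def codc_score(q_f, q_e, freq_at, freq, alpha=0.15):
    f_e_at_f = freq_at(q_e, q_f)                        # forward check: q_e in q_f's snippets
    f_f_at_e = freq_at(q_f, q_e)                        # backward check: q_f in q_e's snippets
    if f_e_at_f == 0 or f_f_at_e == 0:
        return 0.0
    ratio = (f_e_at_f / freq(q_f)) * (f_f_at_e / freq(q_e))
    return math.exp(alpha * math.log(ratio))            # == ratio ** alpha, in [0, 1]
```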
3.3.4 Monolingual Query Suggestion
For all the candidate queries $Q_0$ retrieved using the dictionary (see Section 3.3.1), parallel data (see Section 3.3.2) and web mining (see Section 3.3.3), the monolingual query suggestion system (described in Section 3.2) is called to produce more related queries in the target language. For each target query $q_e$, its monolingual source query $SQ_{ML}(q_e)$ is defined as the query in $Q_0$ with the highest monolingual similarity with $q_e$, i.e.,
$SQ_{ML}(q_e) = \arg\max_{q_e' \in Q_0} sim_{ML}(q_e, q_e')$   (10)
Then the monolingual similarity between $q_e$ and $SQ_{ML}(q_e)$ is used as the value of $q_e$'s Monolingual Query Suggestion Feature. For any target query $q \in Q_0$, its Monolingual Query Suggestion Feature is set to 1.
For any query $q_e \notin Q_0$, its values of Dictionary-based Translation Score, Bi-Directional Translation Score, Frequency in the Snippet, and CODC Measure are set equal to the feature values of $SQ_{ML}(q_e)$.
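The feature propagation of this subsection can be sketched as follows; the data structures (a feature dictionary per candidate query) are illustrative assumptions.

```python
# Sketch of Equation (10) and the feature back-off for MLQS-expanded queries.
def attach_features_to_expansion(q_e, q0_candidates, q0_features, sim_ml):
    """q_e: a query produced by monolingual query suggestion (not in Q_0).
    q0_candidates: the queries in Q_0; q0_features: their already-computed feature dicts."""
    source = max(q0_candidates, key=lambda q: sim_ml(q_e, q))   # SQ_ML(q_e)
    features = dict(q0_features[source])        # inherit dictionary/IBM/web-mining feature values
    features["mlqs"] = sim_ml(q_e, source)       # Monolingual Query Suggestion feature
    return features
```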
3.4 Estimating Cross-lingual Query Similarity
In summary, four categories of features are used to learn the cross-lingual query similarity. The SVM regression algorithm [25] is used to learn the weights in Equation (2). In this paper, the LibSVM toolkit [5] is used for the regression training.
In the prediction stage, the candidate queries are ranked using the cross-lingual query similarity score computed by Equation (2), i.e., $sim_{CL}(q_f, q_e) = w \cdot \phi(f(q_f, q_e))$, and queries with a similarity score lower than a threshold are regarded as non-relevant. The threshold is learned using a development data set by fitting MLQS's output.
4. CLIR BASED ON CROSS-LINGUAL
QUERY SUGGESTION
In Section 3, we presented a discriminative model for cross-lingual query suggestion. However, objectively benchmarking a query suggestion system is not a trivial task. In this paper, we propose to use CLQS as an alternative to query translation, and test its effectiveness in CLIR tasks; good CLIR performance then reflects the high quality of the suggested queries.
Given a source query $q_f$, a set of relevant queries $\{q_e\}$ in the target language is recommended using the cross-lingual query suggestion system. A monolingual IR system based on the BM25 model [23] is then called with each $q \in \{q_e\}$ as a query to retrieve documents. Finally, the retrieved documents are re-ranked based on the sum of the BM25 scores from the individual monolingual retrievals.
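A compact sketch of this retrieval and re-ranking step is given below; bm25_search is a placeholder for a monolingual BM25 retrieval call that returns document scores.

```python
# Sketch: retrieve with every suggested query and re-rank by the sum of BM25 scores.
from collections import defaultdict

def clqs_based_retrieval(suggested_queries, bm25_search):
    combined = defaultdict(float)
    for q in suggested_queries:
        for doc_id, score in bm25_search(q).items():
            combined[doc_id] += score
    return sorted(combined.items(), key=lambda item: item[1], reverse=True)
```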
5. PERFORMANCE EVALUATION
In this section, we will benchmark the cross-lingual query
suggestion system, comparing its performance with monolingual
query suggestion, studying the contribution of various information
sources, and testing its effectiveness when being used in CLIR
tasks.
5.1 Data Resources
In our experiments, French and English are selected as the source
and target language respectively. Such selection is due to the fact
that large scale query logs are readily available for these two
languages. A one-month English query log (containing 7 million
unique English queries with occurrence frequency more than 5) of
the MSN search engine is used as the target language log, and a monolingual query suggestion system is built on top of it. In
addition, 5,000 French queries are selected randomly from a
French query log (containing around 3 million queries), and are
manually translated into English by professional French-English
translators. Among the 5,000 French queries, 4,171 queries have
their translations in the English query log, and are used for CLQS
training and testing. Furthermore, among the 4,171 French
queries, 70% are used for cross-lingual query similarity training,
10% are used as the development data to determine the relevancy
threshold, and 20% are used for testing. To retrieve the
cross-lingual related queries, an in-house French-English bilingual
lexicon (containing 120,000 unique entries) and the Europarl
corpus are used.
Besides benchmarking CLQS as an independent system, the
CLQS is also tested as a query translation system for CLIR
tasks. Based on the observation that the CLIR performance
heavily relies on the quality of the suggested queries, this
benchmark measures the quality of CLQS in terms of its
effectiveness in helping CLIR. To perform such benchmark, we
use the documents of TREC6 CLIR data (AP88-90 newswire,
750MB) with the officially provided 25 short French-English query pairs (CL1-CL25). This data set was selected because the average query length is 3.3 words, which matches the web query log we use to train CLQS.
5.2 Performance of Cross-lingual Query
Suggestion
Mean-square-error (MSE) is used to measure the regression error
and it is defined as follows:
$MSE = \frac{1}{l} \sum_{i} \left( sim_{CL}(q_{f_i}, q_{e_i}) - sim_{ML}(T_{q_{f_i}}, q_{e_i}) \right)^2$
where l is the total number of cross-lingual query pairs in the
testing data.
As described in Section 3.4, a relevancy threshold is learned
using the development data, and only CLQS with similarity value
above the threshold is regarded as truly relevant to the input
query. In this way, CLQS can also be benchmarked as a
classification task using precision (P) and recall (R) which are
defined as follows:
$P = \frac{|S_{CLQS} \cap S_{MLQS}|}{|S_{CLQS}|}, \quad R = \frac{|S_{CLQS} \cap S_{MLQS}|}{|S_{MLQS}|}$
where $S_{CLQS}$ is the set of relevant queries suggested by CLQS, and $S_{MLQS}$ is the set of relevant queries suggested by MLQS (see Section 3.2).
The benchmarking results with various feature configurations
are shown in Table 1.
Table 1. CLQS performance with different feature settings (DD: dictionary only; DD+PC: dictionary and parallel corpora; DD+PC+Web: dictionary, parallel corpora, and web mining; DD+PC+Web+MLQS: dictionary, parallel corpora, web mining and monolingual query suggestion)

Features         Regression (MSE)   Classification (P)   Classification (R)
DD               0.274              0.723                0.098
DD+PC            0.224              0.713                0.125
DD+PC+Web        0.115              0.808                0.192
DD+PC+Web+MLQS   0.174              0.796                0.421
Table 1 reports the performance comparison with various
feature settings. The baseline system (DD) uses a conventional
query translation approach, i.e., a bilingual dictionary with
co-occurrence-based translation disambiguation. The baseline system
only covers less than 10% of the suggestions made by MLQS.
Using additional features obviously enables CLQS to generate
more relevant queries. The most significant improvement on recall
is achieved by exploiting MLQS. The final CLQS system is able
to generate 42% of the queries suggested by MLQS. Among all
the feature combinations, there is no significant change in
precision. This indicates that our methods can improve the recall
by effectively leveraging various information sources without
losing the accuracy of the suggestions.
Besides benchmarking CLQS by comparing its output with
MLQS output, 200 French queries are randomly selected from the
French query log. These queries are double-checked to make sure
that they are not in the CLQS training corpus. Then CLQS system
is used to suggest relevant English queries for them. On average,
for each French query, 8.7 relevant English queries are suggested.
Then the total 1,740 suggested English queries are manually
checked by two professional English/French translators with
cross-validation. Among the 1,740 suggested queries, 1,407
queries are recognized as relevant to the original ones, hence the
accuracy is 80.9%. Figure 1 shows an example of CLQS of the
French query terrorisme international (international terrorism
in English).
5.3 CLIR Performance
In this section, CLQS is tested with French to English CLIR tasks.
We conduct CLIR experiments using the TREC 6 CLIR dataset
described in Section 5.1. The CLIR is performed using a query
translation system followed by a BM25-based [23] monolingual
IR module. The following three different systems have been used
to perform query translation: (1) CLQS: our CLQS system; (2)
MT: Google French to English machine translation system; (3)
DT: a dictionary-based query translation system using co-occurrence statistics for translation disambiguation. The
translation disambiguation algorithm is presented in Section 3.3.1.
Besides, the monolingual IR performance is also reported as a
reference. The average precision of the four IR systems is
reported in Table 2, and the 11-point precision-recall curves are
shown in Figure 2.
Table 2. Average precision of CLIR on TREC 6 Dataset
(Monolingual: monolingual IR system; MT: CLIR based on
machine translation; DT: CLIR based on dictionary
translation; CLQS: CLQS-based CLIR)
IR System Average Precision % of Monolingual IR
Monolingual 0.266 100%
MT 0.217 81.6%
DT 0.186 69.9%
CLQS 0.233 87.6%
Figure 1. An example of CLQS of the French query
terrorisme international
international terrorism (0.991); what is terrorism (0.943);
counter terrorism (0.920); terrorist (0.911);
terrorist attacks (0.898); international terrorist (0.853);
world terrorism (0.845); global terrorism (0.833);
transnational terrorism (0.821); human rights (0.811);
terrorist groups (0.777); patterns of global terrorism (0.762);
september 11 (0.734)
[Figure 2. 11-point precision-recall curves on the TREC6 CLIR data set (precision vs. recall) for the Monolingual, MT, DT, and CLQS systems.]
The benchmark shows that using CLQS as a query translation
tool outperforms CLIR based on machine translation by 7.4%,
outperforms CLIR based on dictionary translation by 25.2%, and
achieves 87.6% of the monolingual IR performance.
The effectiveness of CLQS lies in its ability in suggesting
closely related queries besides accurate translations. For example,
for the query CL14 terrorisme international (international
terrorism), although the machine translation tool translates the
query correctly, CLQS system still achieves higher score by
recommending many additional related terms such as global
terrorism, world terrorism, etc. (as shown in Figure 1). Another
example is the query La pollution causée par l'automobile (air
pollution due to automobile) of CL6. The MT tool provides the
translation the pollution caused by the car, while CLQS system
enumerates all the possible synonyms of car, and suggests the
following queries car pollution, auto pollution, automobile
pollution. Besides, other related queries such as global
warming are also suggested. For the query CL12 La culture
écologique (organic farming), the MT tool fails to generate the
correct translation. Although the correct translation is not in our French-English dictionary either, the CLQS system generates organic farm as a relevant query thanks to successful web mining.
The above experiment demonstrates the effectiveness of using
CLQS to suggest relevant queries for CLIR enhancement. A
related research is to perform query expansion to enhance CLIR
[2, 18]. So it is very interesting to compare the CLQS approach
with the conventional query expansion approaches. Following
[18], post-translation expansion is performed based on
pseudo-relevance feedback (PRF) techniques. We first perform CLIR in
the same way as before. Then we use the traditional PRF
algorithm described in [24] to select expansion terms. In our
experiments, the top 10 terms are selected to expand the original
query, and the new query is used to search the collection for the
second time. The new CLIR performance in terms of average
precision is shown in Table 3. The 11-point P-R curves are drawn
in Figure 3.
Although being enhanced by pseudo-relevance feedback, the
CLIR using either machine translation or dictionary-based query
translation still does not perform as well as the CLQS-based approach. A statistical t-test [13] is conducted to indicate whether
the CLQS-based CLIR performs significantly better. Pair-wise
p-values are shown in Table 4. Clearly, CLQS significantly
outperforms MT and DT without PRF as well as DT+PRF, but its
superiority over MT+PRF is not significant. However, when
combined with PRF, CLQS significantly outperforms all the other
methods. This indicates the higher effectiveness of CLQS in
related term identification by leveraging a wide spectrum of
resources. Furthermore, post-translation expansion is capable of
improving CLQS-based CLIR. This is due to the fact that CLQS
and pseudo-relevance feedback are leveraging different categories
of resources, and both approaches can be complementary.
Table 3. Comparison of average precision (AP) on TREC 6 without and with post-translation expansion; percentages in parentheses are relative to the monolingual IR performance.

IR System     AP without PRF   AP with PRF
Monolingual   0.266 (100%)     0.288 (100%)
MT            0.217 (81.6%)    0.222 (77.1%)
DT            0.186 (69.9%)    0.220 (76.4%)
CLQS          0.233 (87.6%)    0.259 (89.9%)
[Figure 3. 11-point precision-recall curves on the TREC6 CLIR data set with pseudo-relevance feedback, for the Monolingual, MT, DT, and CLQS systems.]
Table 4. Results of the pair-wise significance t-test (a p-value < 0.05 is considered statistically significant).

            MT       DT         MT+PRF   DT+PRF
CLQS        0.0298   3.84e-05   0.1472   0.0282
CLQS+PRF    0.0026   2.63e-05   0.0094   0.0016
6. CONCLUSIONS
In this paper, we proposed a new approach to cross-lingual query
suggestion by mining relevant queries in different languages from
query logs. The key solution to this problem is to learn a
cross-lingual query similarity measure by a discriminative model
exploiting multiple monolingual and bilingual resources. The
model is trained based on the principle that cross-lingual
similarity should best fit the monolingual similarity between one
query and the other query's translation.
The baseline CLQS system applies a typical query translation
approach, using a bilingual dictionary with co-occurrence-based
translation disambiguation. This approach only covers 10% of the
relevant queries suggested by an MLQS system (when the exact
translation of the original query is given). By leveraging
additional resources such as parallel corpora, web mining and
log-based monolingual query expansion, the final system is able to
cover 42% of the relevant queries suggested by an MLQS system
with precision as high as 79.6%.
To further test the quality of the suggested queries, CLQS system
is used as a query translation system in CLIR tasks.
Benchmarked using TREC 6 French to English CLIR task, CLQS
demonstrates higher effectiveness than the traditional query
translation methods using either bilingual dictionary or
commercial machine translation tools.
The improvement on TREC French to English CLIR task by
using CLQS demonstrates the high quality of the suggested
queries. This also shows the strong correspondence between the
input French queries and English queries in the log. In the future,
we will build CLQS system between languages which may be
more loosely correlated, e.g., English and Chinese, and study the
CLQS performance change due to the less strong correspondence
among queries in such languages.
7. REFERENCES
[1] Ambati, V. and Rohini, U. Using Monolingual Clickthrough
Data to Build Cross-lingual Search Systems. In Proceedings
of New Directions in Multilingual Information Access
Workshop of SIGIR 2006.
[2] Ballesteros, L. A. and Croft, W. B. Phrasal Translation and
Query Expansion Techniques for Cross-Language
Information Retrieval. In Proc. SIGIR 1997, pp. 84-91.
[3] Ballesteros, L. A. and Croft, W. B. Resolving Ambiguity for
Cross-Language Retrieval. In Proc. SIGIR 1998, pp. 64-71.
[4] Brown, P. F., Pietra, D. S. A., Pietra, D. V. J., and Mercer, R.
L. The Mathematics of Statistical Machine Translation:
Parameter Estimation. Computational Linguistics,
19(2):263-311, 1993.
[5] Chang, C. C. and Lin, C. LIBSVM: a Library for Support
Vector Machines (Version 2.3). 2001.
http://citeseer.ist.psu.edu/chang01libsvm.html
[6] Chen, H.-H., Lin, M.-S., and Wei, Y.-C. Novel Association
Measures Using Web Search with Double Checking. In Proc.
COLING/ACL 2006, pp. 1009-1016.
[7] Cheng, P.-J., Teng, J.-W., Chen, R.-C., Wang, J.-H., Lu,
W.-H., and Chien, L.-F. Translating Unknown Queries with Web
Corpora for Cross-Language Information Retrieval. In Proc.
SIGIR 2004, pp. 146-153.
[8] Cui, H., Wen, J. R., Nie, J.-Y., and Ma, W. Y. Query
Expansion by Mining User Logs. IEEE Trans. on Knowledge
and Data Engineering, 15(4):829-839, 2003.
[9] Fujii A. and Ishikawa, T. Applying Machine Translation to
Two-Stage Cross-Language Information Retrieval. In
Proceedings of 4th Conference of the Association for
Machine Translation in the Americas, pp. 13-24, 2000.
[10] Gao, J. F., Nie, J.-Y., Xun, E., Zhang, J., Zhou, M., and
Huang, C. Improving query translation for CLIR using
statistical Models. In Proc. SIGIR 2001, pp. 96-104.
[11] Gao, J. F., Nie, J.-Y., He, H., Chen, W., and Zhou, M.
Resolving Query Translation Ambiguity using a Decaying
Co-occurrence Model and Syntactic Dependence Relations.
In Proc. SIGIR 2002, pp. 183-190.
[12] Gleich, D., and Zhukov, L. SVD Subspace Projections for
Term Suggestion Ranking and Clustering. In Technical
Report, Yahoo! Research Labs, 2004.
[13] Hull, D. Using Statistical Testing in the Evaluation of
Retrieval Experiments. In Proc. SIGIR 1993, pp. 329-338.
[14] Jeon, J., Croft, W. B., and Lee, J. Finding Similar Questions
in Large Question and Answer Archives. In Proc. CIKM
2005, pp. 84-90.
[15] Joachims, T. Optimizing Search Engines Using Clickthrough
Data. In Proc. SIGKDD 2002, pp. 133-142.
[16] Lavrenko, V., Choquette, M., and Croft, W. B. Cross-Lingual
Relevance Models. In Proc. SIGIR 2002, pp. 175-182.
[17] Lu, W.-H., Chien, L.-F., and Lee, H.-J. Anchor Text Mining
for Translation Extraction of Query Terms. In Proc. SIGIR
2001, pp. 388-389.
[18] McNamee, P. and Mayfield, J. Comparing Cross-Language
Query Expansion Techniques by Degrading Translation
Resources. In Proc. SIGIR 2002, pp. 159-166.
[19] Monz, C. and Dorr, B. J. Iterative Translation
Disambiguation for Cross-Language Information Retrieval.
In Proc. SIGIR 2005, pp. 520-527.
[20] Nie, J.-Y., Simard, M., Isabelle, P., and Durand, R.
CrossLanguage Information Retrieval based on Parallel Text and
Automatic Mining of Parallel Text from the Web. In Proc.
SIGIR 1999, pp. 74-81.
[21] Och, F. J. and Ney, H. A Systematic Comparison of Various
Statistical Alignment Models. Computational Linguistics,
29(1):19-51, 2003.
[22] Pirkola, A., Hedlund, T., Keshusalo, H., and Järvelin, K.
Dictionary-Based Cross-Language Information Retrieval:
Problems, Methods, and Research Findings. Information
Retrieval, 4(3/4):209-230, 2001.
[23] Robertson, S. E., Walker, S., Hancock-Beaulieu, M. M., and
Gatford, M. OKAPI at TREC-3. In Proc. TREC-3, pp. 200-225, 1995.
[24] Robertson, S. E. and Jones, K. S. Relevance Weighting of
Search Terms. Journal of the American Society of
Information Science, 27(3):129-146, 1976.
[25] Smola, A. J. and Schölkopf, B. A. Tutorial on Support Vector
Regression. Statistics and Computing, 14(3):199-222, 2004.
[26] Wen, J. R., Nie, J.-Y., and Zhang, H. J. Query Clustering
Using User Logs. ACM Trans. Information Systems,
20(1):59-81, 2002.
[27] Zhang, Y. and Vines, P. Using the Web for Automated
Translation Extraction in Cross-Language Information
Retrieval. In Proc. SIGIR 2004, pp. 162-169. | bidding term;benchmark;target language query log;map;query expansion;query suggestion;query log;cross-language information retrieval;query translation;search engine advertisement;search engine;keyword bidding;monolingual query suggestion |
train_H-41 | HITS on the Web: How does it Compare? | This paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, when used in combination with a state-ofthe-art text retrieval algorithm exploiting anchor text. We quantified their effectiveness using three common performance measures: the mean reciprocal rank, the mean average precision, and the normalized discounted cumulative gain measurements. The evaluation is based on two large data sets: a breadth-first search crawl of 463 million web pages containing 17.6 billion hyperlinks and referencing 2.9 billion distinct URLs; and a set of 28,043 queries sampled from a query log, each query having on average 2,383 results, about 17 of which were labeled by judges. We found that HITS outperforms PageRank, but is about as effective as web-page in-degree. The same holds true when any of the link-based features are combined with the text retrieval algorithm. Finally, we studied the relationship between query specificity and the effectiveness of selected features, and found that link-based features perform better for general queries, whereas BM25F performs better for specific queries. | 1. INTRODUCTION
Link graph features such as in-degree and PageRank have
been shown to significantly improve the performance of text
retrieval algorithms on the web. The HITS algorithm is also
believed to be of interest for web search; to some degree,
one may expect HITS to be more informative than other
link-based features because it is query-dependent: it tries to
measure the interest of pages with respect to a given query.
However, it remains unclear today whether there are
practical benefits of HITS over other link graph measures. This
is even more true when we consider that modern retrieval
algorithms used on the web use a document representation
which incorporates the document's anchor text, i.e. the text
of incoming links. This, at least to some degree, takes the
link graph into account, in a query-dependent manner.
Comparing HITS to PageRank or in-degree empirically is
no easy task. There are two main difficulties: scale and
relevance. Scale is important because link-based features are
known to improve in quality as the document graph grows.
If we carry out a small experiment, our conclusions won't
carry over to large graphs such as the web. However,
computing HITS efficiently on a graph the size of a realistic web
crawl is extraordinarily difficult. Relevance is also crucial
because we cannot measure the performance of a feature in
the absence of human judgments: what is crucial is ranking
at the top of the ten or so documents that a user will peruse.
To our knowledge, this paper is the first attempt to
evaluate HITS at a large scale and compare it to other link-based
features with respect to human evaluated judgment.
Our results confirm many of the intuitions we have about
link-based features and their relationship to text retrieval
methods exploiting anchor text. This is reassuring: in the
absence of a theoretical model capable of tying these
measures with relevance, the only way to validate our intuitions
is to carry out realistic experiments. However, we were quite
surprised to find that HITS, a query-dependent feature, is
about as effective as web page in-degree, the most
simpleminded query-independent link-based feature. This
continues to be true when the link-based features are combined
with a text retrieval algorithm exploiting anchor text.
The remainder of this paper is structured as follows:
Section 2 surveys related work. Section 3 describes the data
sets we used in our study. Section 4 reviews the
performance measures we used. Sections 5 and 6 describe the
PageRank and HITS algorithms in more detail, and sketch
the computational infrastructure we employed to carry out
large scale experiments. Section 7 presents the results of our
evaluations, and Section 8 offers concluding remarks.
2. RELATED WORK
The idea of using hyperlink analysis for ranking web search
results arose around 1997, and manifested itself in the HITS
[16, 17] and PageRank [5, 21] algorithms. The popularity
of these two algorithms and the phenomenal success of the
Google search engine, which uses PageRank, have spawned
a large amount of subsequent research.
There are numerous attempts at improving the
effectiveness of HITS and PageRank. Query-dependent link-based
ranking algorithms inspired by HITS include SALSA [19],
Randomized HITS [20], and PHITS [7], to name a few.
Query-independent link-based ranking algorithms inspired
by PageRank include TrafficRank [22], BlockRank [14], and
TrustRank [11], and many others.
Another line of research is concerned with analyzing the
mathematical properties of HITS and PageRank. For
example, Borodin et al. [3] investigated various theoretical
properties of PageRank, HITS, SALSA, and PHITS, including
their similarity and stability, while Bianchini et al. [2]
studied the relationship between the structure of the web graph
and the distribution of PageRank scores, and Langville and
Meyer examined basic properties of PageRank such as
existence and uniqueness of an eigenvector and convergence of
power iteration [18].
Given the attention that has been paid to improving the
effectiveness of PageRank and HITS, and the thorough
studies of the mathematical properties of these algorithms, it is
somewhat surprising that very few evaluations of their
effectiveness have been published. We are aware of two studies
that have attempted to formally evaluate the effectiveness of
HITS and of PageRank. Amento et al. [1] employed
quantitative measures, but based their experiments on the result
sets of just 5 queries and the web-graph induced by topical
crawls around the result set of each query. A more recent
study by Borodin et al. [4] is based on 34 queries, result sets
of 200 pages per query obtained from Google, and a
neighborhood graph derived by retrieving 50 in-links per result
from Google. By contrast, our study is based on over 28,000
queries and a web graph covering 2.9 billion URLs.
3. OUR DATA SETS
Our evaluation is based on two data sets: a large web
graph and a substantial set of queries with associated results,
some of which were labeled by human judges.
Our web graph is based on a web crawl that was
conducted in a breadth-first-search fashion, and successfully
retrieved 463,685,607 HTML pages. These pages contain
17,672,011,890 hyperlinks (after eliminating duplicate
hyperlinks embedded in the same web page), which refer to
a total of 2,897,671,002 URLs. Thus, at the end of the
crawl there were 2,433,985,395 URLs in the frontier set
of the crawler that had been discovered, but not yet
downloaded. The mean out-degree of crawled web pages is 38.11;
the mean in-degree of discovered pages (whether crawled or
not) is 6.10. Also, it is worth pointing out that there is a
lot more variance in in-degrees than in out-degrees; some
popular pages have millions of incoming links. As we will
see, this property affects the computational cost of HITS.
Our query set was produced by sampling 28,043 queries
from the MSN Search query log, and retrieving a total of
66,846,214 result URLs for these queries (using commercial
search engine technology), or about 2,838 results per query
on average. It is important to point out that our 2.9 billion
URL web graph does not cover all these result URLs. In
fact, only 9,525,566 of the result URLs (about 14.25%) were
covered by the graph.
485,656 of the results in the query set (about 0.73% of
all results, or about 17.3 results per query) were rated by
human judges as to their relevance to the given query, and
labeled on a six-point scale (the labels being definitive,
excellent, good, fair, bad and detrimental).
Results were selected for judgment based on their commercial
search engine placement; in other words, the subset of
labeled results is not random, but biased towards documents
considered relevant by pre-existing ranking algorithms.
Involving a human in the evaluation process is extremely
cumbersome and expensive; however, human judgments are
crucial for the evaluation of search engines. This is so
because no document features have been found yet that can
effectively estimate the relevance of a document to a user
query. Since content-match features are very unreliable (and
even more so link features, as we will see) we need to ask
a human to evaluate the results in order to compare the
quality of features.
Evaluating the retrieval results from document scores and
human judgments is not trivial and has been the subject of
many investigations in the IR community. A good
performance measure should correlate with user satisfaction,
taking into account that users will dislike having to delve deep
in the results to find relevant documents. For this reason,
standard correlation measures (such as the correlation
coefficient between the score and the judgment of a document),
or order correlation measures (such as Kendall tau between
the score and judgment induced orders) are not adequate.
4. MEASURING PERFORMANCE
In this study, we quantify the effectiveness of various
ranking algorithms using three measures: NDCG, MRR, and
MAP.
The normalized discounted cumulative gains (NDCG)
measure [13] discounts the contribution of a document to the
overall score as the document's rank increases (assuming
that the best document has the lowest rank). Such a
measure is particularly appropriate for search engines, as studies
have shown that search engine users rarely consider anything
beyond the first few results [12]. NDCG values are
normalized to be between 0 and 1, with 1 being the NDCG of a
perfect ranking scheme that completely agrees with the
assessment of the human judges. The discounted
cumulative gain at a particular rank-threshold T (DCG@T) is defined to be $DCG@T = \sum_{j=1}^{T} \frac{2^{r(j)} - 1}{\log(1+j)}$, where r(j) is the rating (0=detrimental, 1=bad, 2=fair, 3=good, 4=excellent, and 5=definitive) at rank j. The NDCG is computed by
dividing the DCG of a ranking by the highest possible DCG
that can be obtained for that query. Finally, the NDCGs of
all queries in the query set are averaged to produce a mean
NDCG.
The reciprocal rank (RR) of the ranked result set of a
query is defined to be the reciprocal value of the rank of the
highest-ranking relevant document in the result set. The RR
at rank-threshold T is defined to be 0 if none of the
highest-ranking T documents is relevant. The mean reciprocal rank
(MRR) of a query set is the average reciprocal rank of all
queries in the query set.
Given a ranked set of n results, let rel(i) be 1 if the result at rank i is relevant and 0 otherwise. The precision P(j) at rank j is defined to be $P(j) = \frac{1}{j} \sum_{i=1}^{j} rel(i)$, i.e. the fraction of the relevant results among the j highest-ranking results. The average precision (AP) at rank-threshold k is defined to be $AP@k = \frac{\sum_{i=1}^{k} P(i)\, rel(i)}{\sum_{i=1}^{n} rel(i)}$. The mean average precision (MAP) of a query set is the mean of the average precisions of all queries in the query set.
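For concreteness, the following sketch computes the three measures for a single query from its ranked judgments; treating unlabeled documents as rating 0 / non-relevant follows the discussion below, and the base of the logarithm in the DCG is an assumption (natural log here).

```python
# Per-query sketches of DCG/NDCG, reciprocal rank, and average precision at a rank-threshold.
import math

def dcg_at(ratings, t):
    return sum((2 ** r - 1) / math.log(1 + j) for j, r in enumerate(ratings[:t], start=1))

def ndcg_at(ratings, t):
    ideal = dcg_at(sorted(ratings, reverse=True), t)
    return dcg_at(ratings, t) / ideal if ideal > 0 else 0.0

def rr_at(relevant, t):
    for j, rel in enumerate(relevant[:t], start=1):
        if rel:
            return 1.0 / j
    return 0.0

def ap_at(relevant, k):
    total = sum(relevant)                       # number of relevant results in the full list
    if total == 0:
        return 0.0
    hits, score = 0, 0.0
    for i, rel in enumerate(relevant[:k], start=1):
        if rel:
            hits += 1
            score += hits / i                   # P(i) at a rank where rel(i) = 1
    return score / total
```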
The above definitions of MRR and MAP rely on the notion
of a relevant result. We investigated two definitions of
relevance: One where all documents rated fair or better were
deemed relevant, and one were all documents rated good
or better were deemed relevant. For reasons of space, we
only report MAP and MRR values computed using the
latter definition; using the former definition does not change
the qualitative nature of our findings. Similarly, we
computed NDCG, MAP, and MRR values for a wide range of
rank-thresholds; we report results here at rank 10; again,
changing the rank-threshold never led us to different
conclusions.
Recall that over 99% of documents are unlabeled. We
chose to treat all these documents as irrelevant to the query.
For some queries, however, not all relevant documents have
been judged. This introduces a bias into our evaluation:
features that bring new documents to the top of the rank
may be penalized. This will be more acute for features less
correlated to the pre-existing commercial ranking algorithms
used to select documents for judgment. On the other hand,
most queries have few perfect relevant documents (i.e. home
page or item searches) and they will most often be within
the judged set.
5. COMPUTING PAGERANK ON A LARGE
WEB GRAPH
PageRank is a query-independent measure of the
importance of web pages, based on the notion of peer-endorsement:
A hyperlink from page A to page B is interpreted as an
endorsement of page B's content by page A's author. The
following recursive definition captures this notion of
endorsement:
$R(v) = \sum_{(u,v) \in E} \frac{R(u)}{Out(u)}$
where R(v) is the score (importance) of page v, (u, v) is an
edge (hyperlink) from page u to page v contained in the
edge set E of the web graph, and Out(u) is the out-degree
(number of embedded hyperlinks) of page u. However, this
definition suffers from a severe shortcoming: In the
fixed point of this recursive equation, only pages that are part of
a strongly-connected component receive a non-zero score. In
order to overcome this deficiency, Page et al. grant each page
a guaranteed minimum score, giving rise to the definition
of standard PageRank:
$R(v) = \frac{d}{|V|} + (1 - d) \sum_{(u,v) \in E} \frac{R(u)}{Out(u)}$
where |V | is the size of the vertex set (the number of known
web pages), and d is a damping factor, typically set to be
between 0.1 and 0.2.
Assuming that scores are normalized to sum up to 1,
PageRank can be viewed as the stationary probability
distribution of a random walk on the web graph, where at each
step of the walk, the walker with probability 1 − d moves
from its current node u to a neighboring node v, and with
probability d selects a node uniformly at random from all
nodes in the graph and jumps to it. In the limit, the random
walker is at node v with probability R(v).
One issue that has to be addressed when implementing
PageRank is how to deal with sink nodes, nodes that do
not have any outgoing links. One possibility would be to
select another node uniformly at random and transition to
it; this is equivalent to adding edges from each sink nodes
to all other nodes in the graph. We chose the alternative
approach of introducing a single phantom node. Each sink
node has an edge to the phantom node, and the phantom
node has an edge to itself.
In practice, PageRank scores can be computed using power
iteration. Since PageRank is query-independent, the
computation can be performed off-line ahead of query time. This
property has been key to PageRank's success, since it is a
challenging engineering problem to build a system that can
perform any non-trivial computation on the web graph at
query time.
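The in-memory sketch below illustrates the definition of standard PageRank and the phantom-node treatment of sink nodes; it is only meant to mirror the formula above, not the distributed implementation described next.

```python
# Sketch of standard PageRank by power iteration (d = damping factor, here 0.15).
def pagerank(out_links, d=0.15, iterations=50):
    """out_links: dict mapping every node to the list of nodes it links to
    (sink nodes map to an empty list)."""
    nodes = list(out_links)
    n = len(nodes)
    scores = dict.fromkeys(nodes, 1.0 / n)
    for _ in range(iterations):
        new = dict.fromkeys(nodes, d / n)
        for u in nodes:
            targets = out_links[u]
            if not targets:
                continue   # sink node: its outgoing mass flows to the phantom node and stays there
            share = (1 - d) * scores[u] / len(targets)
            for v in targets:
                new[v] += share
        scores = new
    return scores
```

In this simplified version the mass flowing into the phantom node is simply dropped, so the remaining scores need not sum to one; the construction above instead keeps the phantom node as an explicit graph node with a self-edge.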
In order to compute PageRank scores for all 2.9 billion
nodes in our web graph, we implemented a distributed
version of PageRank. The computation consists of two distinct
phases. In the first phase, the link files produced by the web
crawler, which contain page URLs and their associated link
URLs in textual form, are partitioned among the machines
in the cluster used to compute PageRank scores, and
converted into a more compact format along the way.
Specifically, URLs are partitioned across the machines in the
cluster based on a hash of the URLs" host component, and each
machine in the cluster maintains a table mapping the URL
to a 32-bit integer. The integers are drawn from a densely
packed space, so as to make suitable indices into the array
that will later hold the PageRank scores. The system then
translates our log of pages and their associated hyperlinks
into a compact representation where both page URLs and
link URLs are represented by their associated 32-bit
integers. Hashing the host component of the URLs guarantees
that all URLs from the same host are assigned to the same
machine in our scoring cluster. Since over 80% of all
hyperlinks on the web are relative (that is, are between two pages
on the same host), this property greatly reduces the amount
of network communication required by the second stage of
the distributed scoring computation.
The second phase performs the actual PageRank power
iteration. Both the link data and the current PageRank
vector reside on disk and are read in a streaming fashion;
while the new PageRank vector is maintained in memory.
We represent PageRank scores as 64-bit floating point
numbers. PageRank contributions to pages assigned to remote
machines are streamed to the remote machine via a TCP
connection.
We used a three-machine cluster, each machine equipped
with 16 GB of RAM, to compute standard PageRank scores
for all 2.9 billion URLs that were contained in our web
graph. We used a damping factor of 0.15, and performed 200
power iterations. Starting at iteration 165, the L∞ norm of
the change in the PageRank vector from one iteration to the
next had stopped decreasing, indicating that we had reached
as much of a fixed point as the limitations of 64-bit floating
point arithmetic would allow.
[Figure 1: Effectiveness of authority scores computed using different parameterizations of HITS. Three panels plot NDCG@10, MAP@10, and MRR@10 against the number of back-links sampled per result (1 to 100), with one curve per link-selection scheme (hits-aut-all, hits-aut-ih, hits-aut-id).]
A post-processing phase uses the final PageRank vectors
(one per machine) and the table mapping URLs to 32-bit
integers (representing indices into each PageRank vector) to
score the result URL in our query log. As mentioned above,
our web graph covered 9,525,566 of the 66,846,214 result
URLs. These URLs were annotated with their computed
PageRank score; all other URLs received a score of 0.
6. HITS
HITS, unlike PageRank, is a query-dependent ranking
algorithm. HITS (which stands for Hypertext Induced Topic
Search) is based on the following two intuitions: First,
hyperlinks can be viewed as topical endorsements: A hyperlink
from a page u devoted to topic T to another page v is likely
to endorse the authority of v with respect to topic T. Second,
the result set of a particular query is likely to have a certain
amount of topical coherence. Therefore, it makes sense to
perform link analysis not on the entire web graph, but rather
on just the neighborhood of pages contained in the result
set, since this neighborhood is more likely to contain
topically relevant links. But while the set of nodes immediately
reachable from the result set is manageable (given that most
pages have only a limited number of hyperlinks embedded
into them), the set of pages immediately leading to the result
set can be enormous. For this reason, Kleinberg suggests
sampling a fixed-size random subset of the pages linking to
any high-indegree page in the result set. Moreover,
Kleinberg suggests considering only links that cross host
boundaries, the rationale being that links between pages on the
same host (intrinsic links) are likely to be navigational or
nepotistic and not topically relevant.
Given a web graph (V, E) with vertex set V and edge
set E ⊆ V × V , and the set of result URLs to a query
(called the root set R ⊆ V ) as input, HITS computes a
neighborhood graph consisting of a base set B ⊆ V (the
root set and some of its neighboring vertices) and some of
the edges in E induced by B. In order to formalize the
definition of the neighborhood graph, it is helpful to first
introduce a sampling operator and the concept of a
link-selection predicate.
Given a set A, the notation $S_n[A]$ denotes a sample of n elements drawn uniformly at random from A; $S_n[A] = A$ if $|A| \leq n$.
A link-selection predicate P takes an edge (u, v) ∈ E. In this study, we use the following three link-selection predicates:
all(u, v) ⇔ true
ih(u, v) ⇔ host(u) ≠ host(v)
id(u, v) ⇔ domain(u) ≠ domain(v)
where host(u) denotes the host of URL u, and domain(u)
denotes the domain of URL u. So, all is true for all links,
whereas ih is true only for inter-host links, and id is true
only for inter-domain links.
The outlinked-set $O^P$ of the root set R w.r.t. a link-selection predicate P is defined to be:
$O^P = \bigcup_{u \in R} \{v \in V : (u, v) \in E \wedge P(u, v)\}$
The inlinking-set $I^P_s$ of the root set R w.r.t. a link-selection predicate P and a sampling value s is defined to be:
$I^P_s = \bigcup_{v \in R} S_s[\{u \in V : (u, v) \in E \wedge P(u, v)\}]$
The base set $B^P_s$ of the root set R w.r.t. P and s is defined to be:
$B^P_s = R \cup I^P_s \cup O^P$
The neighborhood graph $(B^P_s, N^P_s)$ has the base set $B^P_s$ as its vertex set and an edge set $N^P_s$ containing those edges in E that are covered by $B^P_s$ and permitted by P:
$N^P_s = \{(u, v) \in E : u \in B^P_s \wedge v \in B^P_s \wedge P(u, v)\}$
To simplify notation, we write B to denote $B^P_s$, and N to denote $N^P_s$.
For each node u in the neighborhood graph, HITS
computes two scores: an authority score A(u), estimating how
authoritative u is on the topic induced by the query, and a
hub score H(u), indicating whether u is a good reference to
many authoritative pages. This is done using the following
algorithm:
1. For all $u \in B$ do $H(u) := \sqrt{1/|B|}$, $A(u) := \sqrt{1/|B|}$.
2. Repeat until H and A converge:
   (a) For all $v \in B$: $A'(v) := \sum_{(u,v) \in N} H(u)$
   (b) For all $u \in B$: $H'(u) := \sum_{(u,v) \in N} A'(v)$
   (c) $H := \|H'\|_2$, $A := \|A'\|_2$
where $\|X\|_2$ normalizes the vector X to unit length in Euclidean space, i.e. the squares of its elements sum up to 1.
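A compact sketch of this iteration is given below; it represents the neighborhood graph by its edge list N and approximates convergence with a fixed number of iterations.

```python
# Sketch of the HITS iteration over a neighborhood graph (B, N).
import math

def hits(base_set, edges, iterations=100):
    start = math.sqrt(1.0 / len(base_set))
    hub = dict.fromkeys(base_set, start)
    auth = dict.fromkeys(base_set, start)
    for _ in range(iterations):
        new_auth = dict.fromkeys(base_set, 0.0)
        for u, v in edges:                      # A'(v) := sum of H(u) over edges (u, v)
            new_auth[v] += hub[u]
        new_hub = dict.fromkeys(base_set, 0.0)
        for u, v in edges:                      # H'(u) := sum of A'(v) over edges (u, v)
            new_hub[u] += new_auth[v]
        auth = _unit_length(new_auth)
        hub = _unit_length(new_hub)
    return auth, hub

def _unit_length(vector):
    norm = math.sqrt(sum(x * x for x in vector.values())) or 1.0
    return {k: x / norm for k, x in vector.items()}
```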
In practice, implementing a system that can compute HITS
within the time constraints of a major search engine (where
the peak query load is in the thousands of queries per second,
and the desired query response time is well below one
second) is a major engineering challenge. Among other things,
the web graph cannot reasonably be stored on disk, since
[Figure 2: Effectiveness of different features. Bar charts of NDCG@10, MAP@10, and MRR@10 for each isolated feature; bm25f scores highest (0.221, 0.100, and 0.273 respectively), followed by the in-degree, HITS authority, and PageRank features, with the HITS hub, out-degree, and random features trailing.]
seek times of modern hard disks are too slow to retrieve the
links within the time constraints, and the graph does not fit
into the main memory of a single machine, even when using
the most aggressive compression techniques.
In order to experiment with HITS and other
query-dependent link-based ranking algorithms that require non-regular
accesses to arbitrary nodes and edges in the web graph, we
implemented a system called the Scalable Hyperlink Store,
or SHS for short. SHS is a special-purpose database,
distributed over an arbitrary number of machines that keeps a
highly compressed version of the web graph in memory and
allows very fast lookup of nodes and edges. On our
hardware, it takes an average of 2 microseconds to map a URL
to a 64-bit integer handle called a UID, 15 microseconds to
look up all incoming or outgoing link UIDs associated with
a page UID, and 5 microseconds to map a UID back to a
URL (the last functionality not being required by HITS).
The RPC overhead is about 100 microseconds, but the SHS
API allows many lookups to be batched into a single RPC
request.
We implemented the HITS algorithm using the SHS
infrastructure. We compiled three SHS databases, one
containing all 17.6 billion links in our web graph (all), one
containing only links between pages that are on different hosts
(ih, for inter-host), and one containing only links between
pages that are on different domains (id). We consider two
URLs to belong to different hosts if the host portions of the
URLs differ (in other words, we make no attempt to
determine whether two distinct symbolic host names refer to
the same computer), and we consider a domain to be the
name purchased from a registrar (for example, we consider
news.bbc.co.uk and www.bbc.co.uk to be different hosts
belonging to the same domain). Using each of these databases,
we computed HITS authority and hub scores for various
parameterizations of the sampling operator S, sampling
between 1 and 100 back-links of each page in the root set.
Result URLs that were not covered by our web graph
automatically received authority and hub scores of 0, since they
were not connected to any other nodes in the neighborhood
graph and therefore did not receive any endorsements.
We performed forty-five different HITS computations, each
combining one of the three link selection predicates (all, ih,
and id) with a sampling value. For each combination, we
loaded one of the three databases into an SHS system
running on six machines (each equipped with 16 GB of RAM),
and computed HITS authority and hub scores, one query
at a time. The longest-running combination (using the all
database and sampling 100 back-links of each root set
vertex) required 30,456 seconds to process the entire query set,
or about 1.1 seconds per query on average.
7. EXPERIMENTAL RESULTS
For a given query Q, we need to rank the set of documents
satisfying Q (the result set of Q). Our hypothesis is that
good features should be able to rank relevant documents in
this set higher than non-relevant ones, and this should result
in an increase in each performance measure over the query
set. We are specifically interested in evaluating the
usefulness of HITS and other link-based features. In principle, we
could do this by sorting the documents in each result set by
their feature value, and compare the resulting NDCGs. We
call this ranking with isolated features.
Let us first examine the relative performance of the
different parameterizations of the HITS algorithm we
examined. Recall that we computed HITS for each combination
of three link section schemes - all links (all), inter-host links
only (ih), and inter-domain links only (id) - with back-link
sampling values ranging from 1 to 100. Figure 1 shows the
impact of the number of sampled back-links on the retrieval
performance of HITS authority scores. Each graph is
associated with one performance measure. The horizontal axis
of each graph represents the number of sampled back-links,
the vertical axis represents performance under the
appropriate measure, and each curve depicts a link selection scheme.
The id scheme slightly outperforms ih, and both vastly
outperform the all scheme - eliminating nepotistic links pays
off. The performance of the all scheme increases as more
back-links of each root set vertex are sampled, while the
performance of the id and ih schemes peaks at between 10
and 25 samples and then plateaus or even declines,
depending on the performance measure.
Having compared different parameterizations of HITS, we
will now fix the number of sampled back-links at 100 and
compare the three link selection schemes against other
isolated features: PageRank, in-degree and out-degree
counting links of all pages, of different hosts only and of different
domains only (all, ih and id datasets respectively), and a
text retrieval algorithm exploiting anchor text: BM25F[24].
BM25F is a state-of-the-art ranking function solely based on
textual content of the documents and their associated
anchor texts. BM25F is a descendant of BM25 that combines
the different textual fields of a document, namely title, body
and anchor text. This model has been shown to be one of
the best-performing web search scoring functions over the
last few years [8, 24]. BM25F has a number of free
parameters (2 per field, 6 in our case); we used the parameter values
described in [24].
[Figure 3: Effectiveness measures for linear combinations of link-based features with BM25F. Three bar charts report NDCG@10, MAP@10, and MRR@10 for BM25F combined with each link-based feature (degree-in-*, hits-aut-*, pagerank, degree-out-*, hits-hub-*), together with the BM25F baseline as the right-most bar; the best combinations reach about .341 (NDCG@10), .152 (MAP@10), and .398 (MRR@10), versus .231, .100, and .273 for BM25F alone.]
Figure 2 shows the NDCG, MRR, and MAP measures
of these features. Again all performance measures (and
for all rank-thresholds we explored) agree. As expected,
BM25F outperforms all link-based features by a large
margin. The link-based features are divided into two groups,
with a noticeable performance drop between the groups.
The better-performing group consists of the features that
are based on the number and/or quality of incoming links
(in-degree, PageRank, and HITS authority scores); and the
worse-performing group consists of the features that are
based on the number and/or quality of outgoing links
(outdegree and HITS hub scores). In the group of features based
on incoming links, features that ignore nepotistic links
perform better than their counterparts using all links.
Moreover, using only inter-domain (id) links seems to be marginally
better than using inter-host (ih) links.
The fact that features based on outgoing links
underperform those based on incoming links matches our
expectations; if anything, it is mildly surprising that outgoing links
provide a useful signal for ranking at all. On the other
hand, the fact that in-degree features outperform PageRank
under all measures is quite surprising. A possible
explanation is that link-spammers have been targeting the published
PageRank algorithm for many years, and that this has led
to anomalies in the web graph that affect PageRank, but
not other link-based features that explore only a distance-1
neighborhood of the result set. Likewise, it is surprising that
simple query-independent features such as in-degree, which
might estimate global quality but cannot capture relevance
to a query, would outperform query-dependent features such
as HITS authority scores.
However, we cannot investigate the effect of these features
in isolation, without regard to the overall ranking function,
for several reasons. First, features based on the textual
content of documents (as opposed to link-based features) are
the best predictors of relevance. Second, link-based features
can be strongly correlated with textual features for several
reasons, mainly the correlation between in-degree and number of textual anchor matches.

Feature        Transform function
bm25f          T(s) = s
pagerank       T(s) = log(s + 3 · 10^-12)
degree-in-*    T(s) = log(s + 3 · 10^-2)
degree-out-*   T(s) = log(s + 3 · 10^3)
hits-aut-*     T(s) = log(s + 3 · 10^-8)
hits-hub-*     T(s) = log(s + 3 · 10^-1)

Table 1: Near-optimal feature transform functions.
Therefore, one must consider the effect of link-based
features in combination with textual features. Otherwise, we
may find a link-based feature that is very good in isolation
but is strongly correlated with textual features and results
in no overall improvement; and vice versa, we may find a
link-based feature that is weak in isolation but significantly
improves overall performance.
For this reason, we have studied the combination of the
link-based features above with BM25F. All feature
combinations were done by considering the linear combination of two
features as a document score, using the formula score(d) = \sum_{i=1}^{n} wi Ti(Fi(d)), where d is a document (or document-query pair, in the case of BM25F), Fi(d) (for 1 ≤ i ≤ n) is a
feature extracted from d, Ti is a transform, and wi is a free
scalar weight that needs to be tuned. We chose transform
functions that we empirically determined to be well-suited.
Table 1 shows the chosen transform functions.
This type of linear combination is appropriate if we
assume features to be independent with respect to relevance
and an exponential model for link features, as discussed
in [8]. We tuned the weights by selecting a random
subset of 5,000 queries from the query set, used an iterative
refinement process to find weights that maximized a given
performance measure on that training set, and used the
remaining 23,043 queries to measure the performance of the
thus derived scoring functions.
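A minimal sketch of the scoring and tuning procedure just described. The transform constants follow Table 1; the single-weight grid search stands in for the iterative refinement (which is not spelled out here), and `evaluate` is an assumed callback returning the chosen performance measure on a query set.

```python
import math

TRANSFORMS = {                       # transform functions from Table 1
    "bm25f":      lambda s: s,
    "pagerank":   lambda s: math.log(s + 3e-12),
    "degree-in":  lambda s: math.log(s + 3e-2),
    "degree-out": lambda s: math.log(s + 3e3),
    "hits-aut":   lambda s: math.log(s + 3e-8),
    "hits-hub":   lambda s: math.log(s + 3e-1),
}

def combined_score(features, link_feature, w):
    """Pairwise combination: T_bm25f(F_bm25f(d)) + w * T_link(F_link(d));
    only the relative weight w matters for ranking."""
    return (TRANSFORMS["bm25f"](features["bm25f"])
            + w * TRANSFORMS[link_feature](features[link_feature]))

def tune_weight(train_queries, link_feature, evaluate, grid=None):
    """Pick the weight maximizing the performance measure on the training queries."""
    grid = grid if grid is not None else [x / 10.0 for x in range(51)]  # assumed grid
    best_w, best_perf = None, float("-inf")
    for w in grid:
        perf = evaluate(train_queries, lambda feats: combined_score(feats, link_feature, w))
        if perf > best_perf:
            best_w, best_perf = w, perf
    return best_w, best_perf
```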
We explored the pairwise combination of BM25F with
every link-based scoring function. Figure 3 shows the NDCG,
MRR, and MAP measures of these feature combinations,
together with a baseline BM25F score (the right-most bar
in each graph), which was computed using the same subset
of 23,043 queries that were used as the test set for the
feature combinations. Regardless of the performance measure
applied, we can make the following general observations:
1. Combining any of the link-based features with BM25F
results in a substantial performance improvement over
BM25F in isolation.
2. The combination of BM25F with features based on
incoming links (PageRank, in-degree, and HITS
authority scores) performs substantially better than the
combination with features based on outgoing links (HITS
hub scores and out-degree).
3. The performance differences between the various combinations of BM25F with features based on incoming links are comparatively small, and the relative ordering of feature combinations is fairly stable across the different performance measures used.

[Figure 4: Effectiveness measures for selected isolated features, broken down by query specificity. The panel shown plots MAP@10 against query IDF-sum buckets (0 to 24) for bm25fnorm, pagerank, degree-in-id, and hits-aut-id-100; the figures 26, 374, 1640, 2751, 3768, 4284, 3944, 3001, 2617, 1871, 1367, 771, and 1629 above the buckets give the number of queries per bucket.]

However, the
combination of BM25F with any in-degree variant, and in
particular with id in-degree, consistently outperforms
the combination of BM25F with PageRank or HITS
authority scores, and can be computed much easier
and faster.
Finally, we investigated whether certain features are
better for some queries than for others. Particularly, we are
interested in the relationship between the specificity of a query
and the performance of different ranking features. The most
straightforward measure of the specificity of a query Q would
be the number of documents in a search engine's corpus that
satisfy Q. Unfortunately, the query set available to us did
not contain this information. Therefore, we chose to
approximate the specificity of Q by summing up the inverse
document frequencies of the individual query terms
comprising Q. The inverse document frequency (IDF) of a term
t with respect to a corpus C is defined to be log(N/doc(t)),
where doc(t) is the number of documents in C containing t
and N is the total number of documents in C. By summing
up the IDFs of the query terms, we make the (flawed)
assumption that the individual query terms are independent of
each other. However, while not perfect, this approximation
is at least directionally correct.
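The IDF-sum approximation of query specificity is straightforward to compute; a small sketch follows (the logarithm base and the bucket width of 2 are incidental choices made here to match the intervals reported below):

```python
import math

def query_specificity(query_terms, doc_freq, num_docs):
    """Sum of per-term IDFs, log(N / doc(t)); assumes term independence."""
    return sum(math.log(num_docs / doc_freq[t])
               for t in query_terms if doc_freq.get(t, 0) > 0)

def specificity_bucket(specificity, width=2.0):
    """Map an IDF sum to an interval such as [12, 14)."""
    lo = width * math.floor(specificity / width)
    return (lo, lo + width)
```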
We broke down our query set into 13 buckets, each bucket
associated with an interval of query IDF values, and we
computed performance metrics for all ranking functions applied
(in isolation) to the queries in each bucket. In order to
keep the graphs readable, we will not show the performance
of all the features, but rather restrict ourselves to the four
most interesting ones: PageRank, id HITS authority scores,
id in-degree, and BM25F. Figure 4 shows the MAP@10 for
all 13 query specificity buckets. Buckets on the far left of
each graph represent very general queries; buckets on the far
right represent very specific queries. The figures on the
upper x axis of each graph show the number of queries in each
bucket (e.g. the right-most bucket contains 1,629 queries).
BM25F performs best for medium-specific queries, peaking
at the buckets representing the IDF sum interval [12,14).
By comparison, HITS peaks at the bucket representing the
IDF sum interval [4,6), and PageRank and in-degree peak at
the bucket representing the interval [6,8), i.e. more general
queries.
8. CONCLUSIONS AND FUTURE WORK
This paper describes a large-scale evaluation of the
effectiveness of HITS in comparison with other link-based
ranking algorithms, in particular PageRank and in-degree,
when applied in isolation or in combination with a text
retrieval algorithm exploiting anchor text (BM25F).
Evaluation is carried out with respect to a large number of human
evaluated queries, using three different measures of
effectiveness: NDCG, MRR, and MAP. Evaluating link-based
features in isolation, we found that web page in-degree
outperforms PageRank, and is about as effective as HITS
authority scores. HITS hub scores and web page out-degree are
much less effective ranking features, but still outperform a
random ordering. A linear combination of any link-based
features with BM25F produces a significant improvement in
performance, and there is a clear difference between
combining BM25F with a feature based on incoming links
(indegree, PageRank, or HITS authority scores) and a feature
based on outgoing links (HITS hub scores and out-degree),
but within those two groups the precise choice of link-based
feature matters relatively little.
We believe that the measurements presented in this paper
provide a solid evaluation of the best well-known link-based
ranking schemes. There are many possible variants of these
schemes, and many other link-based ranking algorithms have
been proposed in the literature, hence we do not claim this
work to be the last word on this subject, but rather the
first step on a long road. Future work includes evaluation
of different parameterizations of PageRank and HITS. In
particular, we would like to study the impact of changes
to the PageRank damping factor on effectiveness, the
impact of various schemes meant to counteract the effects of
link spam, and the effect of weighing hyperlinks differently
depending on whether they are nepotistic or not. Going
beyond PageRank and HITS, we would like to measure the
effectiveness of other link-based ranking algorithms, such as
SALSA. Finally, we are planning to experiment with more
complex feature combinations.
9. REFERENCES
[1] B. Amento, L. Terveen, and W. Hill. Does authority
mean quality? Predicting expert quality ratings of
web documents. In Proc. of the 23rd Annual
International ACM SIGIR Conference on Research
and Development in Information Retrieval, pages
296-303, 2000.
[2] M. Bianchini, M. Gori, and F. Scarselli. Inside
PageRank. ACM Transactions on Internet Technology,
5(1):92-128, 2005.
[3] A. Borodin, G. O. Roberts, and J. S. Rosenthal.
Finding authorities and hubs from link structures on
the World Wide Web. In Proc. of the 10th
International World Wide Web Conference, pages
415-429, 2001.
[4] A. Borodin, G. O. Roberts, J. S. Rosenthal, and
P. Tsaparas. Link analysis ranking: algorithms,
theory, and experiments. ACM Transactions on
Internet Technology, 5(1):231-297, 2005.
[5] S. Brin and L. Page. The anatomy of a large-scale
hypertextual Web search engine. Computer Networks
and ISDN Systems, 30(1-7):107-117, 1998.
[6] C. Burges, T. Shaked, E. Renshaw, A. Lazier,
M. Deeds, N. Hamilton, and G. Hullender. Learning
to rank using gradient descent. In Proc. of the 22nd
International Conference on Machine Learning, pages
89-96, New York, NY, USA, 2005. ACM Press.
[7] D. Cohn and H. Chang. Learning to probabilistically
identify authoritative documents. In Proc. of the 17th
International Conference on Machine Learning, pages
167-174, 2000.
[8] N. Craswell, S. Robertson, H. Zaragoza, and
M. Taylor. Relevance weighting for query independent
evidence. In Proc. of the 28th Annual International
ACM SIGIR Conference on Research and
Development in Information Retrieval, pages 416-423,
2005.
[9] E. Garfield. Citation analysis as a tool in journal
evaluation. Science, 178(4060):471-479, 1972.
[10] Z. Gyöngyi and H. Garcia-Molina. Web spam
taxonomy. In 1st International Workshop on
Adversarial Information Retrieval on the Web, 2005.
[11] Z. Gyöngyi, H. Garcia-Molina, and J. Pedersen.
Combating web spam with TrustRank. In Proc. of the
30th International Conference on Very Large
Databases, pages 576-587, 2004.
[12] B. J. Jansen, A. Spink, J. Bateman, and T. Saracevic.
Real life information retrieval: a study of user queries
on the web. ACM SIGIR Forum, 32(1):5-17, 1998.
[13] K. Järvelin and J. Kekäläinen. Cumulated gain-based
evaluation of IR techniques. ACM Transactions on
Information Systems, 20(4):422-446, 2002.
[14] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and
G. H. Golub. Extrapolation methods for accelerating
PageRank computations. In Proc. of the 12th
International World Wide Web Conference, pages
261-270, 2003.
[15] M. M. Kessler. Bibliographic coupling between
scientific papers. American Documentation,
14(1):10-25, 1963.
[16] J. M. Kleinberg. Authoritative sources in a
hyperlinked environment. In Proc. of the 9th Annual
ACM-SIAM Symposium on Discrete Algorithms, pages
668-677, 1998.
[17] J. M. Kleinberg. Authoritative sources in a
hyperlinked environment. Journal of the ACM,
46(5):604-632, 1999.
[18] A. N. Langville and C. D. Meyer. Deeper inside
PageRank. Internet Mathematics, 1(3):335-380, 2005.
[19] R. Lempel and S. Moran. The stochastic approach for
link-structure analysis (SALSA) and the TKC effect.
Computer Networks and ISDN Systems,
33(1-6):387-401, 2000.
[20] A. Y. Ng, A. X. Zheng, and M. I. Jordan. Stable
algorithms for link analysis. In Proc. of the 24th
Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval,
pages 258-266, 2001.
[21] L. Page, S. Brin, R. Motwani, and T. Winograd. The
PageRank citation ranking: Bringing order to the
web. Technical report, Stanford Digital Library
Technologies Project, 1998.
[22] J. A. Tomlin. A new paradigm for ranking pages on
the World Wide Web. In Proc. of the 12th
International World Wide Web Conference, pages
350-355, 2003.
[23] T. Upstill, N. Craswell, and D. Hawking. Predicting
fame and fortune: Pagerank or indegree? In Proc. of
the Australasian Document Computing Symposium,
pages 31-40, 2003.
[24] H. Zaragoza, N. Craswell, M. Taylor, S. Saria, and
S. Robertson. Microsoft Cambridge at TREC-13:
Web and HARD tracks. In Proc. of the 13th Text
Retrieval Conference, 2004. | mean reciprocal rank;scale and relevance;quantitative measure;hit;breadth-first search crawl;mean average precision;link-based feature;rank;ranking;map;link graph;pagerank;ndcg;mrr;normalized discounted cumulative gain measurement;feature selection;query specificity;bm25f;crawled web page;hyperlink analysis |
train_H-42 | HITS Hits TREC: Exploring IR Evaluation Results with Network Analysis | We propose a novel method of analysing data gathered from TREC or similar information retrieval evaluation experiments. We define two normalized versions of average precision, that we use to construct a weighted bipartite graph of TREC systems and topics. We analyze the meaning of well known - and somewhat generalized - indicators from social network analysis on the Systems-Topics graph. We apply this method to an analysis of TREC 8 data; among the results, we find that authority measures systems performance, that hubness of topics reveals that some topics are better than others at distinguishing more or less effective systems, that with current measures a system that wants to be effective in TREC needs to be effective on easy topics, and that by using different effectiveness measures this is no longer the case. | 1. INTRODUCTION
Evaluation is a primary concern in the Information
Retrieval (IR) field. TREC (Text REtrieval Conference) [12,
15] is an annual benchmarking exercise that has become a
de facto standard in IR evaluation: before the actual
conference, TREC provides to participants a collection of
documents and a set of topics (representations of information
needs). Participants use their systems to retrieve, and
submit to TREC, a list of documents for each topic. After the
lists have been submitted and pooled, the TREC organizers
employ human assessors to provide relevance judgements on
the pooled set. This defines a set of relevant documents for
each topic. System effectiveness is then measured by well
established metrics (Mean Average Precision being the most
used). Other conferences such as NTCIR, INEX, CLEF
provide comparable data.
Network analysis is a discipline that studies features and
properties of (usually large) networks, or graphs. Of
particular importance is Social Network Analysis [16], that studies
networks made up by links among humans (friendship,
acquaintance, co-authorship, bibliographic citation, etc.).
Network analysis and IR fruitfully meet in Web Search
Engine implementation, as is already described in textbooks
[3,6]. Current search engines use link analysis techniques to
help rank the retrieved documents. Some indicators (and
the corresponding algorithms that compute them) have been
found useful in this respect, and are nowadays well known:
Inlinks (the number of links to a Web page), PageRank [7],
and HITS (Hyperlink-Induced Topic Search) [5]. Several
extensions to these algorithms have been and are being
proposed. These indicators and algorithms might be quite
general in nature, and can be used for applications which are
very different from search result ranking. One example is
using HITS for stemming, as described by Agosti et al. [1].
In this paper, we propose and demonstrate a method
for constructing a network, specifically a weighted complete
bidirectional directed bipartite graph, on a set of TREC
topics and participating systems. Links represent effectiveness
measurements on system-topic pairs. We then apply
analysis methods originally developed for search applications to
the resulting network. This reveals phenomena previously
hidden in TREC data. In passing, we also provide a small
generalization to Kleinberg's HITS algorithm, as well as to
Inlinks and PageRank.
The paper is organized as follows: Sect. 2 gives some
motivations for the work. Sect. 3 presents the basic ideas of
normalizing average precision and of constructing a
systemstopics graph, whose properties are analyzed in Sect. 4; Sect. 5
presents some experiments on TREC 8 data; Sect. 6
discusses some issues and Sect. 7 closes the paper.
2. MOTIVATIONS
We are interested in the following hypotheses:
1. Some systems are more effective than others;
(a)
         t1            ...   tn            MAP
s1       AP(s1, t1)    ...   AP(s1, tn)    MAP(s1)
...      ...                 ...           ...
sm       AP(sm, t1)    ...   AP(sm, tn)    MAP(sm)
AAP      AAP(t1)       ...   AAP(tn)

(b)
         t1     t2     ...   MAP
s1       0.5    0.4    ...   0.6
s2       0.4    ...    ...   0.3
...      ...           ...   ...
AAP      0.6    0.3    ...

Table 1: AP, MAP and AAP.
2. Some topics are easier than others;
3. Some systems are better than others at distinguishing
easy and difficult topics;
4. Some topics are better than others at distinguishing
more or less effective systems.
The first of these hypotheses needs no further justification
- every reported significant difference between any two
systems supports it. There is now also quite a lot of evidence
for the second, centered on the TREC Robust Track [14].
Our primary interest is in the third and fourth. The third
might be regarded as being of purely academic interest;
however, the fourth has the potential for being of major
practical importance in evaluation studies. If we could identify
a relatively small number of topics which were really good
at distinguishing effective and ineffective systems, we could
save considerable effort in evaluating systems.
One possible direction from this point would be to attempt
direct identification of such small sets of topics. However, in
the present paper, we seek instead to explore the
relationships suggested by the hypotheses, between what different
topics tell us about systems and what different systems tell
us about topics. We seek methods of building and analysing
a matrix of system-topic normalised performances, with a
view to giving insight into the issue and confirming or
refuting the third and fourth hypotheses. It turns out that
the obvious symmetry implied by the above formulation of
the hypotheses is a property worth investigating, and the
investigation does indeed give us valuable insights.
3. THE IDEA
3.1 1st step: average precision table
From TREC results, one can produce an Average
Precision (AP) table (see Tab. 1a): each AP(si, tj) value
measures the AP of system si on topic tj.
Besides AP values, the table shows Mean Average
Precision (MAP) values i.e., the mean of the AP values for a
single system over all topics, and what we call Average
Average Precision (AAP) values i.e., the average of the AP
values for a single topic over all systems:
MAP(si) = \frac{1}{n} \sum_{j=1}^{n} AP(si, tj),   (1)

AAP(tj) = \frac{1}{m} \sum_{i=1}^{m} AP(si, tj).   (2)
MAPs are indicators of systems performance: higher MAP
means good system. AAP are indicators of the performance
on a topic: higher AAP means easy topic - a topic on which
all or most systems have good performance.
3.2 Critique of pure AP
MAP is a standard, well known, and widely used IR
effectiveness measure. Single AP values are used too (e.g.,
in AP histograms). Topic difficulty is often discussed (e.g.,
in TREC Robust track [14]), although AAP values are not
used and, to the best of our knowledge, have never been
proposed (the median, not the average, of AP on a topic
is used to produce TREC AP histograms [11]). However,
the AP values in Tab. 1 present two limitations, which are
symmetric in some respect:
• Problem 1. They are not reliable to compare the
effectiveness of a system on different topics, relative
to the other systems. If, for example, AP(s1, t1) >
AP(s1, t2), can we infer that s1 is a good system (i.e.,
has a good performance) on t1 and a bad system on
t2? The answer is no: t1 might be an easy topic (with
high AAP) and t2 a difficult one (low AAP). See an
example in Tab. 1b: s1 is outperformed (on average)
by the other systems on t1, and it outperforms the
other systems on t2.
• Problem 2. Conversely, if, for example, AP(s1, t1) >
AP(s2, t1), can we infer that t1 is considered easier
by s1 than by s2? No, we cannot: s1 might be a good
system (with high MAP) and s2 a bad one (low MAP);
see an example in Tab. 1b.
These two problems are, in a sense, a decomposition of the well-known strong influence of topics on IR evaluation; again, our formulation makes the topics/systems symmetry explicit.
3.3 2nd step: normalizations
To avoid these two problems, we can normalize the AP
table in two ways. The first normalization removes the
influence of the single topic ease on system performance. Each
AP(si, tj) value in the table depends on both system
goodness and topic ease (the value will increase if a system is
good and/or the topic is easy). By subtracting from each
AP(si, tj) the AAP(tj) value, we obtain normalized AP
values (APA(si, tj), Normalized AP according to AAP):
APA(si, tj) = AP(si, tj) − AAP(tj), (3)
that depend on system performance only (the value will
increase only if system performance is good). See Tab. 2a.
The second normalization removes the influence of the
single system effectiveness on topic ease: by subtracting from
each AP(si, tj) the MAP(si) value, we obtain normalized
AP values (APM(si, tj), Normalized AP according to MAP):
APM(si, tj) = AP(si, tj) − MAP(si), (4)
that depend on topic ease only (the value will increase only
if the topic is easy, i.e., all systems perform well on that
topic). See Tab. 2b.
(a)
         t1             ...   tn             \overline{MAP}
s1       APA(s1, t1)    ...   APA(s1, tn)    \overline{MAP}(s1)
...      ...                  ...            ...
sm       APA(sm, t1)    ...   APA(sm, tn)    \overline{MAP}(sm)
         0              ...   0              0

(b)
         t1             ...   tn
s1       APM(s1, t1)    ...   APM(s1, tn)    0
...      ...                  ...            ...
sm       APM(sm, t1)    ...   APM(sm, tn)    0
\overline{AAP}   \overline{AAP}(t1)   ...   \overline{AAP}(tn)   0

(c)
         t1     t2     ...   \overline{MAP}
s1       -0.1   0.1    ...   ...
s2       0.2    ...    ...   ...
...
         0      0      ...

(d)
         t1     t2     ...
s1       -0.1   -0.2   ...   0
s2       0.1    ...    ...   0
...
\overline{AAP}   ...    ...

Table 2: Normalizations: normalized AP (APA) and normalized MAP (\overline{MAP}) (a); normalized AP (APM) and normalized AAP (\overline{AAP}) (b); a numeric example (c) and (d).

In other words, APA avoids Problem 1: APA(s, t) values measure the performance of system s on topic t normalized
according to the ease of the topic (easy topics will not have
higher APA values). Now, if, for example, APA(s1, t2) >
APA(s1, t1), we can infer that s1 is a good system on t2 and
a bad system on t1 (see Tab. 2c). Vice versa, APM avoids
Problem 2: APM(s, t) values measure the ease of topic t
according to system s, normalized according to goodness
of the system (good systems will not lead to higher APM
values). If, for example, APM(s2, t1) > APM(s1, t1), we
can infer that t1 is considered easier by s2 than by s1 (see
Tab. 2d).
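To make the two normalizations concrete, here is a small numpy sketch computing MAP, AAP, APA and APM from an AP matrix with one row per system and one column per topic; it simply restates Eqs. (1)-(4) and is not code from the original study.

```python
import numpy as np

def normalize_ap(ap):
    """ap: (m systems) x (n topics) matrix of AP values."""
    map_ = ap.mean(axis=1)            # MAP(si), Eq. (1)
    aap = ap.mean(axis=0)             # AAP(tj), Eq. (2)
    apa = ap - aap[np.newaxis, :]     # APA = AP - AAP, Eq. (3)
    apm = ap - map_[:, np.newaxis]    # APM = AP - MAP, Eq. (4)
    return map_, aap, apa, apm

# Toy input in the spirit of Tab. 1b (values are arbitrary):
# ap = np.array([[0.5, 0.4], [0.4, 0.2]])
# map_, aap, apa, apm = normalize_ap(ap)
```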
On the basis of Tables 2a and 2b, we can also define two new measures of system effectiveness and topic ease, i.e., a Normalized MAP (\overline{MAP}), obtained by averaging the APA values on one row in Tab. 2a, and a Normalized AAP (\overline{AAP}), obtained by averaging the APM values on one column in Tab. 2b:

\overline{MAP}(si) = \frac{1}{n} \sum_{j=1}^{n} APA(si, tj)   (5)

\overline{AAP}(tj) = \frac{1}{m} \sum_{i=1}^{m} APM(si, tj).   (6)
Thus, overall system performance can be measured, besides by means of MAP, also by means of \overline{MAP}. Moreover, \overline{MAP} is equivalent to MAP, as can be immediately proved by using Eqs. (5), (3), and (1):

\overline{MAP}(si) = \frac{1}{n} \sum_{j=1}^{n} (AP(si, tj) - AAP(tj)) = MAP(si) - \frac{1}{n} \sum_{j=1}^{n} AAP(tj)

(and \frac{1}{n} \sum_{j=1}^{n} AAP(tj) is the same for all systems). And, conversely, overall topic ease can be measured, besides by means of AAP, also by means of \overline{AAP}, and this is equivalent (the proof is analogous, and relies on Eqs. (6), (4), and (2)).

[Figure 1: Construction of the adjacency matrix. The APM matrix (Tab. 2b) and the transpose of the APA matrix (Tab. 2a) are merged into a single block matrix A = [[0, APM], [APA^T, 0]], whose rows and columns are indexed by s1, ..., sm, t1, ..., tn. APA^T is the transpose of APA.]
The two Tables 2a and 2b are interesting per se, and can
be analyzed in several different ways. In the following we
propose an analysis based on network analysis techniques,
mainly Kleinberg's HITS algorithm. There is a little further
discussion of these normalizations in Sect. 6.
3.4 3rd step: Systems-Topics Graph
The two tables 2a and 2b can be merged into a single one
with the procedure shown in Fig. 1. The obtained matrix
can be interpreted as the adjacency matrix of a complete
weighted bipartite graph, that we call Systems-Topics graph.
Arcs and weights in the graph can be interpreted as follows:
• (weight on) arc s → t: how much the system s thinks
that the topic t is easy - assuming that a system has
no knowledge of the other systems (or in other words,
how easy we might think the topic is, knowing only
the results for this one system). This corresponds to
APM values, i.e., to normalized topic ease (Fig. 2a).
• (weight on) arc s ← t: how much the topic t thinks
that the system s is good - assuming that a topic has
no knowledge of the other topics (or in other words,
how good we might think the system is, knowing only
the results for this one topic). This corresponds to
APA (normalized system effectiveness, Fig. 2b).
Figs. 2c and 2d show the Systems-Topics complete weighted
bipartite graph, on a toy example with 4 systems and 2
topics; the graph is split in two parts to have an understandable
graphical representation: arcs in Fig. 2c are labeled with
APM values; arcs in Fig. 2d are labeled with APA values.
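A short numpy sketch of the construction of the Systems-Topics graph (the merging procedure of Fig. 1), reusing the normalization sketch above; indices 0..m-1 are system nodes and m..m+n-1 are topic nodes.

```python
import numpy as np

def systems_topics_adjacency(apm, apa):
    """Block adjacency matrix A = [[0, APM], [APA^T, 0]] of the Systems-Topics graph."""
    m, n = apm.shape
    a = np.zeros((m + n, m + n))
    a[:m, m:] = apm      # arcs s -> t, weighted by APM(s, t): how easy s "thinks" t is
    a[m:, :m] = apa.T    # arcs t -> s, weighted by APA(s, t): how good t "thinks" s is
    return a
```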
4. ANALYSIS OF THE GRAPH
4.1 Weighted Inlinks, Outlinks, PageRank
The sum of weighted outlinks, i.e., the sum of the weights
on the outgoing arcs from each node, is always zero:
• The outlinks on each node corresponding to a system
s (Fig. 2c) is the sum of all the corresponding APM
values on one row of the matrix in Tab. 2b.
• The outlinks on each node corresponding to a topic t (Fig. 2d) is the sum of all the corresponding APA values on one row of the transpose of the matrix in Tab. 2a.

[Figure 2: The relationships between systems and topics (a) and (b); and the Systems-Topics graph for a toy example (c) and (d). Dashed arcs correspond to negative values.]

[Figure 3: Hub and Authority computation: h = A a and a = A^T h, written in block form, with APM (respectively APA) in the systems-to-topics block and APA^T (respectively APM^T) in the topics-to-systems block.]
The average[1] of weighted inlinks is:
• MAP for each node corresponding to a system s; this corresponds to the average of all the corresponding APA values on one column of the APA^T part of the adjacency matrix (see Fig. 1).
• AAP for each node corresponding to a topic t; this corresponds to the average of all the corresponding APM values on one column of the APM part of the adjacency matrix (see Fig. 1).
[1] Usually, the sum of the weights on the incoming arcs to each node is used in place of the average; since the graph is complete, it makes no difference.
Therefore, weighted inlinks measure either system
effectiveness or topic ease; weighted outlinks are not meaningful. We
could also apply the PageRank algorithm to the network;
the meaning of the PageRank of a node is not quite so
obvious as Inlinks and Outlinks, but it also seems a sensible
measure of either system effectiveness or topic ease: if a
system is effective, it will have several incoming links with high weights (APA); if a topic is easy it will have high weights
(APM) on the incoming links too. We will see experimental
confirmation in the following.
4.2 Hubs and Authorities
Let us now turn to more sophisticated indicators.
Kleinberg's HITS algorithm defines, for a directed graph, two
indicators: hubness and authority; we reiterate here some of
the basic details of the HITS algorithm in order to
emphasize both the nature of our generalization and the
interpretation of the HITS concepts in this context. Usually, hubness
and authority are defined as h(x) = \sum_{x \to y} a(y) and a(x) = \sum_{y \to x} h(y), and described intuitively as "a good hub links many good authorities; a good authority is linked from many good hubs". As it is well known, an equivalent formulation in linear algebra terms is (see also Fig. 3):

h = A a   and   a = A^T h    (7)

(where h is the hubness vector, with the hub values for all the nodes; a is the authority vector; A is the adjacency matrix of the graph; and A^T its transpose). Usually, A
contains 0s and 1s only, corresponding to presence and absence
of unweighted directed arcs, but Eq. (7) can be immediately
generalized to (in fact, it is already valid for) A containing
any real value, i.e., to weighted graphs.
Therefore we can have a generalized version (or rather
a generalized interpretation, since the formulation is still
the original one) of hubness and authority for all nodes in
a graph. An intuitive formulation of this generalized HITS
is still available, although slightly more complex: a good
hub links, by means of arcs having high weights, many good
authorities; a good authority is linked, by means of arcs
having high weights, from many good hubs. Since arc weights
can be, in general, negative, hub and authority values can be
negative, and one could speak of unhubness and unauthority;
the intuitive formulation could be completed by adding that
a good hub links good unauthorities by means of links with
highly negative weights; a good authority is linked by good
unhubs by means of links with highly negative weights.
And, also, a good unhub links positively good
unauthorities and negatively good authorities; a good unauthority
is linked positively from good unhubs and negatively from
good hubs.
Let us now apply generalized HITS to our Systems-Topics
graph. We compute a(s), h(s), a(t), and h(t). Intuitively,
we expect that a(s) is somehow similar to Inlinks, so it
should be a measure of either systems effectiveness or topic
ease. Similarly, hubness should be more similar to Outlinks,
thus less meaningful, although the interplay between hub
and authority might lead to the discovery of something
different. Let us start by remarking that authority of topics
and hubness of systems depend only on each other; similarly
hubness of topics and authority of systems depend only on
each other: see Figs. 2c, 2d and 3.
Thus the two graphs in Figs. 2c and 2d can be analyzed
independently. In fact the entire HITS analysis could be
done in one direction only, with just APM(s, t) values or
alternatively with just APA(s, t). As discussed below,
probably most interest resides in the hubness of topics and the
authority of systems, so the latter makes sense. However, in
this paper, we pursue both analyses together, because the
symmetry itself is interesting.
Considering Fig. 2c we can state that:
• Authority a(t) of a topic node t increases when:
- if h(si) > 0, APM(si, t) increases
(or if APM(si, t) > 0, h(si) increases);
- if h(si) < 0, APM(si, t) decreases
(or if APM(si, t) < 0, h(si) decreases).
• Hubness h(s) of a system node s increases when:
- if a(tj) > 0, APM(s, tj) increases
(or, if APM(s, tj) > 0, a(tj) increases);
- if a(tj) < 0, APM(s, tj) decreases
(or, if APM(s, tj) < 0, a(tj) decreases).
We can summarize this as: a(t) is high if APM(s, t) is high
for those systems with high h(s); h(s) is high if APM(s, t)
is high for those topics with high a(t). Intuitively, authority
a(t) of a topic measures topic ease; hubness h(s) of a system
measures the system's capability to recognize easy topics. A
system with high unhubness (negative hubness) would tend
to regard easy topics as hard and hard ones as easy.
The situation for Fig. 2d, i.e., for a(s) and h(t), is
analogous. Authority a(s) of a system node s measures system
effectiveness: it increases with the weight on the arc (i.e.,
APA(s, tj)) and the hubness of the incoming topic nodes tj.
Hubness h(t) of a topic node t measures topic capability to
recognize effective systems: if h(t) > 0, it increases further
if APA(s, tj) increases; if h(t) < 0, it increases if APA(s, tj)
decreases.
Intuitively, we can state that "a system has a higher authority if it is more effective on topics with high hubness" and "a topic has a higher hubness if it is easier for those systems which are more effective in general". Conversely, for system hubness and topic authority: "a topic has a higher authority if it is easier on systems with high hubness" and "a system has a higher hubness if it is more effective for those topics which are easier in general".
Therefore, for each system we have two indicators:
authority (a(s)), measuring system effectiveness, and hubness
(h(s)), measuring system capability to estimate topic ease.
And for each topic, we have two indicators: authority (a(t)),
measuring topic ease, and hubness (h(t)), measuring topic
capability to estimate systems effectiveness. We can define
them formally as

a(s) = \sum_t h(t) · APA(s, t),    h(t) = \sum_s a(s) · APA(s, t),
a(t) = \sum_s h(s) · APM(s, t),    h(s) = \sum_t a(t) · APM(s, t).
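These mutually recursive definitions can be computed with the usual HITS power iteration h <- A a, a <- A^T h on the weighted adjacency matrix. The sketch below normalizes by the Euclidean norm at each step; this fixes the scale (which is irrelevant for the analysis) and is one reasonable implementation rather than necessarily the one used for the experiments reported here.

```python
import numpy as np

def weighted_hits(a, iterations=100):
    """Generalized HITS on a (possibly negatively) weighted adjacency matrix."""
    n = a.shape[0]
    hub = np.ones(n)
    auth = np.ones(n)
    for _ in range(iterations):
        auth = a.T @ hub
        auth /= np.linalg.norm(auth) or 1.0
        hub = a @ auth
        hub /= np.linalg.norm(hub) or 1.0
    return hub, auth

# With A built as above (m system nodes followed by n topic nodes):
#   auth[:m] -> system effectiveness, auth[m:] -> topic ease,
#   hub[:m]  -> a system's ability to recognize easy topics,
#   hub[m:]  -> a topic's ability to recognize effective systems.
```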
We observe that the hubness of topics may be of particular
interest for evaluation studies. It may be that we can
evaluate the effectiveness of systems efficiently by using relatively
few high-hubness topics.
5. EXPERIMENTS
We now turn to discuss if these indicators are meaningful
and useful in practice, and how they correlate with standard
measures used in TREC. We have built the Systems-Topics
graph for TREC 8 data (featuring 128 systems[2] - actually, runs - on 50 topics).
[2] Actually, TREC 8 data features 129 systems; due to some bug in our scripts, we did not include one system (8manexT3D1N0), but the results should not be affected.
[Figure 4: Distributions of AP, APA, and APM values in TREC 8 data (x axis: value from -1 to 1; y axis: relative frequency up to about 0.3; curves labeled AP, NAPA, and NAPM).]
(a)
            MAP   Inlinks  PageRank  Hub   Authority
MAP         1     1.0      1.0       .80   .99
Inlinks           1        1.0       .80   .99
PageRank                   1         .80   .99
Hub                                  1     .87

(b)
            AAP   Inlinks  PageRank  Hub   Authority
AAP         1     1.0      1.0       .92   1.0
Inlinks           1        1.0       .92   1.0
PageRank                   1         .92   1.0
Hub                                  1     .93

Table 3: Correlations between network analysis measures and MAP (a) and AAP (b).
This section illustrates the results
obtained mining these data according to the method presented
in previous sections.
Fig. 4 shows the distributions of AP, APA, and APM:
whereas AP is very skewed, both APA and APM are much
more symmetric (as it should be, since they are constructed
by subtracting the mean). Tables 3a and 3b show the
Pearson's correlation values between Inlinks, PageRank, Hub,
Authority and, respectively, MAP or AAP (Outlinks
values are not shown since they are always zero, as seen in
Sect. 4). As expected, Inlinks and PageRank have a perfect
correlation with MAP and AAP. Authority has a very high
correlation too with MAP and AAP; Hub assumes slightly
lower values.
Let us analyze the correlations more in detail. The
correlations chart in Figs. 5a and 5b demonstrate the high
correlation between Authority and MAP or AAP. Hubness
presents interesting phenomena: both Fig. 5c (correlation
with MAP) and Fig. 5d (correlation with AAP) show that
correlation is not exact, but neither is it random. This, given
the meaning of hubness (capability in estimating topic ease
and system effectiveness), means two things: (i) more
effective systems are better at estimating topic ease; and (ii)
easier topics are better at estimating system effectiveness.
Whereas the first statement is fine (there is nothing against
it), the second is a bit worrying. It means that system
effectiveness in TREC is affected more by easy topics than by
difficult topics, which is rather undesirable for quite obvious
reasons: a system capable of performing well on a difficult
topic, i.e., on a topic on which the other systems perform
badly, would be an important result for IR effectiveness;
conversely, a system capable of performing well on easy topics is just a confirmation of the state of the art.

[Figure 5: Correlations: MAP (x axis) and authority (y axis) of systems (a); AAP and authority of topics (b); MAP and hub of systems (c); and AAP and hub of topics (d).]

Indeed, the
correlation between hubness and AAP (statement (i) above) is
higher than the correlation between hubness and MAP
(corresponding to statement (ii)): 0.92 vs. 0.80. However, this
phenomenon is quite strong. This is also confirmed by the
work being done on the TREC Robust Track [14].
In this respect, it is interesting to see what happens if we
use a different measure from MAP (and AAP). The GMAP
(Geometric MAP) metric is defined as the geometric mean of
AP values, or equivalently as the arithmetic mean of the
logarithms of AP values [8]. GMAP has the property of giving
more weight to the low end of the AP scale (i.e., to low AP
values), and this seems reasonable, since, intuitively, a
performance increase in MAP values from 0.01 to 0.02 should
be more important than an increase from 0.81 to 0.82. To
use GMAP in place of MAP and AAP, we only need to take
the logarithms of initial AP values, i.e., those in Tab. 1a
(zero values are modified into ε = 0.00001). We then repeat
the same normalization process (with GMAP and GAAP
- Geometric AAP - replacing MAP and AAP): whereas
authority values still perfectly correlate with GMAP (0.99)
and GAAP (1.00), the correlation with hubness largely
disappears (values are −0.16 and −0.09 - slightly negative but
not enough to concern us).
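Only the input values change when moving from MAP to GMAP; a sketch follows, with the same ε = 0.00001 substitution for zero AP values mentioned above (and assuming the normalize_ap sketch given earlier).

```python
import numpy as np

def log_ap(ap, eps=1e-5):
    """Replace zero AP values with eps and take logarithms."""
    return np.log(np.where(ap > 0, ap, eps))

def gmap(ap_row):
    """GMAP of one system: geometric mean of its AP values,
    i.e. exp of the arithmetic mean of log AP."""
    return float(np.exp(log_ap(ap_row).mean()))

# The whole analysis is then repeated on log AP values:
# map_, aap, apa, apm = normalize_ap(log_ap(ap))
```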
This is yet another confirmation that TREC effectiveness
as measured by MAP depends mainly on easy topics; GMAP
appears to be a more balanced measure. Note that,
perhaps surprisingly, GMAP is indeed fairly well balanced, not
biased in the opposite direction - that is, it does not
overemphasize the difficult topics.
In Sect. 6.3 below, we discuss another transformation,
replacing the log function used in GMAP with logit. This has
a similar effect: the correlation of mean logitAP and average
logitAP with hubness are now small positive numbers (0.23
and 0.15 respectively), still comfortably away from the high
correlations with regular MAP and AAP, i.e., not presenting
the problematic phenomenon (ii) above (over-dependency on
easy topics).
We also observe that hub values are positive, whereas
authority assumes, as predicted, both positive and negative
values. An intuitive justification is that negative hubness
would indicate a node which disagrees with the other nodes,
e.g., a system which does better on difficult topics, or a
topic on which bad systems do better; such systems and
topics would be quite strange, and probably do not appear
in TREC. Finally, although one might think that topics with
several relevant documents are more important and difficult,
this is not the case: there is no correlation between hub (or
any other indicator) and the number of documents relevant
to a topic.
6. DISCUSSION
6.1 Related work
There has been considerable interest in recent years in
questions of statistical significance of effectiveness
comparisons between systems (e.g. [2, 9]), and related questions of
how many topics might be needed to establish differences
(e.g. [13]). We regard some results of the present study as
in some way complementary to this work, in that we make a
step towards answering the question "Which topics are best for establishing differences?".
The results on evaluation without relevance judgements
such as [10] show that, to some extent, good systems agree
on which are the good documents. We have not addressed
the question of individual documents in the present analysis,
but this effect is certainly analogous to our results.
6.2 Are normalizations necessary?
At this point it is also worthwhile to analyze what would
happen without the MAP- and AAP-normalizations defined
in Sect. 3.3. Indeed, the process of graph construction
(Sect. 3.4) is still valid: both the APM and APA matrices
are replaced by the AP one, and then everything goes on as
above. Therefore, one might think that the normalizations
are not useful in this setting.
This is not the case. From the theoretical point of view,
the AP-only graph does not present the interesting
properties above discussed: since the AP-only graph is
symmetrical (the weight on each incoming link is equal to the weight
on the corresponding outgoing link), Inlinks and Outlinks
assume the same values. There is symmetry also in
computing hub and authority, that assume the same value for each
node since the weights on the incoming and outgoing arcs
are the same. This could be stated in more precise and
formal terms, but one might still wonder if on the overall graph
there are some sort of counterbalancing effects. It is
therefore easier to look at experimental data, which confirm that
the normalizations are needed: the correlations between AP,
Inlinks, Outlinks, Hub, and/or Authority are all very close
to one (none of them is below 0.98).
6.3 Are these normalizations sufficient?
It might be argued that (in the case of APA, for example)
the amount we have subtracted from each AP value is
topic-dependent, and therefore the range of the resulting APA value
is also topic-dependent (e.g. the maximum is 1 − AAP(tj)
and the minimum is − AAP(tj)). This suggests that the
cross-topic comparisons of these values suggested in Sect. 3.3
may not be reliable. A similar issue arises for APM and
comparisons across systems.
One possible way to overcome this would be to use an
unconstrained measure whose range is the full real line. Note
that in applying the method to GMAP by using log AP, we
avoid the problem with the lower limit but retain it for the
upper limit. One way to achieve an unconstrained range
would be to use the logit function rather than the log [4,8].
We have also run this variant (as already reported in
Sect. 5 above), and it appears to provide very similar
results to the GMAP results already given. This is not
surprising, since in practice the two functions are very similar
over most of the operative range. The normalizations thus
seem reliable.
6.4 On AA^T and A^T A
It is well known that the h and a vectors are the principal left eigenvectors of AA^T and A^T A, respectively (this can be easily derived from Eqs. (7)), and that, in the case of citation graphs, AA^T and A^T A represent, respectively, bibliographic coupling and co-citations. What is the meaning, if any, of AA^T and A^T A in our Systems-Topics graph? It is easy to derive that:

AA^T[i, j]  = 0                               if (i ∈ S ∧ j ∈ T) or (i ∈ T ∧ j ∈ S),
            = \sum_k A[i, k] · A[j, k]         otherwise;

A^T A[i, j] = 0                               if (i ∈ S ∧ j ∈ T) or (i ∈ T ∧ j ∈ S),
            = \sum_k A[k, i] · A[k, j]         otherwise

(where S is the set of indices corresponding to systems and T the set of indices corresponding to topics). Thus AA^T and A^T A are block diagonal matrices, with two blocks each, one relative to systems and one relative to topics:
(a) if i, j ∈ S, then AA^T[i, j] = \sum_{k ∈ T} APM(i, k) · APM(j, k) measures how much the two systems i and j agree in estimating topic ease (APM): high values mean that the two systems agree on topic ease.
(b) if i, j ∈ T, then AA^T[i, j] = \sum_{k ∈ S} APA(k, i) · APA(k, j) measures how much the two topics i and j agree in estimating system effectiveness (APA): high values mean that the two topics agree on system effectiveness (and that TREC results would not change by leaving out one of the two topics).
(c) if i, j ∈ S, then A^T A[i, j] = \sum_{k ∈ T} APA(i, k) · APA(j, k) measures how much agreement on the effectiveness of the two systems i and j there is over all topics: high values mean that many topics quite agree on the two systems' effectiveness; low values single out systems that are somehow controversial, and that need several topics to have a correct effectiveness assessment.
(d) if i, j ∈ T, then A^T A[i, j] = \sum_{k ∈ S} APM(k, i) · APM(k, j) measures how much agreement on the ease of the two topics i and j there is over all systems: high values mean that many systems quite agree on the two topics' ease. (A small computational sketch of these four blocks follows the list.)
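A small numpy sketch of the four agreement blocks listed above, using the APM and APA matrices defined earlier (an illustration, not code from the original study):

```python
def agreement_blocks(apm, apa):
    """Informative blocks of AA^T and A^T A for the Systems-Topics graph.
    apm, apa: numpy arrays of shape (m systems, n topics), as in normalize_ap."""
    systems_agree_on_topic_ease = apm @ apm.T          # (a): m x m, systems vs systems
    topics_agree_on_effectiveness = apa.T @ apa        # (b): n x n, topics vs topics
    agreement_on_system_effectiveness = apa @ apa.T    # (c): m x m
    agreement_on_topic_ease = apm.T @ apm              # (d): n x n
    return (systems_agree_on_topic_ease, topics_agree_on_effectiveness,
            agreement_on_system_effectiveness, agreement_on_topic_ease)
```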
Therefore, these matrices are meaningful and somehow
interesting. For instance, the submatrix (b) corresponds to
a weighted undirected complete graph, whose nodes are the
topics and whose arc weights are a measure of how much
two topics agree on systems effectiveness. Two topics that
are very close on this graph give the same information, and
therefore one of them could be discarded without changes in
TREC results. It would be interesting to cluster the topics
on this graph. Furthermore, the matrix/graph (a) could be
useful in TREC pool formation: systems that do not agree
on topic ease would probably find different relevant
documents, and should therefore be complementary in pool
formation. Note that no notion of single documents is involved
in the above analysis.
6.5 Insights
As indicated, the primary contribution of this paper has
been a method of analysis. However, in the course of
applying this method to one set of TREC results, we have
achieved some insights relating to the hypotheses formulated
in Sect. 2:
• We confirm Hypothesis 2 above, that some topics are
easier than others.
• Differences in the hubness of systems reveal that some
systems are better than others at distinguishing easy
and difficult topics; thus we have some confirmation of
Hypothesis 3.
• There are some relatively idiosyncratic systems which
do badly on some topics generally considered easy but
well on some hard topics. However, on the whole, the
more effective systems are better at distinguishing easy
and difficult topics. This is to be expected: a really
bad system will do badly on everything, while even a
good system may have difficulty with some topics.
• Differences in the hubness of topics reveal that some
topics are better than others at distinguishing more or
less effective systems; thus we have some confirmation
of Hypothesis 4.
• If we use MAP as the measure of effectiveness, it is
also true that the easiest topics are better at
distinguishing more or less effective systems. As argued in
Sect. 5, this is an undesirable property. GMAP is more
balanced.
Clearly these ideas need to be tested on other data sets.
However, they reveal that the method of analysis proposed
in this paper can provide valuable information.
6.6 Selecting topics
The confirmation of Hypothesis 4 leads, as indicated, to
the idea that we could do reliable system evaluation on a
much smaller set of topics, provided we could select such an
appropriate set. This selection may not be straightforward,
however. It is possible that simply selecting the high
hubness topics will achieve this end; however, it is also possible
that there are significant interactions between topics which
would render such a simple rule ineffective. This
investigation would therefore require serious experimentation. For
this reason we have not attempted in this paper to point to
the specific high hubness topics as being good for evaluation.
This is left for future work.
7. CONCLUSIONS AND FUTURE
DEVELOPMENTS
The contribution of this paper is threefold:
• we propose a novel way of normalizing AP values;
• we propose a novel method to analyse TREC data;
• the method applied on TREC data does indeed reveal
some hidden properties.
More particularly, we propose Average Average Precision
(AAP), a measure of topic ease, and a novel way of
normalizing the average precision measure in TREC, on the basis
of both MAP (Mean Average Precision) and AAP. The
normalized measures (APM and APA) are used to build a
bipartite weighted Systems-Topics graph, that is then
analyzed by means of network analysis indicators widely known
in the (social) network analysis field, but somewhat
generalised. We note that no such approach to TREC data
analysis has been proposed so far. The analysis shows that,
with current measures, a system that wants to be effective
in TREC needs to be effective on easy topics. Also, it is
suggested that a cluster analysis on topic similarity can lead to
relying on a lower number of topics.
Our method of analysis, as described in this paper, can
be applied only a posteriori, i.e., once we have all the
topics and all the systems available. Adding (removing) a new
system / topic would mean re-computing hubness and
authority indicators. Moreover, we are not explicitly proposing
a change to current TREC methodology, although this could
be a by-product of these - and further - analyses.
This is an initial work, and further analyses could be
performed. For instance, other effectiveness metrics could be
used, in place of AP. Other centrality indicators, widely
used in social network analysis, could be computed, although
probably with similar results to PageRank. It would be
interesting to compute the higher-order eigenvectors of A^T A and AA^T. The same kind of analysis could be performed at
the document level, measuring document ease. Hopefully,
further analyses of the graph defined in this paper,
according to the approach described, can be insightful for a better
understanding of TREC or similar data.
Acknowledgments
We would like to thank Nick Craswell for insightful
discussions and the anonymous referees for useful remarks. Part
of this research has been carried on while the first author
was visiting Microsoft Research Cambridge, whose financial
support is acknowledged.
8. REFERENCES
[1] M. Agosti, M. Bacchin, N. Ferro, and M. Melucci.
Improving the automatic retrieval of text documents.
In Proceedings of the 3rd CLEF Workshop, volume
2785 of LNCS, pages 279-290, 2003.
[2] C. Buckley and E. Voorhees. Evaluating evaluation
measure stability. In 23rd SIGIR, pages 33-40, 2000.
[3] S. Chakrabarti. Mining the Web. Morgan Kaufmann,
2003.
[4] G. V. Cormack and T. R. Lynam. Statistical precision
of information retrieval evaluation. In 29th SIGIR,
pages 533-540, 2006.
[5] J. Kleinberg. Authoritative sources in a hyperlinked
environment. J. of the ACM, 46(5):604-632, 1999.
[6] M. Levene. An Introduction to Search Engines and
Web Navigation. Addison Wesley, 2006.
[7] L. Page, S. Brin, R. Motwani, and T. Winograd. The
PageRank Citation Ranking: Bringing Order to the
Web, 1998.
http://dbpubs.stanford.edu:8090/pub/1999-66.
[8] S. Robertson. On GMAP - and other transformations.
In 13th CIKM, pages 78-83, 2006.
[9] M. Sanderson and J. Zobel. Information retrieval
system evaluation: effort, sensitivity, and reliability. In
28th SIGIR, pages 162-169, 2005.
http://doi.acm.org/10.1145/1076034.1076064.
[10] I. Soboroff, C. Nicholas, and P. Cahan. Ranking
retrieval systems without relevance judgments. In 24th
SIGIR, pages 66-73, 2001.
[11] TREC Common Evaluation Measures, 2005.
http://trec.nist.gov/pubs/trec14/appendices/
CE.MEASURES05.pdf (Last visit: Jan. 2007).
[12] Text REtrieval Conference (TREC).
http://trec.nist.gov/ (Last visit: Jan. 2007).
[13] E. Voorhees and C. Buckley. The effect of topic set
size on retrieval experiment error. In 25th SIGIR,
pages 316-323, 2002.
[14] E. M. Voorhees. Overview of the TREC 2005 Robust
Retrieval Track. In TREC 2005 Proceedings, 2005.
[15] E. M. Voorhees and D. K. Harman.
TREC: Experiment and Evaluation in Information Retrieval.
MIT Press, 2005.
[16] S. Wasserman and K. Faust. Social Network Analysis.
Cambridge University Press, Cambridge, UK, 1994. | trec;systems-topic graph;hit;social network analysis;mean average precision;link analysis technique;web search engine implementation;pagerank;network analysis;information retrieval evaluation experiment;inlink;ir evaluation;hit algorithm;kleinberg' hit algorithm;human assessor;weighted bipartite graph;stemming |
train_H-43 | Combining Content and Link for Classification using Matrix Factorization | The world wide web contains rich textual contents that are interconnected via complex hyperlinks. This huge database violates the assumption held by most of conventional statistical methods that each web page is considered as an independent and identical sample. It is thus difficult to apply traditional mining or learning methods for solving web mining problems, e.g., web page classification, by exploiting both the content and the link structure. The research in this direction has recently received considerable attention but are still in an early stage. Though a few methods exploit both the link structure or the content information, some of them combine the only authority information with the content information, and the others first decompose the link structure into hub and authority features, then apply them as additional document features. Being practically attractive for its great simplicity, this paper aims to design an algorithm that exploits both the content and linkage information, by carrying out a joint factorization on both the linkage adjacency matrix and the document-term matrix, and derives a new representation for web pages in a low-dimensional factor space, without explicitly separating them as content, hub or authority factors. Further analysis can be performed based on the compact representation of web pages. In the experiments, the proposed method is compared with state-of-the-art methods and demonstrates an excellent accuracy in hypertext classification on the WebKB and Cora benchmarks. | 1. INTRODUCTION
With the advance of the World Wide Web, more and more
hypertext documents become available on the Web. Some examples of
such data include organizational and personal web pages (e.g., the
WebKB benchmark data set, which contains university web pages),
research papers (e.g., data in CiteSeer), online news articles, and
customer-generated media (e.g., blogs). Compared to data in traditional information management, these data on the Web contain, in addition to content, also links: e.g., hyperlinks from a student's
homepage pointing to the homepage of her advisor, paper citations,
sources of a news article, comments of one blogger on posts from
another blogger, and so on. Performing information management
tasks on such structured data raises many new research challenges.
In the following discussion, we use the task of web page classification as an illustrative example, while the techniques we develop in later sections apply equally well to many other tasks in information retrieval and data mining.
For the classification problem of web pages, a simple approach
is to treat web pages as independent documents. The advantage
of this approach is that many off-the-shelf classification tools can
be directly applied to the problem. However, this approach
relies only on the content of web pages and ignores the structure of
links among them. Link structures provide invaluable information
about properties of the documents as well as relationships among
them. For example, in the WebKB dataset, the link structure
provides additional insights about the relationship among documents
(e.g., links often pointing from a student to her advisor or from
a faculty member to his projects). Since some links among these
documents imply inter-dependence among the documents, the usual i.i.d. (independent and identically distributed) assumption of documents no longer holds. From this point of view, the
traditional classification methods that ignore the link structure may
not be suitable.
On the other hand, a few studies, for example [25], rely solely on
link structures. It is, however, rarely the case that content information can be ignored. For example, in the Cora dataset, the content
of a research article abstract largely determines the category of the
article.
To improve the performance of web page classification,
therefore, both link structure and content information should be taken
into consideration. To achieve this goal, a simple approach is to
convert one type of information to the other. For example, in spam
blog classification, Kolari et al. [13] concatenate outlink features
with the content features of the blog. In document classification,
Kurland and Lee [14] convert content similarity among documents
into weights of links. However, link and content information have
different properties. For example, a link is an actual piece of
evidence that represents an asymmetric relationship whereas the
content similarity is usually defined conceptually for every pair of
documents in a symmetric way. Therefore, directly converting one type
of information to the other usually degrades the quality of
information. On the other hand, there exist some studies, as we will discuss
in detail in related work, that consider link information and content
information separately and then combine them. We argue that such
an approach ignores the inherent consistency between link and
content information and therefore fails to combine the two seamlessly.
Some work, such as [3], incorporates link information using
cocitation similarity, but this may not fully capture the global link
structure. In Figure 1, for example, web pages v6 and v7 co-cite
web page v8, implying that v6 and v7 are similar to each other.
In turn, v4 and v5 should be similar to each other, since v4 and
v5 cite similar web pages v6 and v7, respectively. But using
cocitation similarity, the similarity between v4 and v5 is zero without
considering other information.
Figure 1: An example of link structure
In this paper, we propose a simple technique for analyzing
inter-connected documents, such as web pages, using factor
analysis [18]. In the proposed technique, both content information and
link structures are seamlessly combined through a single set of
latent factors. Our model contains two components. The first
component captures the content information. This component has a form
similar to that of the latent topics in the Latent Semantic Indexing
(LSI) [8] in traditional information retrieval. That is, documents
are decomposed into latent topics/factors, which in turn are
represented as term vectors. The second component captures the
information contained in the underlying link structure, such as links
from homepages of students to those of faculty members. A
factor can be loosely considered as a type of document (e.g., the homepages belonging to students). It is worth noting that we do not explicitly define the semantics of a factor a priori. Instead,
similar to LSI, the factors are learned from the data. Traditional factor
analysis models the variables associated with entities through the
factors. However, in the analysis of link structures, we need to model the relationship between the two ends of a link, i.e., the edges between vertex pairs. Therefore, the model should involve the factors of both vertices of an edge. This is a key difference between traditional factor analysis and our model. In our model, we connect the two components through a set of shared factors; that is, the latent factors in the second component (for links) are tied to the factors in the first component (for content). By doing this, we search for a unified set
of latent factors that best explains both content and link structures
simultaneously and seamlessly.
In the formulation, we perform factor analysis based on matrix
factorization: solution to the first component is based on
factorizing the term-document matrix derived from content features;
solution to the second component is based on factorizing the adjacency
matrix derived from links. Because the two factorizations share
a common base, the discovered bases (latent factors) explain both
content information and link structures, and are then used in further
information management tasks such as classification.
This paper is organized as follows. Section 2 reviews related work. Section 3 presents the proposed approach to analyzing web pages based on the combined information of links and content. Section 4 extends the basic framework with a few variants for fine-tuning. Section 5 presents the experimental results. Section 6 discusses further details of the approach, and Section 7 concludes.
2. RELATED WORK
In the content analysis part, our approach is closely related to
Latent Semantic Indexing (LSI) [8]. LSI maps documents into a
lower dimensional latent space. The latent space implicitly
captures a large portion of information of documents, therefore it is
called the latent semantic space. The similarity between documents
could be defined by the dot products of the corresponding vectors
of documents in the latent space. Analysis tasks, such as
classification, could be performed on the latent space. The commonly
used singular value decomposition (SVD) method ensures that the
data points in the latent space can optimally reconstruct the original
documents. Though our approach also uses latent space to
represent web pages (documents), we consider the link structure as well
as the content of web pages.
In the link analysis literature, the framework of hubs and authorities (HITS) [12] puts web pages into two categories, hubs and authorities. Using a recursive notion, a hub is a web page with many outgoing links to authorities, while an authority is a web page with many incoming links from hubs. Instead of using two categories, PageRank [17] uses a single category in its recursive notion: an authority is a web page with many incoming links from authorities. He et al. [9] propose an algorithm for web document clustering. The algorithm incorporates the link structure and co-citation patterns. In the algorithm, all links are treated as undirected edges of the link graph, and the content information is only used for weighting the links by the textual similarity of both ends of each link. Zhang et al. [23] use an undirected graph regularization framework for document classification. Achlioptas et al. [2] decompose the web into hub and authority attributes and then combine them with content.
Zhou et al. [25] and [24] propose a directed graph regularization
framework for semi-supervised learning. The framework combines
the hub and authority information of web pages. But it is difficult
to combine the content information into that framework. Our approach considers the content and the directed linkage between the topics of source and destination web pages in one step, which implies that a topic combines the information of a web page acting as an authority and as a hub in a single set of factors.
Cohn and Hofmann [6] construct the latent space from both
content and link information, using content analysis based on
probabilistic LSI (PLSI) [10] and link analysis based on PHITS [5]. The
major difference between the approach of [6] (PLSI+PHITS) and
our approach is in the part of link analysis. In PLSI+PHITS, the
link is constructed with the linkage from the topic of the source
web page to the destination web page. In the model, the outgoing
links of the destination web page have no effect on the source web
page. In other words, the overall link structure is not utilized in
PHITS. In our approach, the link is constructed with the linkage
between the factor of the source web page and the factor of the
destination web page, instead of the destination web page itself. The
factor of the destination web page contains information of its
outgoing links. In turn, such information is passed to the factor of the
source web page. As a result of the matrix factorization, the factors form a factor graph, a miniature of the original graph that preserves its major structure.
Taskar et al. [19] propose relational Markov networks (RMNs)
for entity classification, by describing a conditional distribution of
entity classes given entity attributes and relationships. The model
was applied to web page classification, where web pages are
entities and hyperlinks are treated as relationships. RMNs apply
conditional random fields to define a set of potential functions on cliques
of random variables, where the link structure provides hints to form
the cliques. However, the model does not give an off-the-shelf solution, because its success depends highly on the art of designing the potential functions. In addition, inference for RMNs is intractable and requires belief propagation.
The following works also combine documents and links, but their methods are only loosely related to our approach. The
experiments of [21] show that using terms from the linked
document improves the classification accuracy. Chakrabarti et al. [3] use co-citation information in their classification model. Joachims et al. [11] combine text kernels and co-citation kernels for classification. Oh et al. [16] use a naive Bayes framework to combine link information with content.
3. OUR APPROACH
In this section we will first introduce a novel matrix
factorization method, which is more suitable than conventional matrix
factorization methods for link analysis. Then we will introduce our
approach that jointly factorizes the document-term matrix and link
matrix and obtains compact and highly indicative factors for
representing documents or web pages.
3.1 Link Matrix Factorization
Suppose we have a directed graph G = (V, E), where the vertex set V = {v_i}_{i=1}^n represents the web pages and the edge set E represents the hyperlinks between web pages. Let A = {a_sd} denote the n × n adjacency matrix of G, which is also called the link matrix in this paper. For a pair of vertices v_s and v_d, let a_sd = 1 when there is an edge from v_s to v_d, and a_sd = 0 otherwise. Note that A is an asymmetric matrix, because hyperlinks are directed.
Most machine learning algorithms assume a feature-vector
representation of instances. For web page classification, however, the
link graph does not readily give such a vector representation for
web pages. If one directly uses each row or column of A for the job,
she will suffer a very high computational cost, because the dimensionality equals the number of web pages. Moreover, it will produce poor classification accuracy (see our experiments in Section 5), because A is extremely sparse (see Footnote 1).
The idea of link matrix factorization is to derive a high-quality
feature representation Z of web pages based on analyzing the link
matrix A, where Z is an n × l matrix, with each row being the
l-dimensional feature vector of a web page. The new representation
of web pages captures the principal factors of the link structure and
makes further processing more efficient.
One may use a method similar to LSI, to apply the well-known
principal component analysis (PCA) for deriving Z from A. The
corresponding optimization problem (see Footnote 2) is
\min_{Z,U} \; \|A - ZU^\top\|_F^2 + \gamma \|U\|_F^2 \qquad (1)
where \gamma is a small positive number, U is an n \times l matrix, and \|\cdot\|_F is the Frobenius norm. The optimization aims to approximate A by ZU^\top, a product of two low-rank matrices, with a regularization on U. In the end, the i-th row vector of Z can be thought of as the hub feature vector of vertex v_i, and the i-th row vector of U can be thought of as its authority features. A link generation model proposed in [2] is similar to this PCA approach. Since A is a nonnegative matrix here, one can also consider putting nonnegativity constraints on U and Z, which produces an algorithm similar to PLSA [10] and NMF [20].
Footnote 1: Due to the sparsity of A, links from two similar pages may not share any common target pages, which makes them appear dissimilar. However, the two pages may be indirectly linked to many common pages via their neighbors.
Footnote 2: Another equivalent form is \min_{Z,U} \|A - ZU^\top\|_F^2, s.t. U^\top U = I. The solution Z is identical up to a scaling factor.
However, despite its popularity in matrix analysis, PCA (or other similar methods like PLSA) is restrictive for link matrix factorization. The major problem is that PCA ignores the fact that the rows and columns of A are indexed by exactly the same set of objects (i.e., web pages). The approximating matrix Ã = ZU^\top shows no evidence that links are within the same set of objects. To see the drawback, let us consider a link transitivity situation v_i → v_s → v_j, where page i is linked to page s, which itself is linked to page j. Since Ã = ZU^\top treats A as links from web pages {v_i} to a different set of objects, denoted by {o_i}, it actually splits the linked object o_s from v_s and breaks the link path down into two parts v_i → o_s and v_s → o_j. This is obviously a misinterpretation of the original link path.
To overcome this problem of PCA, in this paper we suggest a different factorization:
\min_{Z,U} \; \|A - ZUZ^\top\|_F^2 + \gamma \|U\|_F^2 \qquad (2)
where U is an l × l full matrix. Note that U is not symmetric, thus ZUZ^\top produces an asymmetric matrix, as is the case for A. Again, each row vector of Z corresponds to the feature vector of a web page. The new approximating form Ã = ZUZ^\top makes it clear that the links are between the same set of objects, represented by the features Z. The factor model actually maps each vertex v_i into a vector z_i = {z_{i,k}; 1 ≤ k ≤ l} in the R^l space, which we call the factor space. Then, {z_i} encodes the information of incoming and outgoing connectivity of the vertices {v_i}. The factor loadings U explain how these observed connections arise from {z_i}. Once we have the vectors z_i, we can use many traditional classification methods (such as SVMs) or clustering tools (such as K-Means) to perform the analysis.
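As an illustration of how Eq. (2) can be fit in practice, the following is a minimal sketch (not taken from the paper) that minimizes the objective by plain gradient descent with numpy; the learning rate, iteration count, and random initialization are illustrative assumptions, and a conjugate gradient or quasi-Newton solver could be used instead.

```python
import numpy as np

def link_factorization(A, l, gamma=0.1, lr=0.01, iters=2000, seed=0):
    """Fit A ~= Z U Z^T by gradient descent on ||A - Z U Z^T||_F^2 + gamma ||U||_F^2."""
    rng = np.random.RandomState(seed)
    n = A.shape[0]
    Z = 0.1 * rng.randn(n, l)
    U = 0.1 * rng.randn(l, l)
    for _ in range(iters):
        R = A - Z @ U @ Z.T                        # residual matrix
        grad_U = -2 * Z.T @ R @ Z + 2 * gamma * U
        grad_Z = -2 * (R @ Z @ U.T + R.T @ Z @ U)
        U -= lr * grad_U
        Z -= lr * grad_Z
    return Z, U

# Usage: Z, U = link_factorization(A, l=5); the rows of Z are the vertex factors.
```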
Illustration Based on a Synthetic Problem
To further illustrate the advantages of the proposed link matrix
factorization in Eq. (2), let us consider the graph in Figure 1.
Figure 2: Summarizing Figure 1 with a factor graph
Given these observations, we can summarize the graph by grouping vertices, as in the factor graph depicted in Figure 2. Next we perform the two factorization methods, Eq. (2) and Eq. (1), on this link matrix. A good low-rank representation should reveal the structure of the factor graph.
First we try PCA-like decomposition, solving Eq. (1) and
obtaining
Z = \begin{bmatrix} 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & -.6 & -.7 & .1 \\ 0 & 0 & .0 & .6 & -.0 \\ 0 & 0 & .8 & -.4 & .3 \\ 0 & 0 & .2 & -.2 & -.9 \\ .7 & .7 & 0 & 0 & 0 \\ .7 & .7 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \quad
U = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ .5 & -.5 & 0 & 0 & 0 \\ .5 & -.5 & 0 & 0 & 0 \\ 0 & 0 & -.6 & -.7 & .1 \\ 0 & 0 & .0 & .6 & -.0 \\ 0 & 0 & .8 & -.4 & .3 \\ 0 & 0 & .2 & -.2 & -.9 \\ .7 & .7 & 0 & 0 & 0 \end{bmatrix}
We can see that the row vectors of v6 and v7 are the same in Z,
indicating that v6 and v7 have the same hub attributes. The row
vectors of v2 and v3 are the same in U, indicating that v2 and
v3 have the same authority attributes. The similarity between v4 and v5, however, is not apparent, because their inlinks (and outlinks) are different.
Then we factorize A as ZUZ^\top by solving Eq. (2), and obtain the results
Z = \begin{bmatrix} -.8 & -.5 & .3 & -.1 & -.0 \\ -.0 & .4 & .6 & -.1 & -.4 \\ -.0 & .4 & .6 & -.1 & -.4 \\ .3 & -.2 & .3 & -.4 & .3 \\ .3 & -.2 & .3 & -.4 & .3 \\ -.4 & .5 & .0 & -.2 & .6 \\ -.4 & .5 & .0 & -.2 & .6 \\ -.1 & .1 & -.4 & -.8 & -.4 \end{bmatrix}, \quad
U = \begin{bmatrix} -.1 & -.2 & -.4 & .6 & .7 \\ .2 & -.5 & -.5 & -.5 & .0 \\ .1 & .1 & .4 & -.4 & .3 \\ .1 & -.2 & -.0 & .3 & -.1 \\ -.3 & .3 & -.5 & -.4 & -.2 \end{bmatrix}
The resultant Z is very consistent with the clustering structure
of vertices: the row vectors of v2 and v3 are the same, those
of v4 and v5 are the same, those of v6 and v7 are the same.
Even more interestingly, if we add constraints to ensure that Z and U are nonnegative, we have
Z = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & .9 & 0 & 0 & 0 \\ 0 & .9 & 0 & 0 & 0 \\ 0 & 0 & .7 & 0 & 0 \\ 0 & 0 & .7 & 0 & 0 \\ 0 & 0 & 0 & .9 & 0 \\ 0 & 0 & 0 & .9 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \quad
U = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & .7 & 0 & 0 \\ 0 & 0 & 0 & .7 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}
which clearly reveals the assignment of vertices to clusters from Z and the links of the factor graph from U. When interpretability is not critical, as in some tasks such as classification, we found that better accuracies are achieved without the nonnegativity constraints. Given the above analysis, it is clear that the factorization ZUZ^\top is more expressive than ZU^\top in representing the link matrix A.
3.2 Content Matrix Factorization
Now let us consider the content information on the vertices. To
combine the link information and content information, we want to
use the same latent space to approximate the content as the latent
space for the links. Using the bag-of-words approach, we denote
the content of the web pages by an n × m matrix C, each of whose rows represents a document and each of whose columns represents a keyword, where m is the number of keywords. As in latent semantic indexing (LSI) [8], the l-dimensional latent space for words is denoted by an m × l matrix V. Therefore, we use ZV^\top to approximate the matrix C:
\min_{V,Z} \; \|C - ZV^\top\|_F^2 + \beta \|V\|_F^2 \qquad (3)
where \beta is a small positive number and \beta\|V\|_F^2 serves as a regularization term to improve robustness.
3.3 Joint Link-Content Matrix Factorization
There are many ways to employ both the content and link
information for web page classification. Our idea in this paper is not to
simply combine them, but rather to fuse them into a single,
consistent, and compact feature representation. To achieve this goal, we
solve the following problem,
\min_{U,V,Z} \; \Big\{ \mathcal{J}(U, V, Z) \;\overset{\mathrm{def}}{=}\; \|A - ZUZ^\top\|_F^2 + \alpha \|C - ZV^\top\|_F^2 + \gamma \|U\|_F^2 + \beta \|V\|_F^2 \Big\} \qquad (4)
Eq. (4) is the joint matrix factorization of A and C with regularization. The new representation Z is ensured to capture both the structure of the link matrix A and that of the content matrix C. Once we find the optimal Z, we can apply traditional classification or clustering methods to the vectorial data Z. The relationship among these matrices is depicted in Figure 3.
Figure 3: Relationship among the matrices. Node Y is the
target of classification.
Eq. (4) can be solved using gradient methods, such as the conjugate gradient method and quasi-Newton methods. The main computation of gradient methods is evaluating the objective function \mathcal{J} and its gradients with respect to the variables:
\frac{\partial \mathcal{J}}{\partial U} = \left( Z^\top Z U Z^\top Z - Z^\top A Z \right) + \gamma U ,
\frac{\partial \mathcal{J}}{\partial V} = \alpha \left( V Z^\top Z - C^\top Z \right) + \beta V ,
\frac{\partial \mathcal{J}}{\partial Z} = \left( Z U^\top Z^\top Z U + Z U Z^\top Z U^\top - A^\top Z U - A Z U^\top \right) + \alpha \left( Z V^\top V - C V \right) .
Because of the sparsity of A, the computational complexity of multiplying A and Z is O(\mu_A l), where \mu_A is the number of nonzero entries in A. Similarly, the computational complexity of C^\top Z and CV is O(\mu_C l), where \mu_C is the number of nonzero entries in C. The computational complexity of the remaining multiplications in the gradient computation is O(n l^2). Therefore, the total computational complexity of one iteration is O(\mu_A l + \mu_C l + n l^2). The number of links and the number of words in a web page are relatively small compared to the number of web pages, and remain almost constant as the number of web pages/documents increases, i.e., \mu_A = O(n) and \mu_C = O(n). Therefore, theoretically the computation time is almost linear in the number of web pages/documents, n.
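To make the use of these gradients concrete, here is a hedged sketch that solves Eq. (4) with scipy's L-BFGS over the flattened variables; it is not the authors' implementation, the parameter defaults are assumptions, and the gradients below carry the factors of 2 that arise from differentiating the squared norms directly (which only rescales the expressions given above).

```python
import numpy as np
from scipy.optimize import minimize

def joint_factorization(A, C, l, alpha=1.0, beta=0.1, gamma=0.1, seed=0):
    """Minimize ||A - Z U Z'||^2 + alpha ||C - Z V'||^2 + gamma ||U||^2 + beta ||V||^2
    over U (l x l), V (m x l), and Z (n x l) using L-BFGS."""
    n, m = C.shape
    rng = np.random.RandomState(seed)
    x0 = 0.1 * rng.randn(l * l + m * l + n * l)

    def unpack(x):
        U = x[:l * l].reshape(l, l)
        V = x[l * l:l * l + m * l].reshape(m, l)
        Z = x[l * l + m * l:].reshape(n, l)
        return U, V, Z

    def fun_and_grad(x):
        U, V, Z = unpack(x)
        RA = A - Z @ U @ Z.T              # link residual
        RC = C - Z @ V.T                  # content residual
        J = (RA ** 2).sum() + alpha * (RC ** 2).sum() \
            + gamma * (U ** 2).sum() + beta * (V ** 2).sum()
        # Gradients of the squared norms (hence the factors of 2).
        gU = -2 * Z.T @ RA @ Z + 2 * gamma * U
        gV = -2 * alpha * RC.T @ Z + 2 * beta * V
        gZ = -2 * (RA @ Z @ U.T + RA.T @ Z @ U) - 2 * alpha * RC @ V
        return J, np.concatenate([gU.ravel(), gV.ravel(), gZ.ravel()])

    res = minimize(fun_and_grad, x0, jac=True, method="L-BFGS-B")
    return unpack(res.x)
```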
4. SUPERVISED MATRIX FACTORIZATION
Consider a web page classification problem. We can solve Eq. (4) to obtain Z as in Section 3 and then use a traditional classifier to perform classification. However, this approach does not take data labels into account in the first step. Believing that using data labels improves the accuracy by obtaining a better Z for the classification, we consider using the data labels to guide the matrix factorization, an approach called supervised matrix factorization [22]. Because
some data used in the matrix factorization have no label
information, the supervised matrix factorization falls into the category of
semi-supervised learning.
Let \mathcal{C} be the set of classes. For simplicity, we first consider the binary class problem, i.e., \mathcal{C} = \{-1, 1\}. Assume we know the labels \{y_i\} for the vertices in T \subset V. We want to find a hypothesis h : V \to \mathbb{R}, such that we assign v_i to 1 when h(v_i) \ge 0 and to -1 otherwise. We assume the transform from the latent space to \mathbb{R} is linear, i.e.,
h(v_i) = \mathbf{w}^\top \phi(v_i) + b = \mathbf{w}^\top \mathbf{z}_i + b , \qquad (5)
School course dept. faculty other project staff student total
Cornell 44 1 34 581 18 21 128 827
Texas 36 1 46 561 20 2 148 814
Washington 77 1 30 907 18 10 123 1166
Wisconsin 85 0 38 894 25 12 156 1210
Table 1: Dataset of WebKB
where \mathbf{w} and b are parameters to estimate. Here, \mathbf{w} is the normal of the decision boundary. As in Support Vector Machines (SVMs) [7], we can use the hinge loss to measure the loss,
\sum_{i : v_i \in T} [1 - y_i h(v_i)]_+ ,
where [x]_+ equals x if x \ge 0 and 0 if x < 0. However, the hinge loss is not smooth at the hinge point, which makes it difficult to apply gradient methods to the problem. To overcome this difficulty, we use a smoothed version of the hinge loss for each data point,
g(y_i h(v_i)) , \qquad (6)
where
g(x) = \begin{cases} 0 & \text{when } x \ge 2, \\ 1 - x & \text{when } x \le 0, \\ \tfrac{1}{4}(x - 2)^2 & \text{when } 0 < x < 2. \end{cases}
We reduce a multiclass problem to multiple binary ones. One simple reduction scheme is the one-against-rest coding scheme. In the one-against-rest scheme, we assign a label vector to each class label. The element of a label vector is 1 if the data point belongs to the corresponding class, -1 if the data point does not belong to the corresponding class, and 0 if the data point is not labeled. Let Y be the label matrix, each column of which is a label vector. Therefore, Y is an n \times c matrix, where c is the number of classes, |\mathcal{C}|. Then the values of Eq. (5) form a matrix
H = Z W^\top + \mathbf{1} \mathbf{b}^\top , \qquad (7)
where \mathbf{1} is a vector of size n whose elements are all one, W is a c \times l parameter matrix, and \mathbf{b} is a parameter vector of size c. The total loss is proportional to the sum of Eq. (6) over all labeled data points and all classes,
L_Y(W, \mathbf{b}, Z) = \lambda \sum_{i : v_i \in T,\; j \in \mathcal{C}} g(Y_{ij} H_{ij}) ,
where \lambda is the parameter to scale the term.
To derive a robust solution, we also use Tikhonov regularization for W,
\Omega_W(W) = \frac{\nu}{2} \|W\|_F^2 ,
where \nu is the parameter to scale the term.
Then the supervised matrix factorization problem becomes
\min_{U,V,Z,W,\mathbf{b}} \; \mathcal{J}_s(U, V, Z, W, \mathbf{b}) \qquad (8)
where
\mathcal{J}_s(U, V, Z, W, \mathbf{b}) = \mathcal{J}(U, V, Z) + L_Y(W, \mathbf{b}, Z) + \Omega_W(W) .
We can also use gradient methods to solve the problem of Eq. (8). The gradients are
\frac{\partial \mathcal{J}_s}{\partial U} = \frac{\partial \mathcal{J}}{\partial U} , \quad
\frac{\partial \mathcal{J}_s}{\partial V} = \frac{\partial \mathcal{J}}{\partial V} , \quad
\frac{\partial \mathcal{J}_s}{\partial Z} = \frac{\partial \mathcal{J}}{\partial Z} + \lambda G W , \quad
\frac{\partial \mathcal{J}_s}{\partial W} = \lambda G^\top Z + \nu W , \quad
\frac{\partial \mathcal{J}_s}{\partial \mathbf{b}} = \lambda G^\top \mathbf{1} ,
where G is an n \times c matrix whose ik-th element is Y_{ik}\, g'(Y_{ik} H_{ik}), and
g'(x) = \begin{cases} 0 & \text{when } x \ge 2, \\ -1 & \text{when } x \le 0, \\ \tfrac{1}{2}(x - 2) & \text{when } 0 < x < 2. \end{cases}
Once we obtain W, \mathbf{b}, and Z, we can apply h to the vertices with unknown class labels, or apply traditional classification algorithms to Z to get the classification results.
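For illustration, the smoothed hinge loss, its derivative, and the supervised loss term L_Y with the gradient contributions involving G can be sketched as follows under the one-against-rest encoding described above; this is an assumed illustrative implementation, not the authors' code, and the Tikhonov term \Omega_W is left to the caller.

```python
import numpy as np

def g(x):
    """Smoothed hinge loss of Eq. (6)."""
    return np.where(x >= 2, 0.0,
                    np.where(x <= 0, 1.0 - x, 0.25 * (x - 2.0) ** 2))

def g_prime(x):
    """Derivative of the smoothed hinge loss."""
    return np.where(x >= 2, 0.0,
                    np.where(x <= 0, -1.0, 0.5 * (x - 2.0)))

def label_loss_and_grads(Y, Z, W, b, lam):
    """L_Y and its gradient contributions; Y is n x c with entries in {-1, 0, 1},
    where 0 marks an unlabeled entry that contributes neither loss nor gradient.
    The (nu/2)||W||_F^2 regularizer is handled separately by the caller."""
    H = Z @ W.T + b                          # n x c decision values, Eq. (7)
    labeled = (Y != 0).astype(float)
    loss = lam * (labeled * g(Y * H)).sum()
    G = labeled * Y * g_prime(Y * H)         # matrix G from the text
    grad_Z = lam * G @ W
    grad_W = lam * G.T @ Z
    grad_b = lam * G.sum(axis=0)
    return loss, grad_Z, grad_W, grad_b
```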
5. EXPERIMENTS
5.1 Data Description
In this section, we perform classification on two datasets to demonstrate our approach. The two datasets are the WebKB data set [1] and the Cora data set [15]. The WebKB data set
consists of about 6000 web pages from computer science departments
of four schools (Cornell, Texas, Washington, and Wisconsin). The
web pages are classified into seven categories. The numbers of
pages in each category are shown in Table 1. The Cora data set
consists of the abstracts and references of about 34,000 computer
science research papers. We use part of them, categorizing each paper into one of the subfields of data structure (DS), hardware and architecture (HA), machine learning (ML), or programming language (PL). We remove those articles without references to other articles in the set.
The number of papers and the number of subfields in each area are
shown in Table 2.
area # of papers # of subfields
Data structure (DS) 751 9
Hardware and architecture (HA) 400 7
Machine learning (ML) 1617 7
Programming language (PL) 1575 9
Table 2: Dataset of Cora
5.2 Methods
The task of the experiments is to classify the data based on their
content information and/or link structure. We use the following
methods:
[Figure: classification accuracy (%) on the four WebKB datasets (Cornell, Texas, Washington, Wisconsin) for the compared methods; the values are listed in Table 3.]
method Cornell Texas Washington Wisconsin
SVM on content 81.00 ± 0.90 77.00 ± 0.60 85.20 ± 0.90 84.30 ± 0.80
SVM on links 70.10 ± 0.80 75.80 ± 1.20 80.50 ± 0.30 74.90 ± 1.00
SVM on link-content 80.00 ± 0.80 78.30 ± 1.00 85.20 ± 0.70 84.00 ± 0.90
Directed graph regularization 89.47 ± 1.41 91.28 ± 0.75 91.08 ± 0.51 89.26 ± 0.45
PLSI+PHITS 80.74 ± 0.88 76.15 ± 1.29 85.12 ± 0.37 83.75 ± 0.87
link-content MF 93.50 ± 0.80 96.20 ± 0.50 93.60 ± 0.50 92.60 ± 0.60
link-content sup. MF 93.80 ± 0.70 97.07 ± 1.11 93.70 ± 0.60 93.00 ± 0.30
Table 3: Classification accuracy (mean ± std-err %) on WebKB data set
• SVM on content We apply support vector machines (SVM)
on the content of documents. The features are the bag-of-words representation, and all words are stemmed. This method ignores the link structure in the data. A linear SVM is used. The regularization parameter of the SVM is selected using cross-validation. The SVM implementation used in the experiments is libSVM [4].
• SVM on links We treat links as the features of each
document, i.e., the i-th feature is link-to-page_i. We apply SVM on these
link features. This method uses link information, but not the
link structure.
• SVM on link-content We combine the features of the above
two methods. We use different weights for these two sets of features. The weights are also selected using cross-validation.
• Directed graph regularization This method is described in
[25] and [24]. This method is solely based on link structure.
• PLSI+PHITS This method is described in [6]. This method
combines text content information and link structure for
analysis. The PHITS algorithm is in spirit similar to Eq. (1), with an additional nonnegativity constraint. It models the outgoing and incoming structures separately.
• Link-content MF This is our approach of matrix
factorization described in Section 3. We use 50 latent factors for Z.
After we compute Z, we train a linear SVM using the training portion of Z as feature vectors and then apply the SVM to the testing portion of Z to obtain the final result (an SVM is used here because of the multiclass output).
• Link-content sup. MF This method is our approach of the
supervised matrix factorization in Section 4. We use 50 latent
factors for Z. After we compute Z, we train a linear SVM on the training portion of Z and then apply the SVM to the testing portion of Z to obtain the final result (again because of the multiclass output).
We randomly split the data into five folds and repeat the experiment five times; each time we use one fold for testing and the other four folds for training. During the training process, we use cross-validation to select all model parameters. We measure the results by classification accuracy, i.e., the percentage of correctly classified documents in the entire data set. The results are reported as the average classification accuracy and its standard deviation over the five repeats.
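A possible realization of this evaluation protocol is sketched below using scikit-learn's LinearSVC as a stand-in for libSVM; the fixed regularization constant and the stratified splitting are assumptions made for the sketch, since the paper selects all model parameters by cross-validation.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import LinearSVC

def evaluate(Z, y, n_splits=5, seed=0):
    """Train a linear SVM on the training portion of the factor matrix Z and
    report the mean and standard deviation of accuracy over the folds."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    accs = []
    for train_idx, test_idx in skf.split(Z, y):
        clf = LinearSVC(C=1.0)      # C would be chosen by cross-validation
        clf.fit(Z[train_idx], y[train_idx])
        accs.append(clf.score(Z[test_idx], y[test_idx]))
    return np.mean(accs), np.std(accs)
```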
5.3 Results
The average classification accuracies for the WebKB data set are
shown in Table 3. For this task, the accuracies of SVM on links are worse than those of SVM on content. But directed graph regularization, which is also based on links alone, achieves a much higher accuracy. This implies that the link structure plays an important role in the classification of this dataset, but individual links in a web page give little information. The combination of link and content using SVM achieves accuracy similar to that of SVM on content alone, which confirms that individual links in a web page give little information. Since our approach considers the link structure as well as the content information, our two methods achieve the highest accuracies among these approaches. The difference between the results of our two methods is not significant; however, the experiments below highlight where they differ.
The classification accuracies for the Cora data set are shown in
Table 4. In this experiment, the accuracies of SVM on the
combination of links and content are higher than either SVM on content
or SVM on links. This indicates that both content and links are informative for classifying the articles into subfields.
[Figure: classification accuracy (%) on the four Cora areas (DS, HA, ML, PL) for the compared methods; the values are listed in Table 4.]
method DS HA ML PL
SVM on content 53.70 ± 0.50 67.50 ± 1.70 68.30 ± 1.60 56.40 ± 0.70
SVM on links 48.90 ± 1.70 65.80 ± 1.40 60.70 ± 1.10 58.20 ± 0.70
SVM on link-content 63.70 ± 1.50 70.50 ± 2.20 70.56 ± 0.80 62.35 ± 1.00
Directed graph regularization 46.07 ± 0.82 65.50 ± 2.30 59.37 ± 0.96 56.06 ± 0.84
PLSI+PHITS 53.60 ± 1.78 67.40 ± 1.48 67.51 ± 1.13 57.45 ± 0.68
link-content MF 61.00 ± 0.70 74.20 ± 1.20 77.50 ± 0.80 62.50 ± 0.80
link-content sup. MF 69.38 ± 1.80 74.20 ± 0.70 78.70 ± 0.90 68.76 ± 1.32
Table 4: Classification accuracy (mean ± std-err %) on Cora data set
The method of directed graph regularization does not perform as well as SVM on link-content, which confirms the importance of the article content in this task. Though our method of link-content matrix factorization performs slightly better than the other methods, our method of link-content supervised matrix factorization outperforms them significantly.
5.4 The Number of Factors
As we discussed in Section 3, the computational complexity of each iteration for solving the optimization problem is quadratic in the number of factors. We perform experiments to study how the number of factors affects the accuracy of prediction. We use different numbers of factors for the Cornell data of the WebKB data set and the machine learning (ML) data of the Cora data set. The results are shown in Figures 4(a) and 4(b).
Figure 4: Accuracy vs. number of factors - (a) Cornell data, (b) ML data; accuracy (%) is plotted against the number of factors for link-content MF and link-content supervised MF.
The figures show that the accuracy increases as the number of factors increases. This is a different issue from choosing the optimal number of clusters in a clustering application; the number of factors determines how much information is represented in the latent variables. We have included regularization over the factors, which avoids overfitting for a large number of factors. To choose the number of factors, we need to consider the trade-off between accuracy and computation time, which is quadratic in the number of factors.
The difference between the plain matrix factorization method and the supervised one decreases as the number of factors increases. This indicates that supervised matrix factorization is most useful at smaller numbers of factors.
6. DISCUSSIONS
The loss functions L_A in Eq. (2) and L_C in Eq. (3) use the squared loss for computational convenience. Actually, the squared loss does not precisely describe the underlying noise model, because the weights of the adjacency matrix can only take nonnegative values (in our case, zero or one only), and the components of the content matrix C can only take nonnegative integer values. Therefore, we can apply other types of loss, such as the hinge loss or a smoothed hinge loss, e.g., L_A(U, Z) = \mu\, h(A, ZUZ^\top), where h(A, B) = \sum_{i,j} [1 - A_{ij} B_{ij}]_+ .
In this paper, we mainly discuss the classification application. An entry of the matrix Z represents the relationship between a web page and a factor. The values of the entries are weights of a linear model, rather than probabilities of web pages belonging to latent topics. Therefore, we allow the components to take any real values. For the clustering application, we can use this model to find Z and then apply K-means to partition the web pages into clusters. Alternatively, we can use the idea of nonnegative matrix factorization for clustering [20] to cluster the web pages directly. As in the example with nonnegative constraints shown in Section 3, we represent each cluster by a latent topic, i.e., the dimensionality of the latent space is set to the number of clusters we want. The problem of Eq. (4) then becomes
\min_{U,V,Z} \; \mathcal{J}(U, V, Z), \quad \text{s.t. } Z \ge 0 . \qquad (9)
Solving Eq. (9), we can obtain more interpretable results, which can be used for clustering.
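A hedged sketch of the constrained problem in Eq. (9) is given below: it reuses the gradients of Eq. (4) and enforces Z ≥ 0 by a simple projection (clipping) after each gradient step; the step size and iteration count are illustrative assumptions, and more specialized multiplicative-update NMF algorithms could be used instead.

```python
import numpy as np

def nonnegative_joint_factorization(A, C, l, alpha=1.0, beta=0.1, gamma=0.1,
                                    lr=1e-3, iters=3000, seed=0):
    """Same objective as Eq. (4), with Z >= 0 enforced by clipping after each step."""
    n, m = C.shape
    rng = np.random.RandomState(seed)
    Z = np.abs(0.1 * rng.randn(n, l))
    U = 0.1 * rng.randn(l, l)
    V = 0.1 * rng.randn(m, l)
    for _ in range(iters):
        RA = A - Z @ U @ Z.T
        RC = C - Z @ V.T
        gU = -2 * Z.T @ RA @ Z + 2 * gamma * U
        gV = -2 * alpha * RC.T @ Z + 2 * beta * V
        gZ = -2 * (RA @ Z @ U.T + RA.T @ Z @ U) - 2 * alpha * RC @ V
        U -= lr * gU
        V -= lr * gV
        Z = np.maximum(Z - lr * gZ, 0.0)   # projection onto the constraint Z >= 0
    return Z, U, V

# Cluster assignments can then be read off as Z.argmax(axis=1),
# or obtained by running K-means on the rows of Z.
```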
7. CONCLUSIONS
In this paper, we study the problem of how to combine content and link information for web page analysis, mainly for the classification application. We propose a simple approach that uses factors to model the text content and link structure of web pages/documents. The directed links are generated from a linear combination of linkages between source and destination factors. By sharing factors between the text content and the link structure, it is easy to combine both types of information. Our experiments show that our approach is effective for classification. We also discuss an extension for the clustering application.
Acknowledgment
We would like to thank Dr. Dengyong Zhou for sharing the code of his algorithm. We also thank the reviewers for their constructive comments.
8. REFERENCES
[1] CMU world wide knowledge base (WebKB) project.
Available at http://www.cs.cmu.edu/∼WebKB/.
[2] D. Achlioptas, A. Fiat, A. R. Karlin, and F. McSherry. Web
search via hub synthesis. In IEEE Symposium on
Foundations of Computer Science, pages 500-509, 2001.
[3] S. Chakrabarti, B. E. Dom, and P. Indyk. Enhanced hypertext
categorization using hyperlinks. In L. M. Haas and
A. Tiwary, editors, Proceedings of SIGMOD-98, ACM
International Conference on Management of Data, pages
307-318, Seattle, US, 1998. ACM Press, New York, US.
[4] C.-C. Chang and C.-J. Lin. LIBSVM: a library for support
vector machines, 2001. Software available at
http://www.csie.ntu.edu.tw/∼cjlin/libsvm.
[5] D. Cohn and H. Chang. Learning to probabilistically identify
authoritative documents. In Proc. ICML 2000, pages 167-174, 2000.
[6] D. Cohn and T. Hofmann. The missing link - a probabilistic
model of document content and hypertext connectivity. In
T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances
in Neural Information Processing Systems 13, pages
430-436. MIT Press, 2001.
[7] C. Cortes and V. Vapnik. Support-vector networks. Machine
Learning, 20:273, 1995.
[8] S. C. Deerwester, S. T. Dumais, T. K. Landauer, G. W.
Furnas, and R. A. Harshman. Indexing by latent semantic
analysis. Journal of the American Society of Information
Science, 41(6):391-407, 1990.
[9] X. He, H. Zha, C. Ding, and H. Simon. Web document
clustering using hyperlink structures. Computational
Statistics and Data Analysis, 41(1):19-45, 2002.
[10] T. Hofmann. Probabilistic latent semantic indexing. In
Proceedings of the Twenty-Second Annual International
SIGIR Conference, 1999.
[11] T. Joachims, N. Cristianini, and J. Shawe-Taylor. Composite
kernels for hypertext categorisation. In C. Brodley and
A. Danyluk, editors, Proceedings of ICML-01, 18th
International Conference on Machine Learning, pages
250-257, Williams College, US, 2001. Morgan Kaufmann
Publishers, San Francisco, US.
[12] J. M. Kleinberg. Authoritative sources in a hyperlinked
environment. J. ACM, 46(5):604-632, 1999.
[13] P. Kolari, T. Finin, and A. Joshi. SVMs for the Blogosphere:
Blog Identification and Splog Detection. In AAAI Spring
Symposium on Computational Approaches to Analysing
Weblogs, March 2006.
[14] O. Kurland and L. Lee. Pagerank without hyperlinks:
structural re-ranking using links induced by language
models. In SIGIR "05: Proceedings of the 28th annual
international ACM SIGIR conference on Research and
development in information retrieval, pages 306-313, New
York, NY, USA, 2005. ACM Press.
[15] A. McCallum, K. Nigam, J. Rennie, and K. Seymore.
Automating the construction of internet portals with machine learning. Information Retrieval Journal, 3:127-163, 2000.
[16] H.-J. Oh, S. H. Myaeng, and M.-H. Lee. A practical
hypertext catergorization method using links and
incrementally available class information. In SIGIR "00:
Proceedings of the 23rd annual international ACM SIGIR
conference on Research and development in information
retrieval, pages 264-271, New York, NY, USA, 2000. ACM
Press.
[17] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Stanford Digital
Library working paper 1997-0072, 1997.
[18] C. Spearman. General Intelligence, objectively determined
and measured. The American Journal of Psychology,
15(2):201-292, Apr 1904.
[19] B. Taskar, P. Abbeel, and D. Koller. Discriminative
probabilistic models for relational data. In Proceedings of
18th International UAI Conference, 2002.
[20] W. Xu, X. Liu, and Y. Gong. Document clustering based on
non-negative matrix factorization. In SIGIR "03:
Proceedings of the 26th annual international ACM SIGIR
conference on Research and development in informaion
retrieval, pages 267-273. ACM Press, 2003.
[21] Y. Yang, S. Slattery, and R. Ghani. A study of approaches to
hypertext categorization. Journal of Intelligent Information
Systems, 18(2-3):219-241, 2002.
[22] K. Yu, S. Yu, and V. Tresp. Multi-label informed latent
semantic indexing. In SIGIR "05: Proceedings of the 28th
annual international ACM SIGIR conference on Research
and development in information retrieval, pages 258-265,
New York, NY, USA, 2005. ACM Press.
[23] T. Zhang, A. Popescul, and B. Dom. Linear prediction
models with graph regularization for web-page
categorization. In KDD "06: Proceedings of the 12th ACM
SIGKDD international conference on Knowledge discovery
and data mining, pages 821-826, New York, NY, USA,
2006. ACM Press.
[24] D. Zhou, J. Huang, and B. Schölkopf. Learning from labeled
and unlabeled data on a directed graph. In Proceedings of the
22nd International Conference on Machine Learning, Bonn,
Germany, 2005.
[25] D. Zhou, B. Schölkopf, and T. Hofmann. Semi-supervised
learning on directed graphs. Proc. Neural Info. Processing
Systems, 2004. | low-dimensional factor space;web mining problem;classification;combining content and link;linkage adjacency matrix;content information;authority information;joint factorization;relationship;link structure;factor analysis;asymmetric relationship;document-term matrix;cocitation similarity;webkb and cora benchmark;text content;matrix factorization |
train_H-44 | A Time Machine for Text Search | Text search over temporally versioned document collections such as web archives has received little attention as a research problem. As a consequence, there is no scalable and principled solution to search such a collection as of a specified time t. In this work, we address this shortcoming and propose an efficient solution for time-travel text search by extending the inverted file index to make it ready for temporal search. We introduce approximate temporal coalescing as a tunable method to reduce the index size without significantly affecting the quality of results. In order to further improve the performance of time-travel queries, we introduce two principled techniques to trade off index size for its performance. These techniques can be formulated as optimization problems that can be solved to near-optimality. Finally, our approach is evaluated in a comprehensive series of experiments on two large-scale real-world datasets. Results unequivocally show that our methods make it possible to build an efficient time machine scalable to large versioned text collections. | 1. INTRODUCTION
In this work we address time-travel text search over
temporally versioned document collections. Given a keyword
query q and a time t, our goal is to identify and rank relevant documents as if the collection were in its state as of
time t.
An increasing number of such versioned document collections is available today, including web archives, collaborative authoring environments like wikis, and timestamped information feeds. Text search on these collections, however, is mostly time-ignorant: while the searched collection changes over time, often only the most recent version of a document is indexed, or versions are indexed independently and treated as separate documents. Even worse, for
some collections, in particular web archives like the
Internet Archive [18], a comprehensive text-search functionality
is often completely missing.
Time-travel text search, as we develop it in this paper,
is a crucial tool to explore these collections and to unfold
their full potential as the following example demonstrates.
For a documentary about a past political scandal, a
journalist needs to research early opinions and statements made
by the involved politicians. When an appropriate query is sent to a major web search-engine, the majority of the returned results contain only recent coverage, since many of the early web pages have disappeared and are only preserved in web archives. If the query could be enriched with a time point, say August 20th 2003 as the day after the scandal was revealed, and be issued against a web archive, only pages that existed specifically at that time would be retrieved, thus better satisfying the journalist's information need.
Document collections like the Web or Wikipedia [32], as
we target them here, are already large if only a single
snapshot is considered. Looking at their evolutionary history, we
are faced with even larger data volumes. As a consequence,
naïve approaches to time-travel text search fail, and viable approaches must scale up well to such large data volumes.
This paper presents an efficient solution to time-travel
text search by making the following key contributions:
1. The popular well-studied inverted file index [35] is
transparently extended to enable time-travel text search.
2. Temporal coalescing is introduced to avoid an
index-size explosion while keeping results highly accurate.
3. We develop two sublist materialization techniques to
improve index performance that allow trading off space
vs. performance.
4. In a comprehensive experimental evaluation our
approach is evaluated on the English Wikipedia and parts
of the Internet Archive as two large-scale real-world
datasets with versioned documents.
The remainder of this paper is organized as follows. The
presented work is put in context with related work in
Section 2. We delineate our model of a temporally versioned
document collection in Section 3. We present our time-travel
inverted index in Section 4. Building on it, temporal
coalescing is described in Section 5. In Section 6 we describe
principled techniques to improve index performance, before
presenting the results of our experimental evaluation in
Section 7.
2. RELATED WORK
We can classify the related work mainly into the following
two categories: (i) methods that deal explicitly with
collections of versioned documents or temporal databases, and
(ii) methods for reducing the index size either by exploiting document-content overlap or by pruning portions of the index. We briefly review work under these categories here.
To the best of our knowledge, there is very little prior work
dealing with historical search over temporally versioned
documents. Anick and Flynn [3], while pioneering this research,
describe a help-desk system that supports historical queries.
Access costs are optimized for accesses to the most recent
versions and increase as one moves farther into the past.
Burrows and Hisgen [10], in a patent description, delineate
a method for indexing range-based values and mention its
potential use for searching based on dates associated with
documents. Recent work by Nørv˚ag and Nybø [25] and
their earlier proposals concentrate on the relatively simpler
problem of supporting text-containment queries only and
neglect the relevance scoring of results. Stack [29] reports
practical experiences made when adapting the open source
search-engine Nutch to search web archives. This
adaptation, however, does not provide the intended time-travel
text search functionality. In contrast, research in temporal
databases has produced several index structures tailored for
time-evolving databases; a comprehensive overview of the
state-of-art is available in [28]. Unlike the inverted file
index, their applicability to text search is not well understood.
Moving on to the second category of related work, Broder
et al. [8] describe a technique that exploits large content
overlaps between documents to achieve a reduction in index
size. Their technique makes strong assumptions about the
structure of document overlaps rendering it inapplicable to
our context. More recent approaches by Hersovici et al. [17]
and Zhang and Suel [34] exploit arbitrary content overlaps
between documents to reduce index size. None of the
approaches, however, considers time explicitly or provides the
desired time-travel text search functionality. Static
index-pruning techniques [11, 12] aim to reduce the effective index
size, by removing portions of the index that are expected
to have low impact on the query result. They also do not
consider temporal aspects of documents, and thus are
technically quite different from our proposal despite having a
shared goal of index-size reduction. It should be noted that
index-pruning techniques can be adapted to work along with
the temporal text index we propose here.
3. MODEL
In the present work, we deal with a temporally versioned
document collection D that is modeled as described in the
following. Each document d ∈ D is a sequence of its versions
d = dt1
, dt2
, . . . .
Each version dti
has an associated timestamp ti reflecting
when the version was created. Each version is a vector of
searchable terms or features. Any modification to a
document version results in the insertion of a new version with
corresponding timestamp. We employ a discrete definition
of time, so that timestamps are non-negative integers. The
deletion of a document at time ti, i.e., its disappearance
from the current state of the collection, is modeled as the
insertion of a special tombstone version ⊥. The validity
time-interval val(dti
) of a version dti
is [ti, ti+1), if a newer
version with associated timestamp ti+1 exists, and [ti, now)
otherwise where now points to the greatest possible value of
a timestamp (i.e., ∀t : t < now).
Putting all this together, we define the state D^t of the collection at time t (i.e., the set of versions valid at t that are not deletions) as
D^t = \bigcup_{d \in D} \{ d^{t_i} \in d \mid t \in val(d^{t_i}) \wedge d^{t_i} \neq \bot \} .
As mentioned earlier, we want to enrich a keyword query q with a timestamp t, so that q is evaluated over D^t, i.e., the state of the collection at time t. The enriched time-travel query is written as q^t for brevity.
As a retrieval model in this work we adopt Okapi BM25 [27],
but note that the proposed techniques are not dependent on
this choice and are applicable to other retrieval models like
tf-idf [4] or language models [26] as well. For our considered
setting, we slightly adapt Okapi BM25 as
w(q^t, d^{t_i}) = \sum_{v \in q} w_{tf}(v, d^{t_i}) \cdot w_{idf}(v, t) .
In the above formula, the relevance w(q^t, d^{t_i}) of a document version d^{t_i} to the time-travel query q^t is defined. We reiterate that q^t is evaluated over D^t so that only the version d^{t_i} valid at time t is considered. The first factor w_{tf}(v, d^{t_i}) in the summation, further referred to as the tf-score, is defined as
w_{tf}(v, d^{t_i}) = \frac{(k_1 + 1) \cdot tf(v, d^{t_i})}{k_1 \cdot \left( (1 - b) + b \cdot \frac{dl(d^{t_i})}{avdl(t_i)} \right) + tf(v, d^{t_i})} .
It considers the plain term frequency tf(v, d^{t_i}) of term v in version d^{t_i}, normalizing it by taking into account both the length dl(d^{t_i}) of the version and the average document length avdl(t_i) in the collection at time t_i. The tf-saturation parameter k_1 and the length-normalization parameter b are inherited from the original Okapi BM25 and are commonly set to 1.2 and 0.75, respectively. The second factor w_{idf}(v, t), which we refer to as the idf-score in the remainder, conveys the inverse document frequency of term v in the collection at time t and is defined as
w_{idf}(v, t) = \log \frac{N(t) - df(v, t) + 0.5}{df(v, t) + 0.5}
where N(t) = |D^t| is the collection size at time t and df(v, t) gives the number of documents in the collection that contain the term v at time t. While the idf-score depends on the whole corpus as of the query time t, the tf-score is specific to each version.
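For illustration, the adapted scoring model can be sketched as follows; the per-time statistics df(v, t), N(t), and avdl(t) are assumed to be precomputed and passed in, and the function names are ours, not the paper's.

```python
import math

def tf_score(tf, dl, avdl, k1=1.2, b=0.75):
    """Adapted Okapi BM25 tf-score of a term in one document version."""
    return (k1 + 1) * tf / (k1 * ((1 - b) + b * dl / avdl) + tf)

def idf_score(df_vt, n_t):
    """idf-score of a term as of time t, from df(v, t) and N(t)."""
    return math.log((n_t - df_vt + 0.5) / (df_vt + 0.5))

def score(query_terms, version_tf, dl, avdl_t, df_t, n_t):
    """w(q^t, d^{t_i}): sum of tf-score times idf-score over the query terms."""
    return sum(tf_score(version_tf.get(v, 0), dl, avdl_t) * idf_score(df_t[v], n_t)
               for v in query_terms if v in df_t)
```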
4. TIME-TRAVEL INVERTED FILE INDEX
The inverted file index is a standard technique for text
indexing, deployed in many systems. In this section, we
briefly review this technique and present our extensions to
the inverted file index that make it ready for time-travel text
search.
4.1 Inverted File Index
An inverted file index consists of a vocabulary, commonly
organized as a B+-Tree, that maps each term to its
idf-score and inverted list. The index list L_v belonging to term
v contains postings of the form
( d, p )
where d is a document-identifier and p is the so-called
payload. The payload p contains information about the term
frequency of v in d, but may also include positional
information about where the term appears in the document.
The sort-order of index lists depends on which queries are
to be supported efficiently. For Boolean queries it is
favorable to sort index lists in document-order.
Frequency-order and impact-order sorted index lists are beneficial for
ranked queries and enable optimized query processing that
stops early after having identified the k most relevant
documents [1, 2, 9, 15, 31]. A variety of compression techniques,
such as encoding document identifiers more compactly, have
been proposed [33, 35] to reduce the size of index lists. For
an excellent recent survey about inverted file indexes we
refer to [35].
4.2 Time-Travel Inverted File Index
In order to prepare an inverted file index for time travel
we extend both inverted lists and the vocabulary structure
by explicitly incorporating temporal information. The main
idea for inverted lists is that we include a validity time-interval [t_b, t_e) in postings to denote when the payload information was valid. The postings in our time-travel inverted file index are thus of the form
( d, p, [t_b, t_e) )
where d and p are defined as in the standard inverted file index above and [t_b, t_e) is the validity time-interval.
As a concrete example, in our implementation, for a version d^{t_i} having the Okapi BM25 tf-score w_{tf}(v, d^{t_i}) for term v, the index list L_v contains the posting
( d, w_{tf}(v, d^{t_i}), [t_i, t_{i+1}) ) .
Similarly, the extended vocabulary structure maintains for each term a time-series of idf-scores organized as a B+-Tree. Unlike the tf-score, the idf-score of every term could vary with every change in the corpus. Therefore, we take a simplified approach to idf-score maintenance, computing idf-scores for all terms in the corpus at specific (possibly periodic) times.
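A minimal sketch of the extended posting format and vocabulary structure might look as follows (illustrative Python, not the paper's implementation); a sorted list of (time, idf) snapshots stands in for the per-term B+-Tree.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple
import bisect

@dataclass
class Posting:
    doc: int          # document identifier d
    payload: float    # e.g. the tf-score of the term in this version
    t_begin: int      # validity time-interval [t_begin, t_end)
    t_end: int

# Per-term index list: postings carry validity time-intervals.
index_lists: Dict[str, List[Posting]] = {}

# Per-term idf time-series computed at selected (e.g. periodic) times.
idf_series: Dict[str, List[Tuple[int, float]]] = {}

def idf_at(term: str, t: int) -> float:
    """Return the idf snapshot taken at the latest time <= t
    (falls back to the earliest snapshot if t precedes all of them)."""
    series = idf_series[term]
    pos = bisect.bisect_right(series, (t, float("inf"))) - 1
    return series[max(pos, 0)][1]
```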
4.3 Query Processing
During processing of a time-travel query q^t, for each query term the corresponding idf-score valid at time t is retrieved from the extended vocabulary. Then, index lists are sequentially read from disk, thereby accumulating the information contained in the postings. We transparently extend the sequential reading, which is - to the best of our knowledge - common to all query processing techniques on inverted file indexes, thus making them suitable for time-travel query processing. To this end, sequential reading is extended by skipping all postings whose validity time-interval does not contain t (i.e., t \notin [t_b, t_e)). Whether a posting can be
skipped can only be decided after the posting has been
transferred from disk into memory and therefore still incurs
significant I/O cost. As a remedy, we propose index organization
techniques in Section 6 that aim to reduce the I/O overhead
significantly.
We note that our proposed extension of the inverted file
index makes no assumptions about the sort-order of
index lists. As a consequence, existing query-processing
techniques and most optimizations (e.g., compression techniques)
remain equally applicable.
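The extended sequential reading can be sketched as follows, reusing the structures from the previous sketch; postings whose validity time-interval does not contain t are skipped, and a plain score accumulator stands in for the optimized top-k query processing mentioned above.

```python
def process_query(query_terms, t, index_lists, idf_at, top_k=10):
    """Evaluate a time-travel query q^t by a sequential scan of the index lists,
    skipping postings whose validity time-interval does not contain t."""
    scores = {}
    for term in query_terms:
        idf = idf_at(term, t)
        for post in index_lists.get(term, []):
            if not (post.t_begin <= t < post.t_end):
                continue                  # posting not valid at t: skip it
            scores[post.doc] = scores.get(post.doc, 0.0) + post.payload * idf
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]
```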
5. TEMPORAL COALESCING
If we employ the time-travel inverted index, as described
in the previous section, to a versioned document collection,
we obtain one posting per term per document version. For
frequent terms and large highly-dynamic collections, this
leads to extremely long index lists with very poor query-processing performance.
Figure 1: Approximate temporal coalescing (score over time for one document; non-coalesced vs. coalesced postings)
The approximate temporal coalescing technique that we
propose in this section counters this blowup in index-list
size. It builds on the observation that most changes in a
versioned document collection are minor, leaving large parts
of the document untouched. As a consequence, the payload
of many postings belonging to temporally adjacent versions
will differ only slightly or not at all. Approximate temporal
coalescing reduces the number of postings in an index list by
merging such a sequence of postings that have almost equal
payloads, while keeping the maximal error bounded. This
idea is illustrated in Figure 1, which plots non-coalesced and
coalesced scores of postings belonging to a single document.
Approximate temporal coalescing is greatly effective given
such fluctuating payloads and reduces the number of
postings from 9 to 3 in the example. The notion of temporal
coalescing was originally introduced in temporal database
research by Böhlen et al. [6], where the simpler problem of
coalescing only equal information was considered.
We next formally state the problem dealt with in
approximate temporal coalescing, and discuss the computation of
optimal and approximate solutions. Note that the technique
is applied to each index list separately, so that the following
explanations assume a fixed term v and index list Lv.
As an input we are given a sequence of temporally
adjacent postings
I = \langle ( d, p_i, [t_i, t_{i+1}) ), \ldots, ( d, p_{n-1}, [t_{n-1}, t_n) ) \rangle .
Each sequence represents a contiguous time period during
which the term was present in a single document d. If a term
disappears from d but reappears later, we obtain multiple
input sequences that are dealt with separately. We seek to
generate the minimal-length output sequence of postings
O = \langle ( d, p_j, [t_j, t_{j+1}) ), \ldots, ( d, p_{m-1}, [t_{m-1}, t_m) ) \rangle ,
that adheres to the following constraints: First, O and I must cover the same time-range, i.e., t_i = t_j and t_n = t_m. Second, when coalescing a subsequence of postings of the input into a single posting of the output, we want the approximation error to be below a threshold ε. In other words, if (d, p_i, [t_i, t_{i+1})) and (d, p_j, [t_j, t_{j+1})) are postings of I and O respectively, then the following must hold for a chosen error function and a threshold ε:
t_j ≤ t_i ∧ t_{i+1} ≤ t_{j+1} ⇒ error(p_i, p_j) ≤ ε .
In this paper, as the error function we employ the relative error between the payloads (i.e., tf-scores) of a document in I and O, defined as
err_{rel}(p_i, p_j) = |p_i - p_j| / |p_i| .
Finding an optimal output sequence of postings can be
cast into finding a piecewise-constant representation for the
points (ti, pi) that uses a minimal number of segments while
retaining the above approximation guarantee. Similar
problems occur in time-series segmentation [21, 30] and histogram
construction [19, 20]. Typically dynamic programming is applied to obtain an optimal solution in O(n^2 m*) time [20, 30], with m* being the number of segments in an optimal sequence. In our setting, as a key difference, only a guarantee on the local error is retained - in contrast to a guarantee on the global error in the aforementioned settings. Exploiting this fact, an optimal solution is computable by means of induction [24] in O(n^2) time. Details of the optimal algorithm are omitted here but can be found in the
accompanying technical report [5].
The quadratic complexity of the optimal algorithm makes
it inappropriate for the large datasets encountered in this
work. As an alternative, we introduce a linear-time
approximate algorithm that is based on the sliding-window
algorithm given in [21]. This algorithm produces nearly-optimal
output sequences that retain the bound on the relative error,
but possibly require a few more segments than an optimal solution.
Algorithm 1 Temporal Coalescing (Approximate)
1: Input: I = \langle (d, p_i, [t_i, t_{i+1})), \ldots \rangle ; Output: O = \emptyset
2: p_min = p_i ; p_max = p_i ; p = p_i ; t_b = t_i ; t_e = t_{i+1}
3: for (d, p_j, [t_j, t_{j+1})) \in I do
4:   p'_min = min(p_min, p_j) ; p'_max = max(p_max, p_j)
5:   p' = optrep(p'_min, p'_max)
6:   if err_{rel}(p'_min, p') \le ε \wedge err_{rel}(p'_max, p') \le ε then
7:     p_min = p'_min ; p_max = p'_max ; p = p' ; t_e = t_{j+1}
8:   else
9:     O = O \cup { (d, p, [t_b, t_e)) }
10:    p_min = p_j ; p_max = p_j ; p = p_j ; t_b = t_j ; t_e = t_{j+1}
11:  end if
12: end for
13: O = O \cup { (d, p, [t_b, t_e)) }
Algorithm 1 makes one pass over the input sequence I.
While doing so, it coalesces sequences of postings having
maximal length. The optimal representative for a sequence
of postings depends only on their minimal and maximal
payload (pmin and pmax) and can be looked up using optrep in
O(1) (see [16] for details). When reading the next
posting, the algorithm tries to add it to the current sequence of
postings. It computes the hypothetical new representative
p and checks whether it would retain the approximation
guarantee. If this test fails, a coalesced posting bearing the
old representative is added to the output sequence O and,
following that, the bookkeeping is reinitialized. The time
complexity of the algorithm is in O(n).
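To make the procedure concrete, the following Python sketch implements the sliding-window coalescing under a relative-error bound ε. It is our own illustration, not the paper's code; in particular, the harmonic-mean form of optrep is an assumed standard choice of optimal representative for relative error (cf. the lookup referenced from [16]).

```python
def optrep(pmin, pmax):
    # Assumed representative minimizing the maximum relative error over [pmin, pmax].
    return 2.0 * pmin * pmax / (pmin + pmax)

def rel_err(p, r):
    return abs(p - r) / abs(p)

def coalesce(postings, eps):
    """postings: list of (payload, t_start, t_end) for one term in one document,
    sorted by time; returns coalesced postings with relative error <= eps."""
    out = []
    p0, tb, te = postings[0]
    pmin = pmax = rep = p0
    for p, ts, t_next in postings[1:]:
        new_min, new_max = min(pmin, p), max(pmax, p)
        cand = optrep(new_min, new_max)
        if rel_err(new_min, cand) <= eps and rel_err(new_max, cand) <= eps:
            pmin, pmax, rep, te = new_min, new_max, cand, t_next
        else:
            out.append((rep, tb, te))        # emit the coalesced posting
            pmin = pmax = rep = p            # restart the current segment
            tb, te = ts, t_next
    out.append((rep, tb, te))
    return out

# Example: nine versions with fluctuating tf-scores collapse to three postings.
versions = [(0.10, 0, 1), (0.11, 1, 2), (0.10, 2, 3), (0.30, 3, 4), (0.31, 4, 5),
            (0.29, 5, 6), (0.10, 6, 7), (0.11, 7, 8), (0.10, 8, 9)]
print(coalesce(versions, eps=0.1))
```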
Note that, since we make no assumptions about the sort
order of index lists, temporal-coalescing algorithms have an
additional preprocessing cost in O(|Lv| log |Lv|) for sorting
the index list and chopping it up into subsequences for each
document.
6. SUBLIST MATERIALIZATION
Efficiency of processing a query q^t on our time-travel
inverted index is influenced adversely by the wasted I/O due
to read but skipped postings. Temporal coalescing
implicitly addresses this problem by reducing the overall index list
size, but still a significant overhead remains. In this section,
we tackle this problem by proposing the idea of materializing
sublists each of which corresponds to a contiguous
subinterval of time spanned by the full index. Each of these sublists
contains all coalesced postings that overlap with the
corresponding time interval of the sublist. Note that all those
postings whose validity time-interval spans across the
temporal boundaries of several sublists are replicated in each of
the spanned sublists. Thus, in order to process the query q^t
it is sufficient to scan any materialized sublist whose
time-interval contains t.
Figure 2: Sublist Materialization (index list with postings 1, . . . , 10 from documents d1, d2, d3 over the time boundaries t1, . . . , t10)
We illustrate the idea of sublist materialization using an
example shown in Figure 2. The index list Lv visualized in
the figure contains a total of 10 postings from three
documents d1, d2, and d3. For ease of description, we have
numbered boundaries of validity time-intervals, in increasing
time-order, as t1, . . . , t10 and numbered the postings
themselves as 1, . . . , 10. Now, consider the processing of a
query q^t with t ∈ [t1, t2) using this inverted list. Although only
three postings (postings 1, 5 and 8) are valid at time t, the
whole inverted list has to be read in the worst case. Suppose
that we split the time axis of the list at time t2, forming two
sublists with postings {1, 5, 8} and {2, 3, 4, 5, 6, 7, 8, 9,
10} respectively. Then, we can process the above query with
optimal cost by reading only those postings that existed at
this t.
At a first glance, it may seem counterintuitive to reduce
index size in the first step (using temporal coalescing), and
then to increase it again using the sublist materialization
techniques presented in this section. However, we reiterate
that our main objective is to improve the efficiency of
processing queries, not to reduce the index size alone. The use
of temporal coalescing improves the performance by
reducing the index size, while the sublist materialization improves
performance by judiciously replicating entries. Further, the
two techniques can be applied separately and are
independent. If applied in conjunction, though, there is a synergetic
effect - sublists that are materialized from a temporally
coalesced index are generally smaller.
We employ the notation Lv : [ti, tj) to refer to the
materialized sublist for the time interval [ti, tj), that is formally
defined as,
Lv : [ti, tj) = {( d, p, [tb, te) ) ∈ Lv | tb < tj ∧ te > ti} .
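As a minimal illustration (our own helper, assuming postings are triples of document id, payload, and validity interval), the defining condition above translates directly into a filter:

```python
def materialize_sublist(Lv, ti, tj):
    # Keep every coalesced posting whose validity interval [tb, te)
    # overlaps the sublist interval [ti, tj); postings spanning a
    # boundary are thereby replicated into each overlapping sublist.
    return [(d, p, (tb, te)) for (d, p, (tb, te)) in Lv if tb < tj and te > ti]

# A query q^t with ti <= t < tj can then be answered from this sublist alone.
```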
To aid the presentation in the rest of the paper, we first
provide some definitions. Let T = t1 . . . tn be the sorted
sequence of all unique time-interval boundaries of an
inverted list Lv. Then we define
E = { [ti, ti+1) | 1 ≤ i < n}
to be the set of elementary time intervals. We refer to the
set of time intervals for which sublists are materialized as
M ⊆ { [ti, tj) | 1 ≤ i < j ≤ n } ,
and demand
∀ t ∈ [t1, tn) ∃ m ∈ M : t ∈ m ,
i.e., the time intervals in M must completely cover the
time interval [t1, tn), so that time-travel queries q^t for all
t ∈ [t1, tn) can be processed. We also assume that
intervals in M are disjoint. We can make this assumption
without ruling out any optimal solution with regard to space
or performance defined below. The space required for the
materialization of sublists in a set M is defined as
S(M) = \sum_{m \in M} |L_v : m| ,
i.e., the total length of all lists in M. Given a set M, we let
π( [ti, ti+1) ) = [tj, tk) ∈ M : [ti, ti+1) ⊆ [tj, tk)
denote the time interval that is used to process queries q^t
with t ∈ [ti, ti+1). The performance of processing queries
q^t for t ∈ [ti, ti+1) inversely depends on its processing cost
PC( [ti, ti+1) ) = |Lv : π( [ti, ti+1) )| ,
which is assumed to be proportional to the length of the
list Lv : π( [ti, ti+1) ). Thus, in order to optimize the
performance of processing queries we minimize their processing
costs.
6.1 Performance/Space-Optimal Approaches
One strategy to eliminate the problem of skipped
postings is to eagerly materialize sublists for all elementary time
intervals, i.e., to choose M = E. In doing so, for every
query q^t only postings valid at time t are read and thus the
best possible performance is achieved. Therefore, we will
refer to this approach as Popt in the remainder. The initial
approach described above that keeps only the full list Lv
and thus picks M = { [t1, tn) } is referred to as Sopt in the
remainder. This approach requires minimal space, since it
keeps each posting exactly once.
Popt and Sopt are extremes: the former provides the best
possible performance but is not space-efficient, the latter
requires minimal space but does not provide good
performance. The two approaches presented in the rest of this
section allow mutually trading off space and performance
and can thus be thought of as means to explore the
configuration spectrum between the Popt and the Sopt approach.
6.2 Performance-Guarantee Approach
The Popt approach clearly wastes a lot of space
materializing many nearly-identical sublists. In the example
illustrated in Figure 2 materialized sublists for [t1, t2) and
[t2, t3) differ only by one posting. If the sublist for [t1, t3)
was materialized instead, one could save significant space
while incurring only an overhead of one skipped posting for
all t ∈ [t1, t3). The technique presented next is driven by
the idea that significant space savings over Popt are
achievable, if an upper-bounded loss on the performance can be
tolerated, or to put it differently, if a performance
guarantee relative to the optimum is to be retained. In detail,
the technique, which we refer to as PG (Performance
Guarantee) in the remainder, finds a set M that has minimal
required space, but guarantees for any elementary time
interval [ti, ti+1) (and thus for any query q^t with t ∈ [ti, ti+1))
that performance is worse than optimal by at most a factor
of γ ≥ 1. Formally, this problem can be stated as
\operatorname{argmin}_{M} \; S(M) \quad \text{s.t.} \quad
\forall [t_i, t_{i+1}) \in E : PC([t_i, t_{i+1})) \le \gamma \cdot |L_v : [t_i, t_{i+1})| .
An optimal solution to the problem can be computed by
means of induction using the recurrence
C([t_1, t_{k+1})) = \min \{\, C([t_1, t_j)) + |L_v : [t_j, t_{k+1})| \;\mid\; 1 \le j \le k \wedge \text{condition} \,\} ,
where C([t_1, t_j)) is the optimal cost (i.e., the space
required) for the prefix subproblem
\{ [t_i, t_{i+1}) \in E \mid [t_i, t_{i+1}) \subseteq [t_1, t_j) \}
and "condition" stands for
\forall [t_i, t_{i+1}) \in E : [t_i, t_{i+1}) \subseteq [t_j, t_{k+1}) \Rightarrow |L_v : [t_j, t_{k+1})| \le \gamma \cdot |L_v : [t_i, t_{i+1})| .
Intuitively, the recurrence states that an optimal solution
for [t1, tk+1) can be combined from an optimal solution to a
prefix subproblem C( [t1, tj) ) and a time interval [tj, tk+1)
that can be materialized without violating the performance
guarantee.
Pseudocode of the algorithm is omitted for space reasons,
but can be found in the accompanying technical report [5].
The time complexity of the algorithm is in O(n^2) - for each
prefix subproblem the above recurrence must be evaluated,
which is possible in linear time if list sizes |L_v : [t_i, t_j)| are
precomputed. The space complexity is in O(n^2) - the cost
of keeping the precomputed sublist lengths and memoizing
optimal solutions to prefix subproblems.
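Since the pseudocode is deferred to the technical report [5], the following Python sketch is our own reconstruction of the recurrence above, with illustrative names; for clarity the guarantee check is done naively (adding a factor of n), whereas the paper's O(n^2) bound assumes the required minima are precomputed.

```python
def pg_materialize(size, n, gamma):
    """size(i, j): number of postings in sublist Lv:[t_i, t_j), 1 <= i < j <= n
    (assumed to be precomputed); returns (total space, chosen boundary pairs)."""
    INF = float("inf")
    cost = [INF] * (n + 1)   # cost[k]: minimal space covering elementary intervals up to t_k
    back = [None] * (n + 1)
    cost[1] = 0
    for k in range(2, n + 1):            # right boundary t_k of the last materialized interval
        for j in range(1, k):            # candidate left boundary t_j
            s = size(j, k)
            # Performance guarantee: no elementary interval inside [t_j, t_k) may be
            # served by a list more than gamma times its own (optimal) size.
            if all(s <= gamma * size(i, i + 1) for i in range(j, k)):
                if cost[j] + s < cost[k]:
                    cost[k] = cost[j] + s
                    back[k] = j
    intervals, k = [], n                 # reconstruct the chosen time intervals
    while k > 1:
        j = back[k]
        intervals.append((j, k))
        k = j
    return cost[n], list(reversed(intervals))
```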
6.3 Space-Bound Approach
So far we considered the problem of materializing sublists
that give a guarantee on performance while requiring
minimal space. In many situations, though, the storage space is
at a premium and the aim would be to materialize a set of
sublists that optimizes expected performance while not
exceeding a given space limit. The technique presented next,
which is named SB, tackles this very problem. The space
restriction is modeled by means of a user-specified
parameter κ ≥ 1 that limits the maximum allowed blowup in index
size from the space-optimal solution provided by Sopt. The
SB technique seeks to find a set M that adheres to this
space limit but minimizes the expected processing cost (and
thus optimizes the expected performance). In the definition
of the expected processing cost, P( [ti, ti+1) ) denotes the
probability of a query time-point being in [ti, ti+1).
Formally, this space-bound sublist-materialization problem can
be stated as
\operatorname{argmin}_{M} \; \sum_{[t_i, t_{i+1}) \in E} P([t_i, t_{i+1})) \cdot PC([t_i, t_{i+1})) \quad \text{s.t.} \quad
\sum_{m \in M} |L_v : m| \le \kappa \, |L_v| .
The problem can be solved by using dynamic
programming over an increasing number of time intervals: At each
time interval in E the algorithm decides whether to start a
new materialization time-interval, using the known best
materialization decision from the previous time intervals, and
keeping track of the required space consumption for
materialization. A detailed description of the algorithm is omitted
here, but can be found in the accompanying technical
report [5]. Unfortunately, the algorithm has time complexity
in O(n^3 |L_v|) and its space complexity is in O(n^2 |L_v|), which
is not practical for large data sets.
We obtain an approximate solution to the problem
using simulated annealing [22, 23]. Simulated annealing takes
a fixed number R of rounds to explore the solution space.
In each round a random successor of the current solution
is looked at. If the successor does not adhere to the space
limit, it is always rejected (i.e., the current solution is kept).
A successor adhering to the space limit is always accepted if
it achieves lower expected processing cost than the current
solution. If it achieves higher expected processing cost, it is
randomly accepted with probability e^{-\Delta / r}, where \Delta is the
increase in expected processing cost and R \ge r \ge 1 denotes
the number of remaining rounds. In addition, throughout
all rounds, the method keeps track of the best solution seen
so far. The solution space for the problem at hand can be
efficiently explored. As we argued above, we solely have
to look at sets M that completely cover the time interval
[t1, tn) and do not contain overlapping time intervals. We
represent such a set M as an array of n boolean variables
b1 . . . bn that convey the boundaries of time intervals in the
set. Note that b1 and bn are always set to true. Initially,
all n − 2 intermediate variables assume false, which
corresponds to the set M = { [t1, tn) }. A random successor
can now be easily generated by switching the value of one
of the n − 2 intermediate variables. The time complexity of
the method is in O(n^2) - the expected processing cost must
be computed in each round. Its space complexity is in O(n)
- for keeping the n boolean variables.
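A compact sketch of this search is given below. It is our own illustration; the acceptance rule follows the description above, and space(flags) as well as expected_cost(flags) are assumed helper functions that evaluate a candidate boundary configuration.

```python
import math
import random

def sb_anneal(n, space, expected_cost, space_limit, rounds=50000):
    """Boundary flags b[1..n]; b[i]=True means t_i starts a new sublist.
    space_limit corresponds to kappa * |Lv| (an absolute budget)."""
    cur = [False] * (n + 1)
    cur[1] = cur[n] = True                      # t_1 and t_n are always boundaries
    best, best_cost = list(cur), expected_cost(cur)
    cur_cost = best_cost
    for r in range(rounds, 0, -1):              # r = number of remaining rounds
        cand = list(cur)
        cand[random.randint(2, n - 1)] ^= True  # flip one intermediate boundary
        if space(cand) > space_limit:           # violates the space bound: reject
            continue
        cand_cost = expected_cost(cand)
        delta = cand_cost - cur_cost
        if delta <= 0 or random.random() < math.exp(-delta / r):
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:            # keep track of the best solution seen
                best, best_cost = list(cur), cur_cost
    return best, best_cost
```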
As a side remark note that for κ = 1.0 the SB method
does not necessarily produce the solution that is obtained
from Sopt, but may produce a solution that requires the
same amount of space while achieving better expected
performance.
7. EXPERIMENTAL EVALUATION
We conducted a comprehensive series of experiments on
two real-world datasets to evaluate the techniques proposed
in this paper.
7.1 Setup and Datasets
The techniques described in this paper were implemented
in a prototype system using Java JDK 1.5. All
experiments described below were run on a single SUN V40z
machine having four AMD Opteron CPUs, 16GB RAM, a large
network-attached RAID-5 disk array, and running Microsoft
Windows Server 2003. All data and indexes are kept in an
Oracle 10g database that runs on the same machine. For
our experiments we used two different datasets.
The English Wikipedia revision history (referred to as
WIKI in the remainder) is available for free download as
a single XML file. This large dataset, totaling 0.7 TBytes,
contains the full editing history of the English Wikipedia
from January 2001 to December 2005 (the time of our
download). We indexed all encyclopedia articles excluding
versions that were marked as the result of a minor edit (e.g.,
the correction of spelling errors etc.). This yielded a total of
892,255 documents with 13,976,915 versions having a mean
(µ) of 15.67 versions per document at standard deviation (σ)
of 59.18. We built a time-travel query workload using the
query log temporarily made available recently by AOL
Research as follows: we first extracted the 300 most frequent
keyword queries that yielded a result click on a Wikipedia
article (e.g., french revolution, hurricane season 2005,
da vinci code). The extracted queries contained
a total of 422 distinct terms. For each extracted query, we
randomly picked a time point for each month covered by
the dataset. This resulted in a total of 18,000 (= 300 × 60)
time-travel queries.
The second dataset used in our experiments was based
on a subset of the European Archive [13], containing weekly
crawls of 11 .gov.uk websites throughout the years 2004 and
2005, amounting to nearly 2 TBytes of raw data. We filtered
out documents not belonging to MIME-types text/plain
and text/html, to obtain a dataset that totals 0.4 TBytes
and which we refer to as UKGOV in the rest of the paper. This
included a total of 502,617 documents with 8,687,108
versions (µ = 17.28 and σ = 13.79). We built a corresponding
query workload as mentioned before, this time choosing
keyword queries that led to a site in the .gov.uk domain (e.g.,
minimum wage, inheritance tax, citizenship ceremony
dates etc.), and randomly sampling a time point for every
month within the two year period spanned by the dataset.
Thus, we obtained a total of 7,200 (= 300 × 24) time-travel
queries for the UKGOV dataset. In total 522 terms appear
in the extracted queries.
The collection statistics (i.e., N and avdl) and term
statistics (i.e., DF) were computed at monthly granularity for
both datasets.
7.2 Impact of Temporal Coalescing
Our first set of experiments is aimed at evaluating the
approximate temporal coalescing technique, described in
Section 5, in terms of index-size reduction and its effect on the
result quality. For both the WIKI and UKGOV datasets, we
compare temporally coalesced indexes for different values of
the error threshold ε, computed using Algorithm 1, with the
non-coalesced index as a baseline.
ε      WIKI # Postings    Ratio      UKGOV # Postings   Ratio
-      8,647,996,223      100.00%    7,888,560,482      100.00%
0.00   7,769,776,831      89.84%     2,926,731,708      37.10%
0.01   1,616,014,825      18.69%     744,438,831        9.44%
0.05   556,204,068        6.43%      259,947,199        3.30%
0.10   379,962,802        4.39%      187,387,342        2.38%
0.25   252,581,230        2.92%      158,107,198        2.00%
0.50   203,269,464        2.35%      155,434,617        1.97%
Table 1: Index sizes for the non-coalesced index (-) and
coalesced indexes for different values of ε
Table 1 summarizes the index sizes measured as the total
number of postings. As these results demonstrate,
approximate temporal coalescing is highly effective in reducing
index size. Even a small threshold value, e.g. ε = 0.01, has a
considerable effect by reducing the index size almost by an
order of magnitude. Note that on the UKGOV dataset, even
accurate coalescing (ε = 0) manages to reduce the index size
to less than 38% of the original size. Index size continues to
reduce on both datasets, as we increase the value of ε.
How does the reduction in index size affect the query
results? In order to evaluate this aspect, we compared the
top-k results computed using a coalesced index against the
ground-truth result obtained from the original index, for
different cutoff levels k. Let Gk and Ck be the top-k documents
from the ground-truth result and from the coalesced index
respectively. We used the following two measures for
comparison: (i) Relative Recall at cutoff level k (RR@k), that
measures the overlap between Gk and Ck, which ranges in
[0, 1] and is defined as
RR@k = |Gk ∩ Ck|/k .
(ii) Kendall's τ (see [7, 14] for a detailed definition) at
cutoff level k (KT@k), measuring the agreement between the two
results in the relative order of items in Gk ∩ Ck, with value
1 (or -1) indicating total agreement (or disagreement).
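For concreteness, both measures can be computed in a few lines; the helpers below are our own, with Kendall's τ evaluated only over the common documents Gk ∩ Ck as described above.

```python
from itertools import combinations

def relative_recall(G, C, k):
    # Overlap between the top-k ground-truth and coalesced-index results.
    return len(set(G[:k]) & set(C[:k])) / k

def kendall_tau(G, C, k):
    common = set(G[:k]) & set(C[:k])
    gpos = {d: i for i, d in enumerate(G[:k])}
    cpos = {d: i for i, d in enumerate(C[:k])}
    pairs = list(combinations(common, 2))
    if not pairs:
        return 1.0
    # +1 for a concordantly ordered pair, -1 for a discordant one.
    agree = sum(1 if (gpos[a] - gpos[b]) * (cpos[a] - cpos[b]) > 0 else -1
                for a, b in pairs)
    return agree / len(pairs)
```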
Figure 3 plots, for cutoff levels 10 and 100, the mean of
RR@k and KT@k along with 5% and 95% percentiles, for
different values of the threshold ε, starting from 0.01. Note
that for ε = 0, results coincide with those obtained by the
original index, and hence are omitted from the graph.
It is reassuring to see from these results that approximate
temporal coalescing induces minimal disruption to the query
results, since RR@k and KT@k are within reasonable limits.
Figure 3: Relative recall and Kendall's τ observed on coalesced indexes for different values of ε; panels (a) and (b) show cutoff levels 10 and 100, respectively, for WIKI and UKGOV.
For ε = 0.01, the smallest value of ε in our experiments,
RR@100 for WIKI is 0.98, indicating that the results are
almost indistinguishable from those obtained through the
original index. Even the relative order of these common
results is quite high, as the mean KT@100 is close to 0.95.
For the extreme value of ε = 0.5, which results in an index
size of just 2.35% of the original, the RR@100 and KT@100
are about 0.8 and 0.6 respectively. On the relatively less
dynamic UKGOV dataset (as can be seen from the σ values
above), results were even better, with high values of RR and
KT seen throughout the spectrum of ε values for both cutoff
values.
7.3 Sublist Materialization
We now turn our attention towards evaluating the
sublist materialization techniques introduced in Section 6. For
both datasets, we started with the coalesced index produced
by a moderate threshold setting of ε = 0.10. In order to
reduce the computational effort, boundaries of elementary
time intervals were rounded to day granularity before
computing the sublist materializations. However, note that the
postings in the materialized sublists still retain their
original timestamps. For a comparative evaluation of the four
approaches - Popt, Sopt, PG, and SB - we measure space
and performance as follows. The required space S(M), as
defined earlier, is equal to the total number of postings in
the materialized sublists. To assess performance we
compute the expected processing cost (EPC) for all terms in
the respective query workload assuming a uniform
probability distribution among query time-points. We report the
mean EPC, as well as the 5%- and 95%-percentile. In other
words, the mean EPC reflects the expected length of the
index list (in terms of index postings) that needs to be scanned
for a random time point and a random term from the query
workload.
The Sopt and Popt approaches are, by their definition,
parameter-free. For the PG approach, we varied its
parameter γ, which limits the maximal performance degradation,
between 1.0 and 3.0. Analogously, for the SB approach
the parameter κ, as an upper-bound on the allowed space
blowup, was varied between 1.0 and 3.0. Solutions for the
SB approach were obtained running simulated annealing for
R = 50,000 rounds.
Table 2 lists the obtained space and performance figures.
Note that EPC values are smaller on WIKI than on
UKGOV, since terms in the query workload employed for WIKI
are relatively rarer in the corpus. Based on the depicted
results, we make the following key observations. i) As
expected, Popt achieves optimal performance at the cost of an
enormous space consumption. Sopt, to the contrary, while
consuming an optimal amount of space, provides only poor
expected processing cost. The PG and SB methods, for
different values of their respective parameter, produce
solutions whose space and performance lie in between the
extremes that Popt and Sopt represent. ii) For the PG method
we see that for an acceptable performance degradation of
only 10% (i.e., γ = 1.10) the required space drops by more
than one order of magnitude in comparison to Popt on both
datasets. iii) The SB approach achieves close-to-optimal
performance on both datasets, if allowed to consume at most
three times the optimal amount of space (i.e., κ = 3.0),
which on our datasets still corresponds to a space reduction
over Popt by more than one order of magnitude.
We also measured wall-clock times on a sample of the
queries with results indicating improvements in execution
time by up to a factor of 12.
8. CONCLUSIONS
In this work we have developed an efficient solution for
time-travel text search over temporally versioned document
collections. Experiments on two real-world datasets showed
that a combination of the proposed techniques can reduce
index size by up to an order of magnitude while achieving
nearly optimal performance and highly accurate results.
The present work opens up many interesting questions
for future research, e.g.: How can we even further improve
performance by applying (and possibly extending) encoding,
compression, and skipping techniques [35]? How can we
extend the approach for queries q^{[t_b, t_e]} specifying a time
interval instead of a time point? How can the described
time-travel text search functionality enable or speed up text
mining along the time axis (e.g., tracking sentiment changes
in customer opinions)?
9. ACKNOWLEDGMENTS
We are grateful to the anonymous reviewers for their
valuable comments - in particular to the reviewer who pointed
out the opportunity for algorithmic improvements in
Section 5 and Section 6.2.
10. REFERENCES
[1] V. N. Anh and A. Moffat. Pruned Query Evaluation
Using Pre-Computed Impacts. In SIGIR, 2006.
[2] V. N. Anh and A. Moffat. Pruning Strategies for
Mixed-Mode Querying. In CIKM, 2006.
                 WIKI                                        UKGOV
                 S(M)            EPC: 5% / Mean / 95%        S(M)            EPC: 5% / Mean / 95%
Popt 54,821,634,137 11.22 3,132.29 15,658.42 21,372,607,052 39.93 15,593.60 66,938.86
Sopt 379,962,802 114.05 30,186.52 149,820.1 187,387,342 63.15 22,852.67 102,923.85
PG γ = 1.10 3,814,444,654 11.30 3,306.71 16,512.88 1,155,833,516 40.66 16,105.61 71,134.99
PG γ = 1.25 1,827,163,576 12.37 3,629.05 18,120.86 649,884,260 43.62 17,059.47 75,749.00
PG γ = 1.50 1,121,661,751 13.96 4,128.03 20,558.60 436,578,665 46.68 18,379.69 78,115.89
PG γ = 1.75 878,959,582 15.48 4,560.99 22,476.77 345,422,898 51.26 19,150.06 82,028.48
PG γ = 2.00 744,381,287 16.79 4,992.53 24,637.62 306,944,062 51.48 19,499.78 87,136.31
PG γ = 2.50 614,258,576 18.28 5,801.66 28,849.02 269,178,107 53.36 20,279.62 87,897.95
PG γ = 3.00 552,796,130 21.04 6,485.44 32,361.93 247,666,812 55.95 20,800.35 89,591.94
SB κ = 1.10 412,383,387 38.97 12,723.68 60,350.60 194,287,671 63.09 22,574.54 102,208.58
SB κ = 1.25 467,537,173 26.87 9,011.81 45,119.08 204,454,800 57.42 22,036.39 95,337.33
SB κ = 1.50 557,341,140 19.84 6,699.36 32,810.85 246,323,383 53.24 20,566.68 91,691.38
SB κ = 1.75 647,187,522 16.59 5,769.40 28,272.89 296,345,976 49.56 19,065.99 84,377.44
SB κ = 2.00 737,819,354 15.86 5,358.99 27,112.01 336,445,773 47.58 18,569.08 81,386.02
SB κ = 2.50 916,308,766 13.99 4,639.77 23,037.59 427,122,038 44.89 17,153.94 74,449.28
SB κ = 3.00 1,094,973,140 13.01 4,343.72 22,708.37 511,470,192 42.15 16,772.65 72,307.43
Table 2: Required space and expected processing cost (in # postings) observed on coalesced indexes (ε = 0.10)
[3] P. G. Anick and R. A. Flynn. Versioning a Full-Text
Information Retrieval System. In SIGIR, 1992.
[4] R. A. Baeza-Yates and B. Ribeiro-Neto. Modern
Information Retrieval. Addison-Wesley, 1999.
[5] K. Berberich, S. Bedathur, T. Neumann, and
G. Weikum. A Time Machine for Text search.
Technical Report MPI-I-2007-5-002, Max-Planck
Institute for Informatics, 2007.
[6] M. H. Böhlen, R. T. Snodgrass, and M. D. Soo.
Coalescing in Temporal Databases. In VLDB, 1996.
[7] P. Boldi, M. Santini, and S. Vigna. Do Your Worst to
Make the Best: Paradoxical Effects in PageRank
Incremental Computations. In WAW, 2004.
[8] A. Z. Broder, N. Eiron, M. Fontoura, M. Herscovici,
R. Lempel, J. McPherson, R. Qi, and E. J. Shekita.
Indexing Shared Content in Information Retrieval
Systems. In EDBT, 2006.
[9] C. Buckley and A. F. Lewit. Optimization of Inverted
Vector Searches. In SIGIR, 1985.
[10] M. Burrows and A. L. Hisgen. Method and Apparatus
for Generating and Searching Range-Based Index of
Word Locations. U.S. Patent 5,915,251, 1999.
[11] S. B¨uttcher and C. L. A. Clarke. A Document-Centric
Approach to Static Index Pruning in Text Retrieval
Systems. In CIKM, 2006.
[12] D. Carmel, D. Cohen, R. Fagin, E. Farchi,
M. Herscovici, Y. S. Maarek, and A. Soffer. Static
Index Pruning for Information Retrieval Systems. In
SIGIR, 2001.
[13] http://www.europarchive.org.
[14] R. Fagin, R. Kumar, and D. Sivakumar. Comparing
Top k Lists. SIAM J. Discrete Math., 17(1):134-160,
2003.
[15] R. Fagin, A. Lotem, and M. Naor. Optimal
Aggregation Algorithms for Middleware. J. Comput.
Syst. Sci., 66(4):614-656, 2003.
[16] S. Guha, K. Shim, and J. Woo. REHIST: Relative
Error Histogram Construction Algorithms. In VLDB,
2004.
[17] M. Hersovici, R. Lempel, and S. Yogev. Efficient
Indexing of Versioned Document Sequences. In ECIR,
2007.
[18] http://www.archive.org.
[19] Y. E. Ioannidis and V. Poosala. Balancing Histogram
Optimality and Practicality for Query Result Size
Estimation. In SIGMOD, 1995.
[20] H. V. Jagadish, N. Koudas, S. Muthukrishnan,
V. Poosala, K. C. Sevcik, and T. Suel. Optimal
Histograms with Quality Guarantees. In VLDB, 1998.
[21] E. J. Keogh, S. Chu, D. Hart, and M. J. Pazzani. An
Online Algorithm for Segmenting Time Series. In
ICDM, 2001.
[22] S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi.
Optimization by Simulated Annealing. Science,
220(4598):671-680, 1983.
[23] J. Kleinberg and E. Tardos. Algorithm Design.
Addison-Wesley, 2005.
[24] U. Manber. Introduction to Algorithms: A Creative
Approach. Addison-Wesley, 1989.
[25] K. Nørvåg and A. O. N. Nybø. DyST: Dynamic and
Scalable Temporal Text Indexing. In TIME, 2006.
[26] J. M. Ponte and W. B. Croft. A Language Modeling
Approach to Information Retrieval. In SIGIR, 1998.
[27] S. E. Robertson and S. Walker. Okapi/Keenbow at
TREC-8. In TREC, 1999.
[28] B. Salzberg and V. J. Tsotras. Comparison of Access
Methods for Time-Evolving Data. ACM Comput.
Surv., 31(2):158-221, 1999.
[29] M. Stack. Full Text Search of Web Archive
Collections. In IWAW, 2006.
[30] E. Terzi and P. Tsaparas. Efficient Algorithms for
Sequence Segmentation. In SIAM-DM, 2006.
[31] M. Theobald, G. Weikum, and R. Schenkel. Top-k
Query Evaluation with Probabilistic Guarantees. In
VLDB, 2004.
[32] http://www.wikipedia.org.
[33] I. H. Witten, A. Moffat, and T. C. Bell. Managing
Gigabytes: Compressing and Indexing Documents and
Images. Morgan Kaufmann publishers Inc., 1999.
[34] J. Zhang and T. Suel. Efficient Search in Large
Textual Collections with Redundancy. In WWW,
2007.
[35] J. Zobel and A. Moffat. Inverted Files for Text Search
Engines. ACM Comput. Surv., 38(2):6, 2006.
Query Performance Prediction in Web Search Environments

Current prediction techniques, which are generally designed for content-based queries and are typically evaluated on relatively homogeneous test collections of small sizes, face serious challenges in web search environments where collections are significantly more heterogeneous and different types of retrieval tasks exist. In this paper, we present three techniques to address these challenges. We focus on performance prediction for two types of queries in web search environments: content-based and Named-Page finding. Our evaluation is mainly performed on the GOV2 collection. In addition to evaluating our models for the two types of queries separately, we consider a more challenging and realistic situation that the two types of queries are mixed together without prior information on query types. To assist prediction under the mixed-query situation, a novel query classifier is adopted. Results show that our prediction of web query performance is substantially more accurate than the current state-of-the-art prediction techniques. Consequently, our paper provides a practical approach to performance prediction in real-world web settings.

1. INTRODUCTION
Query performance prediction has many applications in a variety
of information retrieval (IR) areas such as improving retrieval
consistency, query refinement, and distributed IR. The importance
of this problem has been recognized by IR researchers and a
number of new methods have been proposed for prediction
recently [1, 2, 17].
Most work on prediction has focused on the traditional ad-hoc
retrieval task where query performance is measured according to
topical relevance. These prediction models are evaluated on
TREC document collections which typically consist of no more
than one million relatively homogenous newswire articles. With
the popularity and influence of the Web, prediction techniques
that will work well for web-style queries are highly preferable.
However, web search environments pose significant challenges to
current prediction models that are mainly designed for traditional
TREC settings. Here we outline some of these challenges.
First, web collections, which are much larger than conventional
TREC collections, include a variety of documents that are
different in many aspects such as quality and style. Current
prediction techniques can be vulnerable to these characteristics of
web collections. For example, the reported prediction accuracy of
the ranking robustness technique and the clarity technique on the
GOV2 collection (a large web collection) is significantly worse
compared to the other TREC collections [1]. Similar prediction
accuracy on the GOV2 collection using another technique is
reported in [2], confirming the difficulty of predicting query
performance on a large web collection.
Furthermore, web search goes beyond the scope of the ad-hoc
retrieval task based on topical relevance. For example, the
Named-Page (NP) finding task, which is a navigational task, is
also popular in web retrieval. Query performance prediction for
the NP task is still necessary since NP retrieval performance is far
from perfect. In fact, according to the report on the NP task of the
2005 Terabyte Track [3], about 40% of the test queries perform
poorly (no correct answer in the first 10 search results) even in the
best run from the top group. To our knowledge, little research has
explicitly addressed the problem of NP-query performance
prediction. Current prediction models devised for content-based
queries will be less effective for NP queries considering the
fundamental differences between the two.
Third, in real-world web search environments, user queries are
usually a mixture of different types and prior knowledge about the
type of each query is generally unavailable. The mixed-query
situation raises new problems for query performance prediction.
For instance, we may need to incorporate a query classifier into
prediction models. Despite these problems, the ability to handle
this situation is a crucial step towards turning query performance
prediction from an interesting research topic into a practical tool
for web retrieval.
In this paper, we present three techniques to address the above
challenges that current prediction models face in Web search
environments. Our work focuses on query performance prediction
for the content-based (ad-hoc) retrieval task and the named-page
finding task in the context of web retrieval. Our first technique,
called weighted information gain (WIG), makes use of both single
term and term proximity features to estimate the quality of top
retrieved documents for prediction. We find that WIG offers
consistent prediction accuracy across various test collections and
query types. Moreover, we demonstrate that good prediction
accuracy can be achieved for the mixed-query situation by using
WIG with the help of a query type classifier. Query feedback and
first rank change, which are our second and third prediction
techniques, perform well for content-based queries and NP
queries respectively.
Our main contributions include: (1) considerably improved
prediction accuracy for web content-based queries over several
state-of-the-art techniques. (2) new techniques for successfully
predicting NP-query performance. (3) a practical and fully
automatic solution to predicting mixed-query performance. In
addition, one minor contribution is that we find that the
robustness score [1], which was originally proposed for
performance prediction, is helpful for query classification.
2. RELATED WORK
As we mentioned in the introduction, a number of prediction
techniques have been proposed recently that focus on
content-based queries in the topical relevance (ad-hoc) task. We know of
no published work that addresses other types of queries such as
NP queries, let alone a mixture of query types. Next we review
some representative models.
The major difficulty of performance prediction comes from the
fact that many factors have an impact on retrieval performance.
Each factor affects performance to a different degree and the
overall effect is hard to predict accurately. Therefore, it is not
surprising to notice that simple features, such as the frequency of
query terms in the collection [4] and the average IDF of query
terms [5], do not predict well. In fact, most of the successful
techniques are based on measuring some characteristics of the
retrieved document set to estimate topic difficulty. For example,
the clarity score [6] measures the coherence of a list of documents
by the KL-divergence between the query model and the collection
model. The robustness score [1] quantifies another property of a
ranked list: the robustness of the ranking in the presence of
uncertainty. Carmel et al. [2] found that the distance measured by
the Jensen-Shannon divergence between the retrieved document
set and the collection is significantly correlated to average
precision. Vinay et al.[7] proposed four measures to capture the
geometry of the top retrieved documents for prediction. The most
effective measure is the sensitivity to document perturbation, an
idea somewhat similar to the robustness score. Unfortunately,
their way of measuring the sensitivity does not perform equally
well for short queries and prediction accuracy drops considerably
when a state-of-the-art retrieval technique (like Okapi or a
language modeling approach) is adopted for retrieval instead of
the tf-idf weighting used in their paper [16].
The difficulties of applying these models in web search
environments have already been mentioned. In this paper, we
mainly adopt the clarity score and the robustness score as our
baselines. We experimentally show that the baselines, even after
being carefully tuned, are inadequate for the web environment.
One of our prediction models, WIG, is related to the Markov
random field (MRF) model for information retrieval [8]. The
MRF model directly models term dependence and is found to be
highly effective across a variety of test collections (particularly
web collections) and retrieval tasks. This model is used to
estimate the joint probability distribution over documents and
queries, an important part of WIG. The superiority of WIG over
other prediction techniques based on unigram features, which will
be demonstrated later in our paper, coincides with that of MRF
for retrieval. In other words, it is interesting to note that term
dependence, when being modeled appropriately, can be helpful
for both improving and predicting retrieval performance.
3. PREDICTION MODELS
3.1 Weighted Information Gain (WIG)
This section introduces a weighted information gain approach that
incorporates both single term and proximity features for
predicting performance for both content-based and Named-Page
(NP) finding queries.
Given a set of queries Q={Qs} (s=1,2,..N) which includes all
possible user queries and a set of documents D={Dt} (t=1,2…M),
we assume that each query-document pair (Qs,Dt) is manually
judged and will be put in a relevance list if Qs is found to be
relevant to Dt. The joint probability P(Qs,Dt) over queries Q and
documents D denotes the probability that pair (Qs,Dt) will be in
the relevance list. Such assumptions are similar to those used in
[8]. Assuming that the user issues query Qi ∈Q and the retrieval
results in response to Qi is a ranked list L of documents, we
calculate the amount of information contained in P(Qs,Dt) with
respect to Qi and L by Eq.1 which is a variant of entropy called
the weighted entropy[13]. The weights in Eq.1 are solely
determined by Qi and L.
H_{Q_i,L}(Q_s, D_t) = -\sum_{s,t} weight(Q_s, D_t) \, \log P(Q_s, D_t)    (1)
In this paper, we choose the weights as follows:
weight(Q_s, D_t) = \begin{cases} 1/K, & \text{if } s = i \text{ and } D_t \in T_K(L) \\ 0, & \text{otherwise} \end{cases}    (2)
where T_K(L) contains the top K documents in L.
=
The cutoff rank K is a parameter in our model that will be
discussed later. Accordingly, Eq.1 can be simplified as follows:
)3(),(log
1
),(
)(
, ∑∈
−=
LTD
titsLQ
Kt
i
DQP
K
DQH
Unfortunately, weighted entropy ),(, tsLQ DQH i
computed by Eq.3,
which represents the amount of information about how likely the
top ranked documents in L would be relevant to query Qi on
average, cannot be compared across different queries, making it
inappropriate for directly predicting query performance. To
mitigate this problem, we come up with a background distribution
P(Qs,C) over Q and D by imagining that every document in D is
replaced by the same special document C which represents
average language usage. In this paper, C is created by
concatenating every document in D. Roughly speaking, C is the
collection (the document set) {Dt} without document boundaries.
Similarly, weighted entropy ),(, CQH sLQi
calculated by Eq.3
represents the amount of information about how likely an average
document (represented by the whole collection) would be relevant
to query Qi.
Now we introduce our performance predictor WIG which is the
weighted information gain [13] computed as the difference
between ),(, tsLQ DQH i
and ),(, CQH sLQi
.Specifically, given query
Qi, collection C and ranked list L of documents, WIG is
calculated as follows:
)4(
),(
),(
log
1
),(
),(
log),(
),(),(),,(
)(,
,,
∑∑ ∈
==
−=
LTD i
ti
ts s
ts
ts
tsLQsLQi
Kt
ii
CQP
DQP
KCQP
DQP
DQweight
DQHCQHLCQWIG
WIG computed by Eq.4 measures the change in information about
the quality of retrieval (in response to query Qi) from an
imaginary state that only an average document is retrieved to a
posterior state that the actual search results are observed. We
hypothesize that WIG is positively correlated with retrieval
effectiveness because high quality retrieval should be much more
effective than just returning the average document.
The heart of this technique is how to estimate the joint
distribution P(Qs,Dt). In the language modeling approach to IR, a
variety of models can be applied readily to estimate this
distribution. Although most of these models are based on the
bagof-words assumption, recent work on modeling term dependence
under the language modeling framework have shown consistent
and significant improvements in retrieval effectiveness over
bagof-words models. Inspired by the success of incorporating term
proximity features into language models, we decide to adopt a
good dependence model to estimate the probability P(Qs,Dt). The
model we chose for this paper is Metzler and Croft"s Markov
Random Field (MRF) model, which has already demonstrated
superiority over a number of collections and different retrieval
tasks [8,9].
According to the MRF model, log P(Qi, Dt) can be written as
)5()|(loglog),(log
)(
1 ∑∈
+−=
iQF
tti DPZDQP
ξ
ξ ξλ
where Z1 is a constant that ensures that P(Qi, Dt) sums up to 1.
F(Qi) consists of a set of features expanded from the original
query Qi . For example, assuming that query Qi is talented
student program, F(Qi) includes features like program and
talented student. We consider two kinds of features: single
term features T and proximity features P. Proximity features
include exact phrase (#1) and unordered window (#uwN) features
as described in [8]. Note that F(Qi) is the union of T(Qi) and
P(Qi). For more details on F(Qi) such as how to expand the
original query Qi to F(Qi), we refer the reader to [8] and [9].
P(ξ|Dt) denotes the probability that feature ξ will occur in Dt.
More details on P(ξ|Dt) will be provided later in this section. The
choice of λξ is somewhat different from that used in [8] since λξ
plays a dual role in our model. The first role, which is the same as
in [8], is to weight between single term and proximity features.
The other role, which is specific to our prediction task, is to
normalize the size of F(Qi).We found that the following weight
strategy for λξ satisfies the above two roles and generalizes well
on a variety of collections and query types.
)6(
)(,
|)(|
1
)(,
|)(|
⎪
⎪
⎩
⎪
⎪
⎨
⎧
∈
−
∈
=
i
i
T
i
i
T
QP
QP
QT
QT
ξ
λ
ξ
λ
λξ
where |T(Qi)| and |P(Qi)| denote the number of single term and
proximity features in F(Qi) respectively. The reason for choosing
the square root function in the denominator of λξ is to penalize a
feature set of large size appropriately, making WIG more
comparable across queries of various lengths. λT is a fixed
parameter and set to 0.8 according to [8] throughout this paper.
Similarly, log P(Qi,C) can be written as:
)7()|(loglog),(log
)(
2 ∑∈
+−=
iQF
i CPZCQP
ξ
ξ ξλ
When constant Z1 and Z2 are dropped, WIG computed in Eq.4 can
be rewritten as follows by plugging in Eq.5 and Eq.7 :
)8(
)|(
)|(
log
1
),,(
)( )(
∑ ∑∈ ∈
=
LTD QF
t
i
Kt i
CP
DP
K
LCQWIG
ξ
ξ
ξ
ξ
λ
One of the advantages of WIG over other techniques is that it can
handle well both content-based and NP queries. Based on the type
(or the predicted type) of Qi, the calculation of WIG in Eq. 8
differs in two aspects: (1) how to estimate P(ξ|Dt) and P(ξ|C), and
(2) how to choose K.
For content-based queries, P(ξ|C) is estimated by the relative
frequency of feature ξ in collection C as a whole. The estimation
of P(ξ|Dt) is the same as in [8]. Namely, we estimate P(ξ|Dt) by
the relative frequency of feature ξ in Dt linearly smoothed with
collection frequency P(ξ|C). K in Eq.8 is treated as a free
parameter. Note that K is the only free parameter in the
computation of WIG for content-based queries because all
parameters involved in P(ξ|Dt) are assumed to be fixed by taking
the suggested values in [8].
Regarding NP queries, we make use of document structure to
estimate P(ξ|Dt) and P(ξ|C) by the so-called mixture of language
models proposed in [10] and incorporated into the MRF model for
Named-Page finding retrieval in [9]. The basic idea is that a
document (collection) is divided into several fields such as the
title field, the main-body field and the heading field. P(ξ|Dt) and
P(ξ|C) are estimated by a linear combination of the language
models from each field. Due to space constraints, we refer the
reader to [9] for details. We adopt the exact same set of
parameters as used in [9] for estimation. With regard to K in Eq.8,
we set K to 1 because the Named-Page finding task heavily
focuses on the first ranked document. Consequently, there are no
free parameters in the computation of WIG for NP queries.
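Putting Eq. 8 into code, a minimal sketch (our own, not the authors' implementation) looks as follows; it assumes the feature log-probabilities log P(ξ|D) and log P(ξ|C) are supplied by the retrieval system, e.g. an MRF-based scorer, via the hypothetical callbacks shown.

```python
import math

def wig(logprob_doc, logprob_coll, term_feats, prox_feats, top_docs, K, lambda_T=0.8):
    """logprob_doc(f, d) -> log P(f|d); logprob_coll(f) -> log P(f|C) (assumed helpers).
    term_feats / prox_feats: single-term and proximity features of the query;
    top_docs: the retrieved documents in rank order."""
    weights = {}
    for f in term_feats:                       # Eq. 6, single-term features
        weights[f] = lambda_T / math.sqrt(len(term_feats))
    for f in prox_feats:                       # Eq. 6, proximity features
        weights[f] = (1.0 - lambda_T) / math.sqrt(len(prox_feats))
    score = 0.0
    for d in top_docs[:K]:                     # Eq. 8: average over the top-K documents
        for f, lam in weights.items():
            score += lam * (logprob_doc(f, d) - logprob_coll(f))
    return score / K
```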
3.2 Query Feedback
In this section, we introduce another technique called query
feedback (QF) for prediction. Suppose that a user issues query Q
to a retrieval system and a ranked list L of documents is returned.
We view the retrieval system as a noisy channel. Specifically, we
assume that the output of the channel is L and the input is Q.
After going through the channel, Q becomes corrupted and is
transformed to ranked list L.
By thinking about the retrieval process this way, the problem of
predicting retrieval effectiveness turns to the task of evaluating
the quality of the channel. In other words, prediction becomes
finding a way to measure the degree of corruption that arises
when Q is transformed to L. As directly computing the degree of
the corruption is difficult, we tackle this problem by
approximation. Our main idea is that we measure to what extent
information on Q can be recovered from L on the assumption that
only L is observed. Specifically, we design a decoder that can
accurately translate L back into new query Q" and the similarity S
between the original query Q and the new query Q" is adopted as
a performance predictor. This is a sketch of how the QF technique
predicts query performance. Before filling in more details, we
briefly discuss why this method would work.
There is a relation between the similarity S defined above and
retrieval performance. On the one hand, if the retrieval has
strayed from the original sense of the query Q, the new query Q"
extracted from ranked list L in response to Q would be very
different from the original query Q. On the other hand, a query
distilled from a ranked list containing many relevant documents is
likely to be similar to the original query. Further examples in
support of the relation will be provided later.
Next we detail how to build the decoder and how to measure the
similarity S.
In essence, the goal of the decoder is to compress ranked list L
into a few informative terms that should represent the content of
the top ranked documents in L. Our approach to this goal is to
represent ranked list L by a language model (distribution over
terms). Then terms are ranked by their contribution to the
language model"s KL (Kullback-Leibler) divergence from the
background collection model. Top ranked terms will be chosen to
form the new query Q". This approach is similar to that used in
Section 4.1 of [11].
Specifically, we take three steps to compress ranked list L into
query Q" without referring to the original query.
1. We adopt the ranked list language model [14], to estimate a
language model based on ranked list L. The model can be written
as:
P(w \mid L) = \sum_{D \in L} P(w \mid D) \, P(D \mid L)    (9)
where w is any term, D is a document. P(D|L) is estimated by a
linearly decreasing function of the rank of document D.
2. Each term in P(w|L) is ranked by the following KL-divergence
contribution:
P(w \mid L) \log \frac{P(w \mid L)}{P(w \mid C)}    (10)
where P(w|C) is the collection model estimated by the relative
frequency of term w in collection C as a whole.
3. The top N ranked terms by Eq.10 form a weighted query
Q"={(wi,ti)} i=1,N. where wi denotes the i-th ranked term and
weight ti is the KL-divergence contribution of wi in Eq. 10.
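The three steps can be sketched as follows (our own illustration; the doc_lm helper returning P(w|D) and the linearly decreasing rank prior are assumptions consistent with the description above).

```python
import math
from collections import defaultdict

def compress_ranked_list(ranked_docs, doc_lm, coll_lm, N=20):
    """ranked_docs: documents in rank order; doc_lm(d) -> {term: P(w|d)};
    coll_lm: {term: P(w|C)}. Returns the top-N (term, weight) pairs forming Q'."""
    M = len(ranked_docs)
    # Step 1: ranked-list language model with a linearly decreasing rank prior (sums to 1).
    rank_prior = [2.0 * (M - i) / (M * (M + 1)) for i in range(M)]
    p_w_L = defaultdict(float)
    for prior, d in zip(rank_prior, ranked_docs):
        for w, p in doc_lm(d).items():
            p_w_L[w] += prior * p
    # Step 2: rank terms by their KL-divergence contribution (Eq. 10).
    contrib = {w: p * math.log(p / coll_lm[w])
               for w, p in p_w_L.items() if w in coll_lm and p > 0}
    # Step 3: the top-N terms with their contributions form the weighted query Q'.
    return sorted(contrib.items(), key=lambda x: -x[1])[:N]
```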
Term             cruise   ship    vessel   sea     passenger
KL contribution  0.050    0.040   0.012    0.010   0.009
Table 1: top 5 terms compressed from the ranked list in
response to query "Cruise ship damage sea life"
Two representative examples, one for a poorly performing query
Cruise ship damage sea life (TREC topic 719; average
precision: 0.08) and the other for a high performing query
prostate cancer treatments( TREC topic 710; average precision:
0.49), are shown in Table 1 and 2 respectively. These examples
indicate how the similarity between the original and the new
query correlates with retrieval performance. The parameter N in
step 3 is set to 20 empirically and choosing a larger value of N is
unnecessary since the weights after the top 20 are usually too
small to make any difference.
Term             prostate   cancer   treatment   men     therapy
KL contribution  0.177      0.140    0.028       0.025   0.020
Table 2: top 5 terms compressed from the ranked list in
response to query "prostate cancer treatments"
To measure the similarity between original query Q and new
query Q", we first use Q" to do retrieval on the same collection. A
variant of the query likelihood model [15] is adopted for retrieval.
Namely, documents are ranked by:
P(Q' \mid D) = \prod_{(w_i, t_i) \in Q'} P(w_i \mid D)^{t_i}    (11)
where w_i is a term in Q' and t_i is its associated weight; D is a document.
Let L' denote the new ranked list returned from the above
retrieval. The similarity is measured by the overlap of documents
in L and L'. Specifically, we use the percentage of documents in the top
K documents of L that are also present in the top K documents of
L'. The cutoff K is treated as a free parameter.
We summarize here how the QF technique predicts performance
given a query Q and the associated ranked list L. We first obtain a
weighted query Q' compressed from L by the above three steps.
Then we use Q' to perform retrieval and the new ranked list is L'.
The overlap of documents in L and L' is used for prediction.
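End to end, the predictor reduces to an overlap computation; the sketch below is ours and assumes a hypothetical retrieve helper that runs the weighted query of Eq. 11 and returns a ranked document list.

```python
def query_feedback_score(original_list, weighted_query, retrieve, K=100):
    """original_list: ranked docs for the original query Q;
    weighted_query: (term, weight) pairs distilled from that list;
    retrieve(weighted_query) -> ranked docs for Q' (assumed helper)."""
    new_list = retrieve(weighted_query)
    top_old = set(original_list[:K])
    top_new = set(new_list[:K])
    return len(top_old & top_new) / K   # higher overlap predicts better performance
```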
3.3 First Rank Change (FRC)
In this section, we propose a method called the first rank change
(FRC) for performance prediction for NP queries. This method is
derived from the ranking robustness technique [1] that is mainly
designed for content-based queries. When directly applied to NP
queries, the robustness technique will be less effective because it
takes the top ranked documents as a whole into account while NP
queries usually have only one single relevant document. Instead,
our technique focuses on the first rank document while the main
idea of the robustness method remains. Specifically, the
pseudocode for computing FRC is shown in figure 1.
Input: (1) ranked list L={Di} where i=1,100. Di denotes the i-th
ranked document. (2) query Q
1 initialize: (1) set the number of trials J=100000 (2) counter c=0;
2 for i=1 to J
3 Perturb every document in L; let the outcome be a set F = {Di'},
where Di' denotes the perturbed version of Di.
4 Do retrieval with query Q on set F
5 c = c + 1 if and only if D1' is ranked first in step 4
6 end of for
7 return the ratio c/J
Figure 1: pseudo-code for computing FRC
FRC approximates the probability that the first ranked document
in the original list L will remain ranked first even after the
documents are perturbed. The higher the probability is, the more
confidence we have in the first ranked document. On the other
hand, in the extreme case of a random ranking, the probability
would be as low as 0.5. We expect that FRC has a positive
association with NP query performance. We adopt [1] to
implement the document perturbation step (step 4 in Fig.1) using
Poisson distributions. For more details, we refer the reader to [1].
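A compact version of Figure 1 is sketched below; it is our own illustration, where perturb stands for the Poisson-based document perturbation of [1] and rank for re-running the query against the perturbed set (both assumed helpers).

```python
def first_rank_change(ranked_docs, query, perturb, rank, trials=100000):
    """Fraction of trials in which the perturbed version of the originally
    top-ranked document is still ranked first."""
    hits = 0
    for _ in range(trials):
        perturbed = [perturb(d) for d in ranked_docs]   # step 3 of Figure 1
        new_order = rank(query, perturbed)              # step 4
        if new_order[0] is perturbed[0]:                # step 5
            hits += 1
    return hits / trials                                # step 7
```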
4. EVALUATION
We now present the results of predicting query performance by
our models. Three state-of-the-art techniques are adopted as our
baselines. We evaluate our techniques across a variety of Web
retrieval settings. As mentioned before, we consider two types of
queries, that is, content-based (CB) queries and Named-Page(NP)
finding queries.
First, suppose that the query types are known. We investigate the
correlation between the predicted retrieval performance and the
actual performance for both types of queries separately. Results
show that our methods yield considerable improvements over the
baselines.
We then consider a more challenging scenario where no prior
information on query types is available. Two sub-cases are
considered. In the first one, there exists only one type of query but
the actual type is unknown. We assume a mixture of the two
query types in the second case. We demonstrate that our models
achieve good accuracy under this demanding scenario, making
prediction practical in a real-world Web search environment.
4.1 Experimental Setup
Our evaluation focuses on the GOV2 collection which contains
about 25 million documents crawled from web sites in the .gov
domain during 2004 [3]. We create two kinds of data set for CB
queries and NP queries respectively. For the CB type, we use the
ad-hoc topics of the Terabyte Tracks of 2004, 2005 and 2006 and
name them TB04-adhoc, TB05-adhoc and TB06-adhoc
respectively. In addition, we also use the ad-hoc topics of the
2004 Robust Track (RT04) to test the adaptability of our
techniques to a non-Web environment. For NP queries, we use the
Named-Page finding topics of the Terabyte Tracks of 2005 and
2006 and we name them TB05-NP and TB06-NP respectively. All
queries used in our experiments are titles of TREC topics as we
center on web retrieval. Table 3 summarizes the above data sets.
Name Collection Topic Number Query Type
TB04-adhoc GOV2 701-750 CB
TB05-adhoc GOV2 751-800 CB
TB06-adhoc GOV2 801-850 CB
RT04 Disk 4+5 (minus CR) 301-450; 601-700 CB
TB05-NP GOV2 NP601-NP872 NP
TB06-NP GOV2 NP901-NP1081 NP
Table 3: Summary of test collections and topics
Retrieval performance of individual content-based and NP queries
is measured by the average precision and reciprocal rank of the
first correct answer respectively. We make use of the Markov
Random field model for both ad-hoc and Named-Page finding
retrieval. We adopt the same setting of retrieval parameters used
in [8,9]. The Indri search engine [12] is used for all of our
experiments. Though not reported here, we also tried the query
likelihood model for ad-hoc retrieval and found that the results
change little because of the very high correlation between the
query performances obtained by the two retrieval models (0.96
measured by Pearson"s coefficient).
4.2 Known Query Types
Suppose that query types are known. We treat each type of query
separately and measure the correlation with average precision (or
the reciprocal rank in the case of NP queries). We adopt the
Pearson"s correlation test which reflects the degree of linear
relationship between the predicted and the actual retrieval
performance.
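Since all results below are reported as Pearson's coefficients between predicted scores and per-query effectiveness (average precision, or reciprocal rank for NP queries), the evaluation itself reduces to a small helper such as the one below (our own sketch).

```python
import math

def pearson(xs, ys):
    # Pearson's correlation between paired predicted scores and effectiveness values.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# e.g. pearson(predicted_scores, average_precisions)
```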
4.2.1 Content-based Queries
Methods         Clarity   Robust   JSD     WIG     QF      WIG+QF
TB04+05 adhoc   0.333     0.317    0.362   0.574   0.480   0.637
TB06 adhoc      0.076     0.294    N/A     0.464   0.422   0.511
Table 4: Pearson's correlation coefficients for correlation with
average precision on the Terabyte Tracks (ad-hoc) for the clarity
score, robustness score, the JSD-based method (we directly
cite the score reported in [2]), WIG, query feedback (QF), and
a linear combination of WIG and QF. Bold cases mean the
results are statistically significant at the 0.01 level.
Table 4 shows the correlation with average precision on two data
sets: one is a combination of TB04-adhoc and TB05-adhoc(100
topics in total) and the other is TB06-adhoc (50 topics). The
reason that we put TB04-adhoc and TB05-adhoc together is to
make our results comparable to [2]. Our baselines are the clarity
score (clarity) [6], the robustness score (robust) [1] and the
JSD-based method (JSD) [2]. For the clarity and robustness score, we
have tried different parameter settings and report the highest
correlation coefficients we have found. We directly cite the result
of the JSD-based method reported in [2]. The table also shows the
results for the Weighted Information Gain (WIG) method and the
Query Feedback (QF) method for predicting content-based
queries. As we described in the previous section, both WIG and
QF have one free parameter to set, that is, the cutoff rank K. We
train the parameter on one dataset and test on the other. When
combining WIG and QF, a simple linear combination is used and
the combination weight is learned from the training data set.
From these results, we can see that our methods are considerably
more accurate compared to the baselines. We also observe that
further improvements are obtained from the combination of WIG
and QF, suggesting that they measure different properties of the
retrieval process that relate to performance.
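A minimal sketch of how such a combination could be trained is shown below. It assumes the per-query WIG and QF scores and the average precision values for the training queries are already available, and simply grid-searches the interpolation weight that maximizes Pearson's correlation; the variable names are illustrative and not taken from the paper's implementation.

```python
# Sketch: learn the linear-combination weight for WIG and QF on a training
# set by maximizing Pearson's correlation, then apply it to test data.
import numpy as np
from scipy.stats import pearsonr

def best_weight(wig_train, qf_train, ap_train, grid=np.linspace(0.0, 1.0, 101)):
    """Return the weight alpha maximizing corr(alpha*WIG + (1-alpha)*QF, AP).
    Scores are assumed to be on comparable scales (or pre-normalized)."""
    best_alpha, best_r = 0.0, -2.0
    for alpha in grid:
        combined = alpha * np.asarray(wig_train) + (1 - alpha) * np.asarray(qf_train)
        r, _ = pearsonr(combined, ap_train)
        if r > best_r:
            best_alpha, best_r = alpha, r
    return best_alpha

# Tiny hypothetical example data.
wig_tr, qf_tr, ap_tr = [0.4, 0.1, 0.8], [0.3, 0.2, 0.6], [0.35, 0.1, 0.6]
alpha = best_weight(wig_tr, qf_tr, ap_tr)
wig_te, qf_te = np.array([0.5, 0.2]), np.array([0.4, 0.1])
combined_test_scores = alpha * wig_te + (1 - alpha) * qf_te
```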
We observe that our methods generalize well to TB06-adhoc,
whereas the correlation of the clarity score with retrieval
performance on this data set is considerably worse. Further
investigation shows that the mean average precision of TB06-adhoc is
0.342, about 10% better than that of the first data set.
While the other three methods typically consider the top 100 or
less documents given a ranked list, the clarity method usually
needs the top 500 or more documents to adequately measure the
coherence of a ranked list. Higher mean average precision makes
ranked lists retrieved by different queries more similar in terms of
coherence at the level of top 500 documents. We believe that this
is the main reason for the low accuracy of the clarity score on the
second data set.
Though this paper focuses on a Web search environment, it is
desirable that our techniques work consistently well in other
situations. To this end, we examine the effectiveness of our
techniques on the 2004 Robust Track. For our methods, we
evenly divide all of the test queries into five groups and perform
five-fold cross validation. Each time we use one group for
training and the remaining four groups for testing. We make use
of all of the queries for our two baselines, that is, the clarity score
and the robustness score. The parameters for our baselines are the
same as those used in [1]. The results shown in Table 5
demonstrate that the prediction accuracy of our methods is on a
par with that of the two strong baselines.
Clarity   Robust   WIG     QF
0.464     0.539    0.468   0.464
Table 5: Comparison of Pearson's correlation coefficients on
the 2004 Robust Track for clarity score, robustness score,
WIG and query feedback (QF). Bold cases mean the results
are statistically significant at the 0.01 level.
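The five-fold procedure described above could be implemented along the following lines. This is a sketch only: it assumes per-query predictor scores for each candidate cutoff K and per-query average precision are precomputed, uses scikit-learn's KFold for the splits, mirrors the paper's setup of training on one group and testing on the remaining four, and averages the per-fold correlations as one plausible way to aggregate.

```python
# Sketch: five-fold cross-validation of a predictor on the Robust 2004 queries,
# training (choosing the cutoff K) on one group and testing on the other four.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import KFold

def cross_validated_correlation(scores_by_k, avg_precision, n_splits=5, seed=0):
    """scores_by_k: dict mapping cutoff K -> per-query predictor scores."""
    ap = np.asarray(avg_precision)
    fold_correlations = []
    for rest_idx, train_idx in KFold(n_splits, shuffle=True, random_state=seed).split(ap):
        # Choose the cutoff K that maximizes correlation on the single training group ...
        best_k = max(scores_by_k, key=lambda k: pearsonr(
            np.asarray(scores_by_k[k])[train_idx], ap[train_idx])[0])
        # ... and evaluate on the remaining four groups.
        r, _ = pearsonr(np.asarray(scores_by_k[best_k])[rest_idx], ap[rest_idx])
        fold_correlations.append(r)
    return float(np.mean(fold_correlations))
```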
Furthermore, we examine the prediction sensitivity of our
methods to the cutoff rank K. WIG is quite robust to K on the
Terabyte Tracks (2004-2006), while it prefers a small value of K,
such as 5, on the 2004 Robust Track. In other words,
a small value of K is a nearly-optimal choice for both kinds of
tracks. Considering the fact that all other parameters involved in
WIG are fixed and consequently the same for the two cases, this
means WIG can achieve nearly-optimal prediction accuracy in
two considerably different situations with exactly the same
parameter setting. Regarding QF, it prefers a larger value of K
such as 100 on the Terabyte Tracks and a smaller value of K such
as 25 on the 2004 Robust Track.
4.2.2 NP Queries
We adopt WIG and first rank change (FRC) for predicting
NP-query performance. We also try a linear combination of the two as
in the previous section. The combination weight is obtained from
the other data set. We use the correlation with the reciprocal ranks,
measured by Pearson's correlation test, to evaluate prediction
quality. The results are presented in Table 6. Again, our baselines
are the clarity score and the robustness score.
To make a fair comparison, we tune the clarity score in different
ways. We found that using the first ranked document to build the
query model yields the best prediction accuracy. We also
attempted to utilize document structure by using the mixture of
language models mentioned in section 3.1. Little improvement
was obtained. The correlation coefficients for the clarity score
reported in Table 6 are the best we have found. As we can see,
our methods considerably outperform the clarity score technique
on both of the runs. This confirms our intuition that the use of a
coherence-based measure like the clarity score is inappropriate for
NP queries.
Methods Clarity Robust. WIG FRC WIG+FRC
TB05-NP 0.150 -0.370 0.458 0.440 0.525
TB06-NP 0.112 -0.160 0.478 0.386 0.515
Table 6: Pearson's correlation coefficients for correlation with
reciprocal ranks on the Terabyte Tracks (NP) for clarity
score, robustness score, WIG, the first rank change (FRC)
and a linear combination of WIG and FRC. Bold cases mean
the results are statistically significant at the 0.01 level.
Regarding the robustness score, we also tune the parameters and
report the best we have found. We observe an interesting and
surprising negative correlation with reciprocal ranks. We explain
this finding briefly. A high robustness score means that a number
of top ranked documents in the original ranked list are still highly
ranked after perturbing the documents. The existence of such
documents is a good sign of high performance for content-based
queries as these queries usually contain a number of relevant
documents [1]. However, with regard to NP queries, one
fundamental difference is that there is only one relevant document
for each query; a set of documents that remains stably at the top of
the ranking is therefore likely to consist of non-relevant pages
competing with the single correct answer, which can confuse the
ranking function and lead to low retrieval performance. Although
the negative correlation with retrieval performance exists, the
strength of the correlation is weaker and less consistent compared
to our methods, as shown in Table 6.
Based on the above analysis, we can see that current prediction
techniques like clarity score and robustness score that are mainly
designed for content-based queries face significant challenges and
are inadequate to deal with NP queries. Our two techniques
proposed for NP queries consistently demonstrate good prediction
accuracy, displaying initial success in solving the problem of
predicting performance for NP queries. Another point we want to
stress is that the WIG method works well for both types of
queries, a desirable property that most prediction techniques lack.
4.3 Unknown Query Types
In this section, we run two kinds of experiments without access to
query type labels. First, we assume that only one type of query
exists but the type is unknown. Second, we experiment on a
mixture of content-based and NP queries. The following two
subsections will report results for the two conditions respectively.
4.3.1 Only One Type exists
We assume that all queries are of the same type, that is, they are
either NP queries or content-based queries. We choose WIG to
deal with this case because it shows good prediction accuracy for
both types of queries in the previous section. We consider two
cases: (1) CB: all 150 title queries from the ad-hoc task of the
Terabyte Tracks 2004-2006; (2) NP: all 433 NP queries from the
Named-Page finding task of the Terabyte Tracks 2005 and 2006.
We take a simple strategy by labeling all of the queries in each
case as the same type (either NP or CB) regardless of their actual
type. The computation of WIG will be based on the labeled query
type instead of the actual type. There are four possibilities with
respect to the relation between the actual type and the labeled
type. The correlation with retrieval performance under the four
possibilities is presented in Table 7. For example, the value 0.445
at the intersection between the second row and the third column
shows Pearson's correlation coefficient for correlation with
average precision when the content-based queries are incorrectly
labeled as the NP type.
Based on these results, we recommend treating all queries as the
NP type when only one query type exists and accurate query
classification is not feasible, considering the risk that a large loss
of accuracy will occur if NP queries are incorrectly labeled as
content-based queries. These results also demonstrate the strong
adaptability of WIG to different query types.
              CB (labeled)   NP (labeled)
CB (actual)   0.536          0.445
NP (actual)   0.174          0.467
Table 7: Comparison of Pearson's correlation coefficients for
correlation with retrieval performance under four possibilities
on the Terabyte Tracks (NP). Bold cases mean the results are
statistically significant at the 0.01 level.
4.3.2 A mixture of content-based and NP queries
A mixture of the two types of queries is a more realistic situation
that a Web search engine will meet. We evaluate prediction
accuracy by how accurately poorly-performing queries can be
identified by the prediction method assuming that actual query
types are unknown (but we can predict query types). This is a
challenging task because both the predicted and actual
performance for one type of query can be incomparable to that for
the other type.
Next we discuss how to implement our evaluation. We create a
query pool which consists of all of the 150 ad-hoc title queries
from Terabyte Track 2004-2006 and all of the 433 NP queries
from Terabyte Track 2005&2006. We divide the queries in the
pool into classes: good (better than 50% of the queries of the
same type in terms of retrieval performance) and bad
(otherwise). According to these standards, a NP query with the
reciprocal rank above 0.2 or a content-based query with the
average precision above 0.315 will be considered as good.
Then, each time we randomly select one query Q from the pool
with probability p that Q is contented-based. The remaining
queries are used as training data. We first decide the type of query
Q according to a query classifier. Namely, the query classifier
tells us whether query Q is NP or content-based. Based on the
predicted query type and the score computed for query Q by a
prediction technique, a binary decision is made about whether
query Q is good or bad by comparing to the score threshold of the
predicted query type obtained from the training data. Prediction
accuracy is measured by the accuracy of the binary decision. In
our implementation, we repeatedly take a test query from the
query pool and prediction accuracy is computed as the
percentage of correct decisions, that is, a good(bad) query is
predicted to be good (bad). It is obvious that random guessing will
lead to 50% accuracy.
Let us take the WIG method for example to illustrate the process.
Two WIG thresholds (one for NP queries and the other for
content-based queries) are trained by maximizing the prediction
accuracy on the training data. When a test query is labeled as the
NP (CB) type by the query type classifier, it will be predicted to
be good if and only if the WIG score for this query is above the
NP (CB) threshold. Similar procedures will be taken for other
prediction techniques.
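The sketch below illustrates this thresholded decision procedure; the query type classifier and the per-type thresholds are assumed to have been obtained from the training data as described above, and all names are hypothetical.

```python
# Sketch: decide whether a test query is "good" or "bad" in the mixed-query
# setting, using per-type score thresholds learned on training data.

def train_threshold(scores, labels):
    """Pick the score threshold that maximizes accuracy on (score, good/bad) pairs,
    assuming higher scores indicate better-performing queries."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted(set(scores)):
        acc = sum((s >= t) == is_good for s, is_good in zip(scores, labels)) / len(scores)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def predict_is_good(query, wig_score, classify_type, thresholds):
    """classify_type: maps a query to 'NP' or 'CB'; thresholds: per-type WIG cutoffs."""
    qtype = classify_type(query)            # predicted, not actual, query type
    return wig_score >= thresholds[qtype]   # True = predicted "good"
```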
Now we briefly introduce the automatic query type classifier used
in this paper. We find that the robustness score, though originally
proposed for performance prediction, is a good indicator of query
types. We find that on average content-based queries have a
much higher robustness score than NP queries. For example,
Figure 2 shows the distributions of robustness scores for NP and
content-based queries. According to this finding, the robustness
score classifier will attach a NP (CB) label to the query if the
robustness score for the query is below (above) a threshold
trained from the training data.
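A minimal sketch of such a robustness-score classifier is given below (hypothetical names and synthetic placeholder scores; the kernel density estimates mirror the kind of distributions shown in Figure 2 but are not the actual data).

```python
# Sketch: a robustness-score-based query type classifier, plus the kernel
# density estimates used to visualize the two score distributions (cf. Figure 2).
import numpy as np
from scipy.stats import gaussian_kde

def train_type_threshold(robust_np, robust_cb):
    """Choose the threshold separating NP from CB robustness scores by accuracy
    (NP queries tend to have lower robustness scores than CB queries)."""
    scores = np.concatenate([robust_np, robust_cb])
    labels = np.array(["NP"] * len(robust_np) + ["CB"] * len(robust_cb))
    best_t, best_acc = None, -1.0
    for t in np.unique(scores):
        predicted = np.where(scores < t, "NP", "CB")  # NP below, CB above
        acc = np.mean(predicted == labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def classify(robustness_score, threshold):
    return "NP" if robustness_score < threshold else "CB"

# Synthetic placeholder scores, for illustration only.
np_scores = np.random.normal(-0.3, 0.2, 252)
cb_scores = np.random.normal(0.3, 0.2, 150)
np_density, cb_density = gaussian_kde(np_scores), gaussian_kde(cb_scores)
threshold = train_type_threshold(np_scores, cb_scores)
```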
[Figure 2: kernel density plots (x-axis: robustness score, y-axis: probability density); legend: NP, Content-based.]
Figure 2: Distribution of robustness scores for NP and CB
queries. The NP queries are the 252 NP topics from the 2005
Terabyte Track. The content-based queries are the 150 ad-hoc
title queries from the Terabyte Tracks 2004-2006. The probability
distributions are estimated by the kernel density estimation
method.
Strategies   Robust   WIG-1   WIG-2   WIG-3   Optimal
p=0.6        0.565    0.624   0.665   0.684   0.701
p=0.4        0.567    0.633   0.654   0.673   0.696
Table 8: Comparison of prediction accuracy for five strategies
in the mixed-query situation. Two ways to sample a query
from the pool: (1) the sampled query is content-based with
probability p=0.6 (that is, the query is NP with probability
0.4); (2) set the probability p=0.4.
We consider five strategies in our experiments. In the first
strategy (denoted by robust), we use the robustness score for
query performance prediction with the help of a perfect query
classifier that always correctly maps a query into one of the two
categories (that is, NP or CB). This strategy represents the level of
prediction accuracy that current prediction techniques can achieve
in the ideal condition that query types are known. In the next
three strategies, the WIG method is adopted for
performance prediction. The difference among the three is the
query classifier used in each strategy: (1) the
classifier always classifies a query into the NP type; (2) the
classifier is the robustness score classifier mentioned above; (3) the
classifier is a perfect one. These three strategies are denoted by
WIG-1, WIG-2 and WIG-3 respectively. The reason we are
interested in WIG-1 is based on the results from section 4.3.1. In
the last strategy (denoted by Optimal) which serves as an upper
bound on how well we can do so far, we fully make use of our
prediction techniques for each query type assuming a perfect
query classifier is available. Specifically, we linearly combine
WIG and QF for content-based queries and WIG and FRC for NP
queries.
The results for the five strategies are shown in Table 8. For each
strategy, we try two ways to sample a query from the pool: (1) the
sampled query is CB with probability p=0.6 (the query is NP
with probability 0.4); (2) set the probability p=0.4. From Table 8
we can see that in terms of prediction accuracy WIG-2 (the WIG
method with the automatic query classifier) is not only better than
the first two strategies, but is also close to WIG-3, where a perfect
classifier is assumed. Some further improvements over WIG-3 are
observed when combined with other prediction techniques. The
merit of WIG-2 is that it provides a practical solution to
automatically identifying poorly performing queries in a Web
search environment with mixed query types, which poses
considerable obstacles to traditional prediction techniques.
5. CONCLUSIONS AND FUTURE WORK
To our knowledge, our paper is the first to thoroughly explore
prediction of query performance in web search environments. We
demonstrated that our models resulted in higher prediction
accuracy than previously published techniques not specially
devised for web search scenarios. In this paper, we focus on two
types of queries in web search: content-based and Named-Page
(NP) finding queries, corresponding to the ad-hoc retrieval task
and the Named-Page finding task respectively. For both types of
web queries, our prediction models were shown to be
substantially more accurate than the current state-of-the-art
techniques. Furthermore, we considered a more realistic case that
no prior information on query types is available. We
demonstrated that the WIG method is particularly suitable for this
situation. Considering the adaptability of WIG to a range of
collections and query types, one of our future plans is to apply
this method to predict user preference of search results on realistic
data collected from a commercial search engine.
Other than accuracy, another major issue that prediction
techniques have to deal with in a Web environment is efficiency.
Fortunately, since the WIG score is computed just over the terms
and the phrases that appear in the query, this calculation can be
made very efficient with the support of an index. On the other hand,
the computation of QF and FRC is relatively less efficient since
QF needs to retrieve the whole collection twice and FRC needs to
repeatedly rank the perturbed documents. How to improve the
efficiency of QF and FRC is our future work.
In addition, the prediction techniques proposed in this paper have
the potential of improving retrieval performance by combining
with other IR techniques. For example, our techniques can be
incorporated into popular query modification techniques such as
query expansion and query relaxation. Guided by performance
prediction, we can make a better decision on when to or how to
modify queries to enhance retrieval effectiveness. We would like
to carry out research in this direction in the future.
6. ACKNOWLEDGMENTS
This work was supported in part by the Center for Intelligent
Information Retrieval, in part by the Defense Advanced Research
Projects Agency (DARPA) under contract number
HR0011-06-C0023, and in part by an award from Google. Any opinions,
findings and conclusions or recommendations expressed in this
material are those of the author and do not necessarily reflect
those of the sponsor. In addition, we thank Donald Metzler for his
valuable comments on this work.
7. REFERENCES
[1] Y. Zhou, W. B. Croft. Ranking Robustness: A Novel Framework to
Predict Query Performance. In Proceedings of CIKM 2006.
[2] D. Carmel, E. Yom-Tov, A. Darlow, D. Pelleg. What Makes a Query
Difficult? In Proceedings of SIGIR 2006.
[3] C. L. A. Clarke, F. Scholer, I. Soboroff. The TREC 2005 Terabyte
Track. In the Online Proceedings of TREC 2005.
[4] B. He and I. Ounis. Inferring query performance using pre-retrieval
predictors. In Proceedings of SPIRE 2004.
[5] S. Tomlinson. Robust, Web and Terabyte Retrieval with Hummingbird
SearchServer at TREC 2004. In the Online Proceedings of TREC 2004.
[6] S. Cronen-Townsend, Y. Zhou, W. B. Croft. Predicting Query
Performance. In Proceedings of SIGIR 2002.
[7] V. Vinay, I. J. Cox, N. Milic-Frayling, K. Wood. On Ranking the
Effectiveness of Searches. In Proceedings of SIGIR 2006.
[8] D. Metzler, W. B. Croft. A Markov Random Field Model for Term
Dependencies. In Proceedings of SIGIR 2005.
[9] D. Metzler, T. Strohman, Y. Zhou, W. B. Croft. Indri at TREC 2005:
Terabyte Track. In the Online Proceedings of TREC 2005.
[10] P. Ogilvie and J. Callan. Combining document representations for
known-item search. In Proceedings of SIGIR 2003.
[11] A. Berger, J. Lafferty. Information retrieval as statistical translation.
In Proceedings of SIGIR 1999.
[12] Indri search engine: http://www.lemurproject.org/indri/
[13] I. J. Taneja. On Generalized Information Measures and Their
Applications. Advances in Electronics and Electron Physics, Academic
Press (USA), 76, 1989, 327-413.
[14] S. Cronen-Townsend, Y. Zhou and W. B. Croft. A Framework for
Selective Query Expansion. In Proceedings of CIKM 2004.
[15] F. Song, W. B. Croft. A general language model for information
retrieval. In Proceedings of SIGIR 1999.
[16] Personal email contact with Vishwa Vinay and our own experiments.
[17] E. Yom-Tov, S. Fine, D. Carmel, A. Darlow. Learning to Estimate
Query Difficulty Including Applications to Missing Content Detection
and Distributed Information Retrieval, in
Proceedings of SIGIR 2005 | homogenous test collection;web search environment;kl-divergence;content-based and named-page finding;jensen-shannon divergence;weighted information gain;trec document collection;query performance prediction;content-based query;robustness score probabilitydensity classifier;web search;gov2 collection;wig;ranking robustness technique;query classification;mixed-query situation;named-page finding task |
train_H-46 | Broad Expertise Retrieval in Sparse Data Environments | Expertise retrieval has been largely unexplored on data other than the W3C collection. At the same time, many intranets of universities and other knowledge-intensive organisations offer examples of relatively small but clean multilingual expertise data, covering broad ranges of expertise areas. We first present two main expertise retrieval tasks, along with a set of baseline approaches based on generative language modeling, aimed at finding expertise relations between topics and people. For our experimental evaluation, we introduce (and release) a new test set based on a crawl of a university site. Using this test set, we conduct two series of experiments. The first is aimed at determining the effectiveness of baseline expertise retrieval methods applied to the new test set. The second is aimed at assessing refined models that exploit characteristic features of the new test set, such as the organizational structure of the university, and the hierarchical structure of the topics in the test set. Expertise retrieval models are shown to be robust with respect to environments smaller than the W3C collection, and current techniques appear to be generalizable to other settings. | 1. INTRODUCTION
An organization's intranet provides a means for exchanging
information between employees and for facilitating employee
collaborations. To efficiently and effectively achieve this, it is necessary
to provide search facilities that enable employees not only to access
documents, but also to identify expert colleagues.
At the TREC Enterprise Track [22] the need to study and
understand expertise retrieval has been recognized through the
introduction of Expert Finding tasks. The goal of expert finding is to
identify a list of people who are knowledgeable about a given topic.
This task is usually addressed by uncovering associations between
people and topics [10]; commonly, a co-occurrence of the name
of a person with topics in the same context is assumed to be
evidence of expertise. An alternative task, which uses the same idea
of people-topic associations, is expert profiling, where the task is to
return a list of topics that a person is knowledgeable about [3].
The launch of the Expert Finding task at TREC has generated a
lot of interest in expertise retrieval, with rapid progress being made
in terms of modeling, algorithms, and evaluation aspects. However,
nearly all of the expert finding or profiling work performed has
been validated experimentally using the W3C collection [24] from
the Enterprise Track. While this collection is currently the only
publicly available test collection for expertise retrieval tasks, it only
represents one type of intranet. With only one test collection it is
not possible to generalize conclusions to other realistic settings.
In this paper we focus on expertise retrieval in a realistic setting
that differs from the W3C setting-one in which relatively small
amounts of clean, multilingual data are available, that cover a broad
range of expertise areas, as can be found on the intranets of
universities and other knowledge-intensive organizations. Typically, this
setting features several additional types of structure: topical
structure (e.g., topic hierarchies as employed by the organization),
organizational structure (faculty, department, ...), as well as multiple
types of documents (research and course descriptions, publications,
and academic homepages). This setting is quite different from the
W3C setting in ways that might impact upon the performance of
expertise retrieval tasks.
We focus on a number of research questions in this paper: Does
the relatively small amount of data available on an intranet affect
the quality of the topic-person associations that lie at the heart of
expertise retrieval algorithms? How do state-of-the-art algorithms
developed on the W3C data set perform in the alternative scenario
of the type described above? More generally, do the lessons from
the Expert Finding task at TREC carry over to this setting? How
does the inclusion or exclusion of different documents affect
expertise retrieval tasks? In addition, how can the topical and
organizational structure be used for retrieval purposes?
To answer our research questions, we first present a set of
baseline approaches, based on generative language modeling, aimed at
finding associations between topics and people. This allows us to
formulate the expert finding and expert profiling tasks in a uniform
way, and has the added benefit of allowing us to understand the
relations between the two tasks. For our experimental evaluation, we
introduce a new data set (the UvT Expert Collection) which is
representative of the type of intranet that we described above. Our
collection is based on publicly available data, crawled from the
website of Tilburg University (UvT). This type of data is particularly
interesting, since (1) it is clean, heterogeneous, structured, and
focused, but comprises a limited number of documents; (2) contains
information on the organizational hierarchy; (3) it is bilingual
(English and Dutch); and (4) the list of expertise areas of an individual
are provided by the employees themselves. Using the UvT Expert
collection, we conduct two sets of experiments. The first is aimed
at determining the effectiveness of baseline expertise finding and
profiling methods in this new setting. A second group of
experiments is aimed at extensions of the baseline methods that exploit
characteristic features of the UvT Expert Collection; specifically,
we propose and evaluate refined expert finding and profiling
methods that incorporate topicality and organizational structure.
Apart from the research questions and data set that we contribute,
our main contributions are as follows. The baseline models
developed for expertise finding perform well on the new data set. While
on the W3C setting the expert finding task appears to be more
difficult than profiling, for the UvT data the opposite is the case. We
find that profiling on the UvT data set is considerably more
difficult than on the W3C set, which we believe is due to the large
(but realistic) number of topical areas that we used for profiling:
about 1,500 for the UvT set, versus 50 in the W3C case.
Taking the similarity between topics into account can significantly
improve retrieval performance. The best performing similarity
measures are content-based, therefore they can be applied on the W3C
(and other) settings as well. Finally, we demonstrate that the
organizational structure can be exploited in the form of a context model,
improving MAP scores for certain models by up to 70%.
The remainder of this paper is organized as follows. In the next
section we review related work. Then, in Section 3 we provide
detailed descriptions of the expertise retrieval tasks that we address
in this paper: expert finding and expert profiling. In Section 4 we
present our baseline models, of which the performance is then
assessed in Section 6 using the UvT data set that we introduce in
Section 5. Advanced models exploiting specific features of our data are
presented in Section 7 and evaluated in Section 8. We formulate our
conclusions in Section 9.
2. RELATED WORK
Initial approaches to expertise finding often employed databases
containing information on the skills and knowledge of each
individual in the organization [11]. Most of these tools (usually called
yellow pages or people-finding systems) rely on people to self-assess
their skills against a predefined set of keywords. For updating
profiles in these systems in an automatic fashion there is a need for
intelligent technologies [5]. More recent approaches use specific
document sets (such as email [6] or software [18]) to find expertise.
In contrast with focusing on particular document types, there is also
an increased interest in the development of systems that index and
mine published intranet documents as sources of evidence for
expertise. One such published approach is the P@noptic system [9],
which builds a representation of each person by concatenating all
documents associated with that person-this is similar to Model 1
of Balog et al. [4], who formalize and compare two methods. Balog
et al.'s Model 1 directly models the knowledge of an expert from
associated documents, while their Model 2 first locates documents
on the topic and then finds the associated experts. In the reported
experiments the second method performs significantly better when
there are sufficiently many associated documents per candidate.
Most systems that took part in the 2005 and 2006 editions of the
Expert Finding task at TREC implemented (variations on) one of
these two models; see [10, 20]. Macdonald and Ounis [16] propose
a different approach for ranking candidate expertise with respect to
a topic based on data fusion techniques, without using
collectionspecific heuristics; they find that applying field-based weighting
models improves the ranking of candidates. Petkova and Croft [19]
propose yet another approach, based on a combination of the above
Model 1 and 2, explicitly modeling topics.
Turning to other expert retrieval tasks that can also be addressed
using topic-people associations, Balog and de Rijke [3] addressed
the task of determining topical expert profiles. While their methods
proved to be efficient on the W3C corpus, they require an amount
of data that may not be available in the typical knowledge-intensive
organization. Balog and de Rijke [2] study the related task of
finding experts that are similar to a small set of experts given as input.
As an aside, creating a textual summary of a person shows
some similarities to biography finding, which has received a
considerable amount of attention recently; see e.g., [13].
We use generative language modeling to find associations
between topics and people. In our modeling of expert finding and
profiling we collect evidence for expertise from multiple sources, in
a heterogeneous collection, and integrate it with the co-occurrence
of candidates" names and query terms-the language modeling
setting allows us to do this in a transparent manner. Our modeling
proceeds in two steps. In the first step, we consider three baseline
models, two taken from [4] (the Models 1 and 2 mentioned above),
and one a refined version of a model introduced in [3] (which we
refer to as Model 3 below); this third model is also similar to the
model described by Petkova and Croft [19]. The models we
consider in our second round of experiments are mixture models
similar to contextual language models [1] and to the expanded
documents of Tao et al. [21]; however, the features that we use for
defining our expansions (including topical structure and
organizational structure) have not been used in this way before.
3. TASKS
In the expertise retrieval scenario that we envisage, users seeking
expertise within an organization have access to an interface that
combines a search box (where they can search for experts or topics)
with navigational structures (of experts and of topics) that allows
them to click their way to an expert page (providing the profile of a
person) or a topic page (providing a list of experts on the topic).
To feed the above interface, we face two expertise retrieval
tasks, expert finding and expert profiling, that we first define and
then formalize using generative language models. In order to model
either task, the probability of the query topic being associated to a
candidate expert plays a key role in the final estimates for searching
and profiling. By using language models, both the candidates and
the query are characterized by distributions of terms in the
vocabulary (used in the documents made available by the organization
whose expertise retrieval needs we are addressing).
3.1 Expert finding
Expert finding involves the task of finding the right person with
the appropriate skills and knowledge: Who are the experts on topic
X?. E.g., an employee wants to ascertain who worked on a
particular project to find out why particular decisions were made without
having to trawl through documentation (if there is any). Or they
may need a trained specialist for consultancy on a specific
problem.
Within an organization there are usually many possible
candidates who could be experts for a given topic. We can state this
problem as follows:
What is the probability of a candidate ca being an
expert given the query topic q?
That is, we determine p(ca|q), and rank candidates ca according to
this probability. The candidates with the highest probability given
the query are deemed the most likely experts for that topic. The
challenge is how to estimate this probability accurately. Since the
query is likely to consist of only a few terms to describe the
expertise required, we should be able to obtain a more accurate estimate
by invoking Bayes' Theorem, and estimating:
p(ca|q) = p(q|ca) p(ca) / p(q),    (1)
where p(ca) is the probability of a candidate and p(q) is the
probability of a query. Since p(q) is a constant, it can be ignored for
ranking purposes. Thus, the probability of a candidate ca being an
expert given the query q is proportional to the probability of a query
given the candidate p(q|ca), weighted by the a priori belief p(ca)
that candidate ca is an expert.
p(ca|q) ∝ p(q|ca)p(ca) (2)
In this paper our main focus is on estimating the probability of
a query given the candidate p(q|ca), because this probability
captures the extent to which the candidate knows about the query topic.
Whereas the candidate priors are generally assumed to be
uniform (and thus do not influence the ranking), it has been demonstrated
that a sensible choice of priors may improve the performance [20].
3.2 Expert profiling
While the task of expert searching was concerned with
finding experts given a particular topic, the task of expert profiling
seeks to answer a related question: What topics does a candidate
know about? Essentially, this turns the question of expert finding
around. The profiling of an individual candidate involves the
identification of areas of skills and knowledge that they have expertise
about and an evaluation of the level of proficiency in each of these
areas. This is the candidate's topical profile.
Generally, topical profiles within organizations consist of
tabular structures which explicitly catalogue the skills and knowledge
of each individual in the organization. However, such practice is
limited by the resources available for defining, creating,
maintaining, and updating these profiles over time. By focusing on
automatic methods which draw upon the available evidence within the
document repositories of an organization, our aim is to reduce the
human effort associated with the maintenance of topical profiles.1
A topical profile of a candidate, then, is defined as a vector where
each element i of the vector corresponds to the candidate ca's
expertise on a given topic ki (i.e., s(ca, ki)). Each topic ki defines a
particular knowledge area or skill that the organization uses to
define the candidate's topical profile. Thus, it is assumed that a list of
topics, {k1, . . . , kn}, where n is the number of pre-defined topics,
is given:
profile(ca) = ⟨s(ca, k1), s(ca, k2), . . . , s(ca, kn)⟩.    (3)
Footnote 1: Context and evidence are needed to help users of expertise
finding systems to decide whom to contact when seeking expertise in a
particular area. Examples of such context are: Who does she work
with? What are her contact details? Is she well-connected, just
in case she is not able to help us herself? What is her role in the
organization? Who is her superior? Collaborators, and affiliations,
etc. are all part of the candidate's social profile, and can serve as
a background against which the system's recommendations should
be interpreted. In this paper we only address the problem of
determining topical profiles, and leave social profiling to further work.
We state the problem of quantifying the competence of a person on
a certain knowledge area as follows:
What is the probability of a knowledge area (ki) being
part of the candidate's (expertise) profile?
where s(ca, ki) is defined by p(ki|ca). Our task, then, is to
estimate p(ki|ca), which is equivalent to the problem of obtaining
p(q|ca), where the topic ki is represented as a query topic q, i.e., a
sequence of keywords representing the expertise required.
Both the expert finding and profiling tasks rely on the accurate
estimation of p(q|ca). The only difference derives from the prior
probability that a person is an expert (p(ca)), which can be
incorporated into the expert finding task. This prior does not apply to
the profiling task since the candidate (individual) is fixed.
4. BASELINE MODELS
In this section we describe our baseline models for estimating
p(q|ca), i.e., associations between topics and people. Both expert
finding and expert profiling boil down to this estimation. We
employ three models for calculating this probability.
4.1 From topics to candidates
Using Candidate Models: Model 1 Model 1 [4] defines the
probability of a query given a candidate (p(q|ca)) using standard
language modeling techniques, based on a multinomial unigram
language model. For each candidate ca, a candidate language model
θca is inferred such that the probability of a term given θca is
nonzero for all terms, i.e., p(t|θca) > 0. From the candidate model the
query is generated with the following probability:
p(q|θca) = ∏_{t∈q} p(t|θca)^{n(t,q)},
where each term t in the query q is sampled identically and
independently, and n(t, q) is the number of times t occurs in q. The
candidate language model is inferred as follows: (1) an empirical
model p(t|ca) is computed; (2) it is smoothed with background
probabilities. Using the associations between a candidate and a
document, the probability p(t|ca) can be approximated by:
p(t|ca) = Σ_d p(t|d) p(d|ca),
where p(d|ca) is the probability that candidate ca generates a
supporting document d, and p(t|d) is the probability of a term t
occurring in the document d. We use the maximum-likelihood estimate
of a term, that is, the normalised frequency of the term t in
document d. The strength of the association between document d and
candidate ca, expressed by p(d|ca), reflects the degree to which the
candidate's expertise is described by this document. The
estimation of this probability is presented later, in Section 4.2.
The candidate model is then constructed as a linear interpolation
of p(t|ca) and the background model p(t) to ensure there are no
zero probabilities, which results in the final estimation:
p(q|θca) = ∏_{t∈q} { (1 − λ) Σ_d p(t|d) p(d|ca) + λ p(t) }^{n(t,q)}.    (4)
Model 1 amasses all the term information from all the documents
associated with the candidate, and uses this to represent that
candidate. This model is used to predict how likely a candidate is to
produce a query q. This can be intuitively interpreted as the
probability of this candidate talking about the query topic, where
we assume that this is indicative of their expertise.
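As an illustration of Model 1, the sketch below scores a query against a candidate's smoothed language model (Eq. 4). It uses simple in-memory term-count dictionaries, weighs a candidate's documents uniformly for normalization (rather than the binary association of Section 4.2), and omits any indexing machinery; all names are illustrative.

```python
# Sketch of Model 1 (Eq. 4): score a query against a candidate's smoothed
# language model built from the candidate's associated documents.
import math
from collections import Counter

def p_term_doc(term, doc_counts):
    total = sum(doc_counts.values())
    return doc_counts[term] / total if total else 0.0

def model1_log_score(query_terms, candidate_docs, background, lam=0.5):
    """candidate_docs: list of Counter objects, one per associated document
    (weighted uniformly here); background: dict term -> collection probability p(t)."""
    p_d_ca = 1.0 / len(candidate_docs)
    log_score = 0.0
    for t in query_terms:
        p_t_ca = sum(p_term_doc(t, d) * p_d_ca for d in candidate_docs)
        p_t = (1 - lam) * p_t_ca + lam * background.get(t, 1e-9)
        log_score += math.log(p_t)
    return log_score

# Tiny hypothetical example.
docs = [Counter("language models for expertise retrieval".split()),
        Counter("expert finding with generative models".split())]
bg = {"expert": 0.01, "finding": 0.005, "models": 0.02}
print(model1_log_score(["expert", "finding"], docs, bg))
```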
Using Document Models: Model 2 Model 2 [4] takes a
different approach. Here, the process is broken into two parts. Given
a candidate ca, (1) a document that is associated with a candidate
is selected with probability p(d|ca), and (2) from this document a
query q is generated with probability p(q|d). Then the sum over all
documents is taken to obtain p(q|ca), such that:
p(q|ca) = Σ_d p(q|d) p(d|ca).    (5)
The probability of a query given a document is estimated by
inferring a document language model θd for each document d in a
similar manner as the candidate model was inferred:
p(t|θd) = (1 − λ)p(t|d) + λp(t), (6)
where p(t|d) is the probability of the term in the document. The
probability of a query given the document model is:
p(q|θd) = ∏_{t∈q} p(t|θd)^{n(t,q)}.
The final estimate of p(q|ca) is obtained by substituting p(q|θd) for
p(q|d) in Eq. 5 (see [4] for full details). Conceptually, Model 2
differs from Model 1 because the candidate is not directly modeled.
Instead, the document acts like a hidden variable in the process
which separates the query from the candidate. This process is akin
to how a user may search for candidates with a standard search
engine: initially by finding the documents which are relevant, and
then seeing who is associated with that document. By examining a
number of documents the user can obtain an idea of which
candidates are more likely to discuss the topic q.
Using Topic Models: Model 3 We introduce a third model, Model 3.
Instead of attempting to model the query generation process via
candidate or document models, we represent the query as a topic
language model and directly estimate the probability of the
candidate p(ca|q). This approach is similar to the model presented
in [3, 19]. As with the previous models, a language model is
inferred, but this time for the query. We adapt the work of Lavrenko
and Croft [14] to estimate a topic model from the query.
The procedure is as follows. Given a collection of documents
and a query topic q, it is assumed that there exists an unknown
topic model θk that assigns probabilities p(t|θk) to the term
occurrences in the topic documents. Both the query and the documents
are samples from θk (as opposed to the previous approaches, where
a query is assumed to be sampled from a specific document or
candidate model). The main task is to estimate p(t|θk), the probability
of a term given the topic model. Since the query q is very sparse,
and as there are no examples of documents on the topic, this
distribution needs to be approximated. Lavrenko and Croft [14] suggest
a reasonable way of obtaining such an approximation, by assuming
that p(t|θk) can be approximated by the probability of term t given
the query q. We can then estimate p(t|q) using the joint probability
of observing the term t together with the query terms, q1, . . . , qm,
and dividing by the joint probability of the query terms:
p(t|θk) ≈ p(t|q) = p(t, q1, . . . , qm) / p(q1, . . . , qm) = p(t, q1, . . . , qm) / Σ_{t′∈T} p(t′, q1, . . . , qm),
where p(q1, . . . , qm) = Σ_{t′∈T} p(t′, q1, . . . , qm), and T is the
entire vocabulary of terms. In order to estimate the joint probability
p(t, q1, . . . , qm), we follow [14, 15] and assume t and q1, . . . , qm
are mutually independent, once we pick a source distribution from
the set of underlying source distributions U. We choose U to be
a set of document models; to construct this set, the query q
is issued against the collection, and the top n documents returned are
assumed to be relevant to the topic, and thus treated as samples
from the topic model. (Note that candidate models could be used
instead.) With the document models forming U, the joint
probability of term and query becomes:
p(t, q1, . . . , qm) = Σ_{d∈U} p(d) { p(t|θd) ∏_{i=1}^{m} p(qi|θd) }.    (7)
Here, p(d) denotes the prior distribution over the set U, which
reflects the relevance of the document to the topic. We assume that
p(d) is uniform across U. In order to rank candidates according
to the topic model defined, we use the Kullback-Leibler divergence
metric (KL, [8]) to measure the difference between the candidate
models and the topic model:
KL(θk||θca) = Σ_t p(t|θk) log [ p(t|θk) / p(t|θca) ].    (8)
Candidates with a smaller divergence from the topic model are
considered to be more likely experts on that topic. The candidate model
θca is defined in Eq. 4. By using KL divergence instead of the
probability of a candidate given the topic model p(ca|θk), we avoid
normalization problems.
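The control flow of Model 3 can be sketched as follows: a topic model is estimated from the top-ranked documents (Eq. 7, with uniform p(d)), and candidates are ranked by KL divergence from it (Eq. 8). This is only meant to show the steps, not an efficient implementation, and the data structures (dictionaries of smoothed probabilities) are assumed.

```python
# Sketch of Model 3: estimate a topic model from top-ranked documents and
# rank candidate models by their KL divergence from it (smaller = better).
import math

def topic_model(query_terms, top_docs, vocab, smooth=1e-9):
    """top_docs: list of smoothed document models, each a dict term -> p(t|theta_d).
    Returns p(t|theta_k) over vocab (Eq. 7 with a uniform prior p(d))."""
    joint = {}
    for t in vocab:
        joint[t] = sum(d.get(t, smooth) *
                       math.prod(d.get(q, smooth) for q in query_terms)
                       for d in top_docs)
    z = sum(joint.values()) or 1.0
    return {t: v / z for t, v in joint.items()}

def kl_divergence(topic, candidate, floor=1e-12):
    return sum(p * math.log(p / max(candidate.get(t, floor), floor))
               for t, p in topic.items() if p > 0)

def rank_candidates(topic, candidate_models):
    """candidate_models: dict candidate -> dict term -> p(t|theta_ca)."""
    return sorted(candidate_models,
                  key=lambda ca: kl_divergence(topic, candidate_models[ca]))
```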
4.2 Document-candidate associations
For our models we need to be able to estimate the probability
p(d|ca), which expresses the extent to which a document d
characterizes the candidate ca. In [4], two methods are presented for
estimating this probability, based on the number of person names
recognized in a document. However, in our (intranet) setting it is
reasonable to assume that authors of documents can unambiguously be
identified (e.g., as the author of an article, the teacher assigned to a
course, the owner of a web page, etc.) Hence, we set p(d|ca) to be
1 if candidate ca is author of document d, otherwise the probability
is 0. In Section 6 we describe how authorship can be determined
on different types of documents within the collection.
5. THE UVT EXPERT COLLECTION
The UvT Expert collection used in the experiments in this paper
fits the scenario outlined in Section 3. The collection is based on
the Webwijs (Webwise) system developed at Tilburg University
(UvT) in the Netherlands. Webwijs (http://www.uvt.nl/
webwijs/) is a publicly accessible database of UvT employees
who are involved in research or teaching; currently, Webwijs
contains information about 1168 experts, each of whom has a page with
contact information and, if made available by the expert, a research
description and publications list. In addition, each expert can
select expertise areas from a list of 1491 topics and is encouraged to
suggest new topics that need to be approved by the Webwijs editor.
Each topic has a separate page that shows all experts associated
with that topic and, if available, a list of related topics.
Webwijs is available in Dutch and English, and this bilinguality
has been preserved in the collection. Every Dutch Webwijs page
has an English translation. Not all Dutch topics have an English
translation, but the reverse is true: the 981 English topics all have a
Dutch equivalent.
About 42% of the experts teach courses at Tilburg University;
these courses were also crawled and included in the profile. In
addition, about 27% of the experts link to their academic homepage
from their Webwijs page. These home pages were crawled and
added to the collection. (This means that if experts put the full-text
versions of their publications on their academic homepage, these
were also available for indexing.) We also obtained 1880 full-text
versions of publications from the UvT institutional repository and
Dutch English
no. of experts 1168 1168
no. of experts with ≥ 1 topic 743 727
no. of topics 1491 981
no. of expert-topic pairs 4318 3251
avg. no. of topics/expert 5.8 5.9
max. no. of topics/expert (no. of experts) 60 (1) 35 (1)
min. no. of topics/expert (no. of experts) 1 (74) 1 (106)
avg. no. of experts/topic 2.9 3.3
max. no. of experts/topic (no. of topics) 30 (1) 30 (1)
min. no. of experts/topic (no. of topics) 1 (615) 1 (346)
no. of experts with HP 318 318
no. of experts with CD 318 318
avg. no. of CDs per teaching expert 3.5 3.5
no. of experts with RD 329 313
no. of experts with PUB 734 734
avg. no. of PUBs per expert 27.0 27.0
avg. no. of PUB citations per expert 25.2 25.2
avg. no. of full-text PUBs per expert 1.8 1.8
Table 2: Descriptive statistics of the Dutch and English versions
of the UvT Expert collection.
converted them to plain text. We ran the TextCat [23] language
identifier to classify the language of the home pages and the
full-text publications. We restricted ourselves to pages where the
classifier was confident about the language used on the page.
This resulted in four document types: research descriptions (RD),
course descriptions (CD), publications (PUB; full-text and
citation-only versions), and academic homepages (HP). Everything was
bundled into the UvT Expert collection which is available at http:
//ilk.uvt.nl/uvt-expert-collection/.
The UvT Expert collection was extracted from a different
organizational setting than the W3C collection and differs from it in
a number of ways. The UvT setting is one with relatively small
amounts of multilingual data. Document-author associations are
clear and the data is structured and clean. The collection covers a
broad range of expertise areas, as one can typically find on intranets
of universities and other knowledge-intensive institutes.
Additionally, our university setting features several types of structure
(topical and organizational), as well as multiple document types.
Another important difference between the two data sets is that the
expertise areas in the UvT Expert collection are self-selected instead
of being based on group membership or assignments by others.
Size is another dimension along which the W3C and UvT Expert
collections differ: the latter is the smaller of the two. Also realistic
are the large differences in the amount of information available for
each expert. Utilizing Webwijs is voluntary; 425 Dutch experts
did not select any topics at all. This leaves us with 743 Dutch and
727 English usable expert profiles. Table 2 provides descriptive
statistics for the UvT Expert collection.
Universities tend to have a hierarchical structure that goes from
the faculty level, to departments, research groups, down to the
individual researchers. In the UvT Expert collection we have
information about the affiliations of researchers with faculties and
institutes, providing us with a two-level organizational hierarchy.
Tilburg University has 22 organizational units at the faculty level
(including the university office and several research institutes) and
71 departments, which amounts to 3.2 departments per faculty.
As to the topical hierarchy used by Webwijs, 131 of the 1491
topics are top nodes in the hierarchy. This hierarchy has an average
topic chain length of 2.65 and a maximum length of 7 topics.
6. EVALUATION
Below, we evaluate Section 4's models for expert finding and
profiling on the UvT Expert collection. We detail our research
questions and experimental setup, and then present our results.
6.1 Research Questions
We address the following research questions. Both expert finding
and profiling rely on the estimations of p(q|ca). The question is
how the models compare on the different tasks, and in the setting of
the UvT Expert collection. In [4], Model 2 outperformed Model 1
on the W3C collection. How do they compare on our data set? And
how does Model 3 compare to Model 1? What about performance
differences between the two languages in our test collection?
6.2 Experimental Setup
The output of our models was evaluated against the self-assigned
topic labels, which were treated as relevance judgements. Results
were evaluated separately for English and Dutch. For English we
only used topics for which the Dutch translation was available; for
Dutch all topics were considered. The results were averaged for
the queries in the intersection of relevance judgements and results;
missing queries do not contribute a value of 0 to the scores.
We use standard information retrieval measures, such as Mean
Average Precision (MAP) and Mean Reciprocal Rank (MRR). We
also report the percentage of topics (%q) and candidates (%ca)
covered, for the expert finding and profiling tasks, respectively.
6.3 Results
Table 1 shows the performance of Model 1, 2, and 3 on the
expert finding and profiling tasks. The rows of the table correspond
to the various document types (RD, CD, PUB, and HP) and to their
combinations. RD+CD+PUB+HP is equivalent to the full
collection and will be referred to as the BASELINE of our experiments.
Looking at Table 1 we see that Model 2 performs the best across
the board. However, when the data is clean and very focused (RD),
Model 3 outperforms it in a number of cases. Model 1 has the
best coverage of candidates (%ca) and topics (%q). The various
document types differ in their characteristics and how they improve
the finding and profiling tasks. Expert profiling benefits much from
the clean data present in the RD and CD document types, while the
publications contribute the most to the expert finding task. Adding
the homepages does not prove to be particularly useful.
When we compare the results across languages, we find that the
coverage of English topics (%q) is higher than of the Dutch ones
for expert finding. Apart from that, the scores fall in the same range
for both languages. For the profiling task the coverage of the
candidates (%ca) is very similar for both languages. However, the
performance is substantially better for the English topics.
While it is hard to compare scores across collections, we
conclude with a brief comparison of the absolute scores in Table 1 to
those reported in [3, 4] on the W3C test set (2005 edition). For
expert finding the MAP scores for Model 2 reported here are about
50% higher than the corresponding figures in [4], while our MRR
scores are slightly below those in [4]. For expert profiling, the
differences are far more dramatic: the MAP scores for Model 2
reported here are around 50% below the scores in [3], while the (best)
MRR scores are about the same as those in [3]. The cause for the
latter differences seems to reside in the number of knowledge areas
considered here-approx. 30 times more than in the W3C setting.
7. ADVANCED MODELS
Now that we have developed and assessed basic language
modeling techniques for expertise retrieval, we turn to refined models
that exploit special features of our test collection.
7.1 Exploiting knowledge area similarity
One way to improve the scoring of a query given a candidate is
to consider what other requests the candidate would satisfy and use
them as further evidence to support the original query, proportional
Expert finding Expert profiling
Document types Model 1 Model 2 Model 3 Model 1 Model 2 Model 3
%q MAP MRR %q MAP MRR %q MAP MRR %ca MAP MRR %ca MAP MRR %ca MAP MRR
English
RD 97.8 0.126 0.269 83.5 0.144 0.311 83.3 0.129 0.271 100 0.089 0.189 39.3 0.232 0.465 41.1 0.166 0.337
CD 97.8 0.118 0.227 91.7 0.123 0.248 91.7 0.118 0.226 32.8 0.188 0.381 32.4 0.195 0.385 32.7 0.203 0.370
PUB 97.8 0.200 0.330 98.0 0.216 0.372 98.0 0.145 0.257 78.9 0.167 0.364 74.5 0.212 0.442 78.9 0.135 0.299
HP 97.8 0.081 0.186 97.4 0.071 0.168 97.2 0.062 0.149 31.2 0.150 0.299 28.8 0.185 0.335 30.1 0.136 0.287
RD+CD 97.8 0.188 0.352 92.9 0.193 0.360 92.9 0.150 0.273 100 0.145 0.286 61.3 0.251 0.477 63.2 0.217 0.416
RD+CD+PUB 97.8 0.235 0.373 98.1 0.277 0.439 98.1 0.178 0.305 100 0.196 0.380 87.2 0.280 0.533 89.5 0.170 0.344
RD+CD+PUB+HP 97.8 0.237 0.372 98.6 0.280 0.441 98.5 0.166 0.293 100 0.199 0.387 88.7 0.281 0.525 90.9 0.169 0.329
Dutch
RD 61.3 0.094 0.229 38.4 0.137 0.336 38.3 0.127 0.295 38.0 0.127 0.386 34.1 0.138 0.420 38.0 0.105 0.327
CD 61.3 0.107 0.212 49.7 0.128 0.256 49.7 0.136 0.261 32.5 0.151 0.389 31.8 0.158 0.396 32.5 0.170 0.380
PUB 61.3 0.193 0.319 59.5 0.218 0.368 59.4 0.173 0.291 78.8 0.126 0.364 76.0 0.150 0.424 78.8 0.103 0.294
HP 61.3 0.063 0.169 56.6 0.064 0.175 56.4 0.062 0.163 29.8 0.108 0.308 27.8 0.125 0.338 29.8 0.098 0.255
RD+CD 61.3 0.159 0.314 51.9 0.184 0.360 51.9 0.169 0.324 60.5 0.151 0.410 57.2 0.166 0.431 60.4 0.159 0.384
RD+CD+PUB 61.3 0.244 0.398 61.5 0.260 0.424 61.4 0.210 0.350 90.3 0.165 0.445 88.2 0.189 0.479 90.3 0.126 0.339
RD+CD+PUB+HP 61.3 0.249 0.401 62.6 0.265 0.436 62.6 0.195 0.344 91.9 0.164 0.426 90.1 0.195 0.488 91.9 0.125 0.328
Table 1: Performance of the models on the expert finding and profiling tasks, using different document types and their combinations.
%q is the number of topics covered (applies to the expert finding task), %ca is the number of candidates covered (applies to the
expert profiling task). The top and bottom blocks correspond to English and Dutch respectively. The best scores are in boldface.
to how related the other requests are to the original query. This can
be modeled by interpolating between p(q|ca) and the further
supporting evidence from all similar requests q′, as follows:
p′(q|ca) = λ p(q|ca) + (1 − λ) Σ_{q′} p(q|q′) p(q′|ca),    (9)
where p(q|q′) represents the similarity between the two topics q
and q′. To be able to work with similarity methods that are not
necessarily probabilities, we set p(q|q′) = w(q, q′)/γ, where γ is
a normalizing constant, such that γ = Σ_{q′′} w(q′′, q′). We
consider four methods for calculating the similarity score between two
topics. Three approaches are strictly content-based, and establish
similarity by examining co-occurrence patterns of topics within the
collection, while the last approach exploits the hierarchical
structure of topical areas that may be present within an organization (see
[7] for further examples of integrating word relationships into
language models).
The Kullback-Leibler (KL) divergence metric defined in Eq. 8
provides a measure of how different or similar two probability
distributions are. A topic model is inferred for q and q′ using the
method presented in Section 4.1 to describe the query across the
entire vocabulary. Since a lower KL score means the queries are
more similar, we let w(q, q′) = max_{q′′} KL(θq||θq′′) − KL(θq||θq′).
Pointwise Mutual Information (PMI, [17]) is a measure of
association used in information theory to determine the extent of
independence between variables. The dependence between two queries
is reflected by the SI(q, q′) score, where scores greater than zero
indicate that it is likely that there is a dependence, which we take
to mean that the queries are likely to be similar:
SI(q, q′) = log [ p(q, q′) / (p(q) p(q′)) ]    (10)
We estimate the probability of a topic p(q) using the number of
documents relevant to query q within the collection. The joint
probability p(q, q′) is estimated similarly, by using the
concatenation of q and q′ as a query. To obtain p(q|q′), we then set
w(q, q′) = SI(q, q′) when SI(q, q′) > 0 and w(q, q′) = 0 otherwise,
because we are only interested in including queries that are similar.
The log-likelihood statistic provides another measure of dependence, which is more reliable than the pointwise mutual
information measure [17]. Let k1 be the number of co-occurrences of q and q', k2 the number of occurrences of q not co-occurring with q',
n1 the total number of occurrences of q', and n2 the total number of topic tokens minus the number of occurrences of q'. Then, let
p1 = k1/n1, p2 = k2/n2, and p = (k1 + k2)/(n1 + n2), and define

LL(q, q') = 2 (\ell(p_1, k_1, n_1) + \ell(p_2, k_2, n_2) - \ell(p, k_1, n_1) - \ell(p, k_2, n_2)),

where \ell(p, k, n) = k \log p + (n - k) \log(1 - p). A higher score indicates that the queries are more likely to be similar, thus we set
w(q, q') = LL(q, q').
Finally, we also estimate the similarity of two topics based on their distance within the topic hierarchy. The topic hierarchy is
viewed as a directed graph, and for all topic pairs the shortest path SP(q, q') is calculated. We set the similarity score to be the
reciprocal of the shortest path length: w(q, q') = 1/SP(q, q').
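To make the smoothing of Eq. 9 concrete, the sketch below combines it with the log-likelihood similarity, which turns out to be the strongest of the four methods in the experiments that follow. It is a minimal illustration rather than the authors' implementation: the containers p_q_ca, weights, and topics are hypothetical inputs, and the contingency counts passed to ll_weight are assumed to be computed from the collection elsewhere.

```python
import math

def ll_weight(k1, k2, n1, n2):
    """Log-likelihood association LL(q, q') computed from the contingency
    counts defined in the text (k1 co-occurrences; k2, n1, n2 as above)."""
    def ell(p, k, n):
        # Guard against log(0) for degenerate proportions.
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return k * math.log(p) + (n - k) * math.log(1.0 - p)

    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)
    return 2.0 * (ell(p1, k1, n1) + ell(p2, k2, n2)
                  - ell(p, k1, n1) - ell(p, k2, n2))

def smoothed_score(q, ca, p_q_ca, weights, topics, lam=0.5):
    """Eq. 9: p'(q|ca) = lam * p(q|ca) + (1 - lam) * sum_{q'} p(q|q') p(q'|ca),
    with p(q|q') = w(q, q') / sum_{q''} w(q'', q').
    p_q_ca[(topic, candidate)] holds the base model scores; weights[(q, q')]
    holds the unnormalised topic similarities w(q, q')."""
    support = 0.0
    for qp in topics:
        w = weights.get((q, qp), 0.0)
        if w <= 0.0:
            continue
        gamma = sum(weights.get((qpp, qp), 0.0) for qpp in topics) or 1.0
        support += (w / gamma) * p_q_ca.get((qp, ca), 0.0)
    return lam * p_q_ca.get((q, ca), 0.0) + (1.0 - lam) * support
```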
7.2 Contextual information
Given the hierarchy of an organization, the units to which a person belongs are regarded as a context, so as to compensate for data
sparseness. We model this as follows:

p'(q|ca) = (1 - \sum_{ou \in OU(ca)} \lambda_{ou}) \cdot p(q|ca) + \sum_{ou \in OU(ca)} \lambda_{ou} \cdot p(q|ou),

where OU(ca) is the set of organizational units of which candidate ca is a member, and p(q|ou) expresses the strength of the
association between query q and the unit ou. The latter probability can be estimated using any of the three basic models, by simply
replacing ca with ou in the corresponding equations. An organizational unit is associated with all the documents that its members
have authored. That is, p(d|ou) = \max_{ca \in ou} p(d|ca).
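A compact sketch of this mixture is given below. It is illustrative only: the per-unit weight lam_ou and the cap that keeps some probability mass on the candidate model are choices made for the example, not values taken from the text.

```python
def contextual_score(q, ca, p_q_ca, p_q_ou, org_units, lam_ou=0.3):
    """Blend the candidate model with its organizational-unit context:
    p'(q|ca) = (1 - sum_ou lam_ou) * p(q|ca) + sum_ou lam_ou * p(q|ou).
    org_units[ca] lists the units of candidate ca; each unit receives the
    same illustrative weight, capped so the candidate model keeps some mass."""
    units = org_units.get(ca, [])
    total_ou = min(lam_ou * len(units), 0.9) if units else 0.0
    per_unit = total_ou / len(units) if units else 0.0
    score = (1.0 - total_ou) * p_q_ca.get((q, ca), 0.0)
    score += sum(per_unit * p_q_ou.get((q, ou), 0.0) for ou in units)
    return score
```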
7.3 A simple multilingual model
For knowledge institutes in Europe, academic or otherwise, a multilingual (or at least bilingual) setting is typical. The following
model builds on a kind of independence assumption: there is no spill-over of expertise/profiles across language boundaries. While a
simplification, this is a sensible first approach. That is:

p'(q|ca) = \sum_{l \in L} \lambda_l \cdot p(q_l|ca),

where L is the set of languages used in the collection, q_l is the translation of the query q to language l, and \lambda_l is
a language-specific smoothing parameter, such that \sum_{l \in L} \lambda_l = 1.
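The language combination itself is a one-liner; the sketch below assumes the query translations and the per-language model scores are supplied, and defaults to the equal weights (0.5 for English and Dutch) used in the experiments below.

```python
def multilingual_score(q_by_lang, ca, p_q_ca, lang_weights=None):
    """Combine monolingual scores: p'(q|ca) = sum_l lambda_l * p(q_l|ca).
    q_by_lang maps a language code (e.g. "UK", "NL") to the query's
    translation in that language; equal weights are used by default."""
    langs = list(q_by_lang)
    if not langs:
        return 0.0
    if lang_weights is None:
        lang_weights = {l: 1.0 / len(langs) for l in langs}
    return sum(lang_weights[l] * p_q_ca.get((q_by_lang[l], ca), 0.0)
               for l in langs)
```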
8. ADVANCED MODELS: EVALUATION
In this section we present an experimental evaluation of our
advanced models.
Expert finding Expert profiling
Language Model 1 Model 2 Model 3 Model 1 Model 2 Model 3
%q MAP MRR %q MAP MRR %q MAP MRR %ca MAP MRR %ca MAP MRR %ca MAP MRR
English only 97.8 0.237 0.372 98.6 0.280 0.441 98.5 0.166 0.293 100 0.199 0.387 88.7 0.281 0.525 90.9 0.169 0.329
Dutch only 61.3 0.249 0.401 62.6 0.265 0.436 62.6 0.195 0.344 91.9 0.164 0.426 90.1 0.195 0.488 91.9 0.125 0.328
Combination 99.4 0.297 0.444 99.7 0.324 0.491 99.7 0.223 0.388 100 0.241 0.445 92.1 0.313 0.564 93.2 0.224 0.411
Table 3: Performance of the combination of languages on the expert finding and profiling tasks (on candidates). Best scores for each
model are in italic, absolute best scores for the expert finding and profiling tasks are in boldface.
Method Model 1 Model 2 Model 3
MAP MRR MAP MRR MAP MRR
English
BASELINE 0.296 0.454 0.339 0.509 0.221 0.333
KLDIV 0.291 0.453 0.327 0.503 0.219 0.330
PMI 0.291 0.453 0.337 0.509 0.219 0.331
LL 0.319 0.490 0.360 0.524 0.233 0.368
HDIST 0.299 0.465 0.346 0.537 0.219 0.332
Dutch
BASELINE 0.240 0.350 0.271 0.403 0.227 0.389
KLDIV 0.239 0.347 0.253 0.386 0.224 0.385
PMI 0.239 0.350 0.260 0.392 0.227 0.389
LL 0.255 0.372 0.281 0.425 0.231 0.389
HDIST 0.253 0.365 0.271 0.407 0.236 0.402
Method Model 1 Model 2 Model 3
MAP MRR MAP MRR MAP MRR
English
BASELINE 0.485 0.546 0.499 0.548 0.381 0.416
KLDIV 0.510 0.564 0.513 0.558 0.381 0.416
PMI 0.486 0.546 0.495 0.542 0.407 0.451
LL 0.558 0.589 0.586 0.617 0.408 0.453
HDIST 0.507 0.567 0.512 0.563 0.386 0.420
Dutch
BASELINE 0.263 0.313 0.294 0.358 0.262 0.315
KLDIV 0.284 0.336 0.271 0.321 0.261 0.314
PMI 0.265 0.317 0.265 0.316 0.273 0.330
LL 0.312 0.351 0.330 0.377 0.284 0.331
HDIST 0.280 0.327 0.288 0.341 0.266 0.321
Table 4: Performance on the expert finding (top) and profiling
(bottom) tasks, using knowledge area similarities. Runs were
evaluated on the main topics set. Best scores are in boldface.
8.1 Research Questions
Our questions follow the refinements presented in the preceding
section: Does exploiting the knowledge area similarity improve
effectiveness? Which of the various methods for capturing word
relationships is most effective? Furthermore, is our way of bringing
in contextual information useful? For which tasks? And finally, is
our simple way of combining the monolingual scores sufficient for
obtaining significant improvements?
8.2 Experimental setup
Given that the self-assessments are also sparse in our collection, in order to be able to measure differences between the various
models we selected a subset of topics and evaluated (some of the) runs only on this subset. This set is referred to as the main topics,
and consists of topics that are located at the top level of the topical hierarchy. (A main topic has subtopics, but is not a subtopic
of any other topic.) This main set consists of 132 Dutch and 119 English topics. The
relevance judgements were restricted to the main topic set, but were
not expanded with subtopics.
8.3 Exploiting knowledge area similarity
Table 4 presents the results. The four methods used for estimating knowledge-area similarity are KL divergence (KLDIV),
Pointwise Mutual Information (PMI), log-likelihood (LL), and distance within the topic hierarchy (HDIST).
Lang. Topics Model 1 Model 2 Model 3
MAP MRR MAP MRR MAP MRR
Expert finding
UK ALL 0.423 0.545 0.654 0.799 0.494 0.629
UK MAIN 0.500 0.621 0.704 0.834 0.587 0.699
NL ALL 0.439 0.560 0.672 0.826 0.480 0.630
NL MAIN 0.440 0.584 0.645 0.816 0.515 0.655
Expert profiling
UK ALL 0.240 0.640 0.306 0.778 0.223 0.616
UK MAIN 0.523 0.677 0.519 0.648 0.461 0.587
NL ALL 0.203 0.716 0.254 0.770 0.183 0.627
NL MAIN 0.332 0.576 0.380 0.624 0.332 0.549
Table 5: Evaluating the context models on organizational units.
We managed to improve upon the
baseline in all cases, but the improvement is more noticeable for
the profiling task. For both tasks, the LL method performed best.
The content-based approaches performed consistently better than
HDIST.
8.4 Contextual information
A two-level hierarchy of organizational units (faculties and
institutes) is available in the UvT Expert collection. The unit a person
belongs to is used as a context for that person. First, we evaluated
the models of the organizational units, using all topics (ALL) and
only the main topics (MAIN). An organizational unit is considered
to be relevant for a given topic (or vice versa) if at least one member
of the unit selected the given topic as an expertise area.
Table 5 reports on the results. As far as expert finding goes, given
a topic, the corresponding organizational unit can be identified with
high precision. However, the expert profiling task shows a different
picture: the scores are low, and the task seems hard. The
explanation may be that general concepts (i.e., our main topics) may belong
to several organizational units.
Second, we performed another evaluation, where we combined
the contextual models with the candidate models (to score
candidates again). Table 6 reports on the results. We find a positive
impact of the context models only for expert finding. Notably,
for expert finding (and Model 1), it improves over 50% (for
English) and over 70% (for Dutch) on MAP. The poor performance
on expert profiling may be due to the fact that context models alone
did not perform very well on the profiling task to begin with.
8.5 Multilingual models
In this subsection we evaluate the method for combining
results across multiple languages that we described in Section 7.3.
In our setting the set of languages consists of English and Dutch:
L = {UK, NL}. The weights on these languages were set to be
identical (λUK = λNL = 0.5). We performed experiments with
various λ settings, but did not observe significant differences in
performance.
Table 3 reports on the multilingual results, where performance is
evaluated on the full topic set.
Lang. Method Model 1 Model 2 Model 3
MAP MRR MAP MRR MAP MRR
Expert finding
UK BL 0.296 0.454 0.339 0.509 0.221 0.333
UK CT 0.330 0.491 0.342 0.500 0.228 0.342
NL BL 0.240 0.350 0.271 0.403 0.227 0.389
NL CT 0.251 0.382 0.267 0.410 0.246 0.404
Expert profiling
UK BL 0.485 0.546 0.499 0.548 0.381 0.416
UK CT 0.562 0.620 0.508 0.558 0.440 0.486
NL BL 0.263 0.313 0.294 0.358 0.262 0.315
NL CT 0.330 0.384 0.317 0.387 0.294 0.345
Table 6: Performance of the context models (CT) compared to
the baseline (BL). Best scores are in boldface.
All three models significantly improved over all measures for both tasks. The coverage of topics
and candidates for the expert finding and profiling tasks,
respectively, is close to 100% in all cases. The relative improvement
of the precision scores ranges from 10% to 80%. These scores
demonstrate that despite its simplicity, our method for combining
results over multiple languages achieves substantial improvements
over the baseline.
9. CONCLUSIONS
In this paper we focused on expertise retrieval (expert finding
and profiling) in a new setting of a typical knowledge-intensive
organization in which the available data is of high quality,
multilingual, and covering a broad range of expertise areas. Typically, the
amount of available data in such an organization (e.g., a university,
a research institute, or a research lab) is limited when compared to
the W3C collection that has mostly been used for the experimental
evaluation of expertise retrieval so far.
To examine expertise retrieval in this setting, we introduced (and
released) the UvT Expert collection as a representative case of such
knowledge intensive organizations. The new collection reflects the
typical properties of knowledge-intensive institutes noted above and
also includes several features that are potentially useful for
expertise retrieval, such as topical and organizational structure.
We evaluated how current state-of-the-art models for expert
finding and profiling performed in this new setting and then refined
these models in order to try and exploit the different
characteristics within the data environment (language, topicality, and
organizational structure). We found that current models of expertise
retrieval generalize well to this new environment; in addition we
found that refining the models to account for the differences results
in significant improvements, thus making up for problems caused
by data sparseness issues.
Future work includes setting up manual assessments of
automatically generated profiles by the employees themselves, especially in
cases where the employees have not provided a profile themselves.
10. ACKNOWLEDGMENTS
Krisztian Balog was supported by the Netherlands Organisation
for Scientific Research (NWO) under project number 220-80-001.
Maarten de Rijke was also supported by NWO under project
numbers 017.001.190, 220-80-001, 264-70-050, 354-20-005,
600.065.120, 612-13-001, 612.000.106, 612.066.302, 612.069.006,
640.001.501, 640.002.501, and by the E.U. IST programme of the 6th
FP for RTD under project MultiMATCH contract IST-033104.
The work of Toine Bogers and Antal van den Bosch was funded
by the IOP-MMI-program of SenterNovem / The Dutch Ministry
of Economic Affairs, as part of the À Propos project.
11. REFERENCES
[1] L. Azzopardi. Incorporating Context in the Language Modeling
Framework for ad-hoc Information Retrieval. PhD thesis, University
of Paisley, 2005.
[2] K. Balog and M. de Rijke. Finding similar experts. In This volume,
2007.
[3] K. Balog and M. de Rijke. Determining expert profiles (with an
application to expert finding). In IJCAI "07: Proc. 20th Intern. Joint Conf.
on Artificial Intelligence, pages 2657-2662, 2007.
[4] K. Balog, L. Azzopardi, and M. de Rijke. Formal models for expert
finding in enterprise corpora. In SIGIR "06: Proc. 29th annual
intern. ACM SIGIR conf. on Research and development in information
retrieval, pages 43-50, 2006.
[5] I. Becerra-Fernandez. The role of artificial intelligence technologies
in the implementation of people-finder knowledge management
systems. In AAAI Workshop on Bringing Knowledge to Business
Processes, March 2000.
[6] C. S. Campbell, P. P. Maglio, A. Cozzi, and B. Dom. Expertise
identification using email communications. In CIKM "03: Proc. twelfth
intern. conf. on Information and knowledge management, pages
528-531, 2003.
[7] G. Cao, J.-Y. Nie, and J. Bai. Integrating word relationships into
language models. In SIGIR "05: Proc. 28th annual intern. ACM SIGIR
conf. on Research and development in information retrieval, pages
298-305, 2005.
[8] T. M. Cover and J. A. Thomas. Elements of Information Theory.
Wiley-Interscience, 1991.
[9] N. Craswell, D. Hawking, A. M. Vercoustre, and P. Wilkins. P@noptic
expert: Searching for experts not just for documents. In Ausweb, 2001.
[10] N. Craswell, A. de Vries, and I. Soboroff. Overview of the
TREC2005 Enterprise Track. In The Fourteenth Text REtrieval Conf. Proc.
(TREC 2005), 2006.
[11] T. H. Davenport and L. Prusak. Working Knowledge: How
Organizations Manage What They Know. Harvard Business School Press,
Boston, MA, 1998.
[12] T. Dunning. Accurate methods for the statistics of surprise and
coincidence. Computational Linguistics, 19(1):61-74, 1993.
[13] E. Filatova and J. Prager. Tell me what you do and I'll tell you what
you are: Learning occupation-related activities for biographies. In
HLT/EMNLP, 2005.
[14] V. Lavrenko and W. B. Croft. Relevance based language models. In
SIGIR "01: Proc. 24th annual intern. ACM SIGIR conf. on Research
and development in information retrieval, pages 120-127, 2001.
[15] V. Lavrenko, M. Choquette, and W. B. Croft. Cross-lingual relevance
models. In SIGIR "02: Proc. 25th annual intern. ACM SIGIR conf. on
Research and development in information retrieval, pages 175-182,
2002.
[16] C. Macdonald and I. Ounis. Voting for candidates: adapting data
fusion techniques for an expert search task. In CIKM "06: Proc. 15th
ACM intern. conf. on Information and knowledge management, pages
387-396, 2006.
[17] C. Manning and H. Schütze. Foundations of Statistical Natural
Language Processing. The MIT Press, 1999.
[18] A. Mockus and J. D. Herbsleb. Expertise browser: a quantitative
approach to identifying expertise. In ICSE "02: Proc. 24th Intern. Conf.
on Software Engineering, pages 503-512, 2002.
[19] D. Petkova and W. B. Croft. Hierarchical language models for expert
finding in enterprise corpora. In Proc. ICTAI 2006, pages 599-608,
2006.
[20] I. Soboroff, A. de Vries, and N. Craswell. Overview of the TREC
2006 Enterprise Track. In TREC 2006 Working Notes, 2006.
[21] T. Tao, X. Wang, Q. Mei, and C. Zhai. Language model information
retrieval with document expansion. In HLT-NAACL 2006, 2006.
[22] TREC. Enterprise track, 2005. URL: http://www.ins.cwi.
nl/projects/trec-ent/wiki/.
[23] G. van Noord. TextCat Language Guesser. URL: http://www.
let.rug.nl/~vannoord/TextCat/.
[24] W3C. The W3C test collection, 2005. URL: http://research.
microsoft.com/users/nickcr/w3c-summary.html. | language model;broad expertise retrieval;expertise search;expert finding task;expert colleague;bayes' theorem;organizational structure;baseline model;intranet search;trec enterprise track;baseline expertise retrieval method;co-occurrence;expert find;generative language modeling;sparse datum environment;topicality and organizational structure |
train_H-47 | A Semantic Approach to Contextual Advertising | Contextual advertising or Context Match (CM) refers to the placement of commercial textual advertisements within the content of a generic web page, while Sponsored Search (SS) advertising consists in placing ads on result pages from a web search engine, with ads driven by the originating query. In CM there is usually an intermediary commercial ad-network entity in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between the publisher and the ad-network) and improving the user experience. With these goals in mind it is preferable to have ads relevant to the page content, rather than generic ads. The SS market developed quicker than the CM market, and most textual ads are still characterized by bid phrases representing those queries where the advertisers would like to have their ad displayed. Hence, the first technologies for CM have relied on previous solutions for SS, by simply extracting one or more phrases from the given page content, and displaying ads corresponding to searches on these phrases, in a purely syntactic approach. However, due to the vagaries of phrase extraction, and the lack of context, this approach leads to many irrelevant ads. To overcome this problem, we propose a system for contextual ad matching based on a combination of semantic and syntactic features. | 1. INTRODUCTION
Web advertising supports a large swath of today's Internet
ecosystem. The total internet advertiser spend in US alone
in 2006 is estimated at over 17 billion dollars with a growth
rate of almost 20% year over year. A large part of this
market consists of textual ads, that is, short text messages
usually marked as sponsored links or similar. The main
advertising channels used to distribute textual ads are:
1. Sponsored Search or Paid Search advertising which
consists in placing ads on the result pages from a web
search engine, with ads driven by the originating query.
All major current web search engines (Google, Yahoo!,
and Microsoft) support such ads and act
simultaneously as a search engine and an ad agency.
2. Contextual advertising or Context Match which refers
to the placement of commercial ads within the
content of a generic web page. In contextual advertising
usually there is a commercial intermediary, called an
ad-network, in charge of optimizing the ad selection
with the twin goal of increasing revenue (shared
between publisher and ad-network) and improving user
experience. Again, all major current web search
engines (Google, Yahoo!, and Microsoft) provide such
ad-networking services but there are also many smaller
players.
The SS market developed quicker than the CM market,
and most textual ads are still characterized by bid phrases
representing those queries where the advertisers would like
to have their ad displayed. (See [5] for a brief history).
However, today, almost all of the for-profit non-transactional
web sites (that is, sites that do not sell anything directly)
rely at least in part on revenue from context match. CM
supports sites that range from individual bloggers and small
niche communities to large publishers such as major
newspapers. Without this model, the web would be a lot smaller!
The prevalent pricing model for textual ads is that the
advertisers pay a certain amount for every click on the
advertisement (pay-per-click or PPC). There are also other
models used: pay-per-impression, where the advertisers pay
for the number of exposures of an ad and pay-per-action
where the advertiser pays only if the ad leads to a sale or
similar transaction. For simplicity, we only deal with the
PPC model in this paper.
Given a page, rather than placing generic ads, it seems
preferable to have ads related to the content to provide a
better user experience and thus to increase the probability
of clicks. This intuition is supported by the analogy to
conventional publishing where there are very successful
magazines (e.g. Vogue) where a majority of the content is topical
advertising (fashion in the case of Vogue) and by user
studies that have confirmed that increased relevance increases
the number of ad-clicks [4, 13].
Previous published approaches estimated the ad relevance
based on co-occurrence of the same words or phrases within
the ad and within the page (see [7, 8] and Section 3 for
more details). However targeting mechanisms based solely
on phrases found within the text of the page can lead to
problems: For example, a page about a famous golfer named
John Maytag might trigger an ad for Maytag
dishwashers since Maytag is a popular brand. Another example
could be a page describing the Chevy Tahoe truck (a
popular vehicle in US) triggering an ad about Lake Tahoe
vacations. Polysemy is not the only culprit: there is a (maybe
apocryphal) story about a lurid news item about a headless
body found in a suitcase triggering an ad for Samsonite
luggage! In all these examples the mismatch arises from the
fact that the ads are not appropriate for the context.
In order to solve this problem we propose a matching
mechanism that combines a semantic phase with the
traditional keyword matching, that is, a syntactic phase. The
semantic phase classifies the page and the ads into a
taxonomy of topics and uses the proximity of the ad and page
classes as a factor in the ad ranking formula. Hence we
favor ads that are topically related to the page and thus avoid
the pitfalls of the purely syntactic approach. Furthermore,
by using a hierarchical taxonomy we allow for the gradual
generalization of the ad search space in the case when there
are no ads matching the precise topic of the page. For
example if the page is about an event in curling, a rare winter
sport, and contains the words Alpine Meadows, the
system would still rank highly ads for skiing in Alpine Meadows
as these ads belong to the class skiing which is a sibling
of the class curling and both of these classes share the
parent winter sports.
In some sense, the taxonomy classes are used to select the
set of applicable ads and the keywords are used to narrow
down the search to concepts that are of too small
granularity to be in the taxonomy. The taxonomy contains nodes for
topics that do not change fast, for example, brands of digital
cameras, say Canon. The keywords capture the specificity
to a level that is more dynamic and granular. In the
digital camera example this would correspond to the level of a
particular model, say Canon SD450 whose advertising life
might be just a few months. Updating the taxonomy with
new nodes or even new vocabulary each time a new model
comes to the market is prohibitively expensive when we are
dealing with millions of manufacturers.
In addition to increased click through rate (CTR) due to
increased relevance, a significant but harder to quantify
benefit of the semantic-syntactic matching is that the resulting
page has a unified feel and improves the user experience. In
the Chevy Tahoe example above, the classifier would
establish that the page is about cars/automotive and only those
ads will be considered. Even if there are no ads for this
particular Chevy model, the chosen ads will still be within the
automotive domain.
To implement our approach we need to solve a challenging
problem: classify both pages and ads within a large
taxonomy (so that the topic granularity would be small enough)
with high precision (to reduce the probability of mis-match).
We evaluated several classifiers and taxonomies and in this
paper we present results using a taxonomy with close to
6000 nodes using a variation of Rocchio's classifier [9].
This classifier gave the best results in both page and ad
classification, and ultimately in ad relevance.
The paper proceeds as follows. In the next section we
review the basic principles of the contextual advertising.
Section 3 overviews the related work. Section 4 describes
the taxonomy and document classifier that were used for
page and ad classification. Section 5 describes the
semantic-syntactic method. In Section 6 we briefly discuss how to
search efficiently the ad space in order to return the top-k
ranked ads. Experimental evaluation is presented in
Section 7. Finally, Section 8 presents the concluding remarks.
2. OVERVIEW OF CONTEXTUAL
ADVERTISING
Contextual advertising is an interplay of four players:
• The publisher is the owner of the web pages on which
the advertising is displayed. The publisher typically
aims to maximize advertising revenue while providing
a good user experience.
• The advertiser provides the supply of ads. Usually
the activity of the advertisers are organized around
campaigns which are defined by a set of ads with a
particular temporal and thematic goal (e.g. sale of digital
cameras during the holiday season). As in traditional
advertising, the goal of the advertisers can be broadly
defined as the promotion of products or services.
• The ad network is a mediator between the advertiser
and the publisher and selects the ads that are put on
the pages. The ad-network shares the advertisement
revenue with the publisher.
• Users visit the web pages of the publisher and interact
with the ads.
Contextual advertising usually falls into the category of
direct marketing (as opposed to brand advertising), that is
advertising whose aim is a direct response where the
effect of a campaign is measured by the user reaction. One
of the advantages of online advertising in general and
contextual advertising in particular is that, compared to the
traditional media, it is relatively easy to measure the user
response. Usually the desired immediate reaction is for the
user to follow the link in the ad and visit the advertiser's
web site and, as noted, the prevalent financial model is that
the advertiser pays a certain amount for every click on the
advertisement (PPC). The revenue is shared between the
publisher and the network.
Context match advertising has grown from Sponsored Search
advertising, which consists in placing ads on the result pages
from a web search engine, with ads driven by the originating
query. In most networks, the amount paid by the advertiser
for each SS click is determined by an auction process where
the advertisers place bids on a search phrase, and their
position in the tower of ads displayed in conjunction with the
result is determined by their bid. Thus each ad is
annotated with one or more bid phrases. The bid phrase has no
direct bearing on the ad placement in CM. However, it is a
concise description of target ad audience as determined by
the advertiser and it has been shown to be an important
feature for successful CM ad placement [8]. In addition to
the bid phrase, an ad is also characterized by a title usually
displayed in a bold font, and an abstract or creative, which
is the few lines of text, usually less than 120 characters,
displayed on the page.
The ad-network model aligns the interests of the
publishers, advertisers and the network. In general, clicks bring
benefits to both the publisher and the ad network by
providing revenue, and to the advertiser by bringing traffic to
the target web site. The revenue of the network, given a
page p, can be estimated as:
R = \sum_{i=1..k} P(click|p, a_i) \cdot price(a_i, i),
where k is the number of ads displayed on page p and price(ai, i)
is the click-price of the current ad ai at position i. The
price in this model depends on the set of ads presented on
the page. Several models have been proposed to determine
the price, most of them based on generalizations of second
price auctions. However, for simplicity we ignore the pricing
model and concentrate on finding ads that will maximize the
first term of the product, that is we search for
\arg\max_i P(click|p, a_i).
Furthermore we assume that the probability of click for a
given ad and page is determined by its relevance score with
respect to the page, thus ignoring the positional effect of
the ad placement on the page. We assume that this is an
orthogonal factor to the relevance component and could be
easily incorporated in the model.
3. RELATED WORK
Online advertising in general and contextual advertising
in particular are emerging areas of research. The published
literature is very sparse. A study presented in [13] confirms
the intuition that ads need to be relevant to the user's
interest to avoid degrading the user's experience and increase
the probability of reaction.
A recent work by Ribeiro-Neto et. al. [8] examines a
number of strategies to match pages to ads based on extracted
keywords. The ads and pages are represented as vectors in
a vector space. The first five strategies proposed in that
work match the pages and the ads based on the cosine of
the angle between the ad vector and the page vector. To
find out the important part of the ad, the authors explore
using different ad sections (bid phrase, title, body) as a
basis for the ad vector. The winning strategy out of the first
five requires the bid phrase to appear on the page and then
ranks all such ads by the cosine of the union of all the ad
sections and the page vectors.
While both pages and ads are mapped to the same space,
there is a discrepancy (impedance mismatch) between the
vocabulary used in the ads and in the pages. Furthermore,
since in the vector model the dimensions are determined
by the number of unique words, plain cosine similarity will
not take into account synonyms. To solve this problem,
Ribeiro-Neto et al expand the page vocabulary with terms
from other similar pages weighted based on the overall
similarity of the origin page to the matched page, and show
improved matching precision.
In a follow-up work [7] the authors propose a method to
learn impact of individual features using genetic
programming to produce a matching function. The function is
represented as a tree composed of arithmetic operators and the log
function as internal nodes, and different numerical features
of the query and ad terms as leaves. The results show that
genetic programming finds matching functions that
significantly improve the matching compared to the best method
(without page side expansion) reported in [8].
Another approach to contextual advertising is to reduce it
to the problem of sponsored search advertising by
extracting phrases from the page and matching them with the bid
phrase of the ads. In [14] a system for phrase extraction is
described that used a variety of features to determine the
importance of page phrases for advertising purposes. The
system is trained with pages that have been hand
annotated with important phrases. The learning algorithm takes
into account features based on tf-idf, html meta data and
query logs to detect the most important phrases. During
evaluation, each page phrase up to length 5 is considered
as potential result and evaluated against a trained classifier.
In our work we also experimented with a phrase extractor
based on the work reported in [12]. While increasing slightly
the precision, it did not change the relative performance of
the explored algorithms.
4. PAGE AND AD CLASSIFICATION
4.1 Taxonomy Choice
The semantic match of the pages and the ads is performed
by classifying both into a common taxonomy. The
matching process requires that the taxonomy provides sufficient
differentiation between the common commercial topics. For
example, classifying all medical related pages into one node
will not result into a good classification since both sore
foot and flu pages will end up in the same node. The
ads suitable for these two concepts are, however, very
different. To obtain sufficient resolution, we used a taxonomy of
around 6000 nodes primarily built for classifying commercial
interest queries, rather than pages or ads. This taxonomy
has been commercially built by Yahoo! US. We will explain
below how we can use the same taxonomy to classify pages
and ads as well.
Each node in our source taxonomy is represented as a
collection of exemplary bid phrases or queries that correspond
to that node concept. Each node has on average around 100
queries. The queries placed in the taxonomy are high
volume queries and queries of high interest to advertisers, as
indicated by an unusually high cost-per-click (CPC) price.
The taxonomy has been populated by human editors
using keyword suggestions tools similar to the ones used by
ad networks to suggest keywords to advertisers. After
initial seeding with a few queries, using the provided tools a
human editor can add several hundreds queries to a given
node. Nevertheless, it has been a significant effort to
develop this 6000-nodes taxonomy and it has required several
person-years of work. A similar-in-spirit process for
building enterprise taxonomies via queries has been presented in
[6]. However, the details and tools are completely different.
Figure 1 provides some statistics about the taxonomy used
in this work.
4.2 Classification Method
As explained, the semantic phase of the matching relies
on ads and pages being topically close. Thus we need to
classify pages into the same taxonomy used to classify ads.
In this section we overview the methods we used to build a
page and an ad classifier pair. The detailed description and
evaluation of this process is outside the scope of this paper.
Given the taxonomy of queries (or bid-phrases - we use
these terms interchangeably) described in the previous
section, we tried three methods to build corresponding page
and ad classifiers. For the first two methods we tried to
find exemplary pages and ads for each concept as follows:
[Figure 1 charts (axis data omitted): "Number of Categories By Level" (x-axis: level, y-axis: number of categories); "Number of Children per Node" (x-axis: number of children, y-axis: number of nodes); "Queries per Node" (x-axis: number of queries, up to 500+, y-axis: number of nodes).]
Figure 1: Taxonomy statistics: categories per level; fanout for non-leaf nodes; and queries per node
We generated a page training set by running the queries in
the taxonomy over a Web search index and using the top
10 results after some filtering as documents labeled with the
query's label. On the ad side we generated a training set
for each class by selecting the ads that have a bid phrase
assigned to this class. Using these training sets we then trained
a hierarchical SVM [2] (one against all between every group
of siblings) and a log-regression [11] classifier. (The
second method differs from the first in the type of secondary
filtering used. This filtering eliminates low content pages,
pages deemed unsuitable for advertising, pages that lead to
excessive class confusion, etc.)
However, we obtained the best performance by using the
third document classifier, based on the information in the
source taxonomy queries only. For each taxonomy node we
concatenated all the exemplary queries into a single
meta-document. We then used the meta-document as a centroid
for a nearest-neighbor classifier based on Rocchio"s
framework [9] with only positive examples and no relevance
feedback. Each centroid is defined as a sum of the tf-idf values
of each term, normalized by the number of queries in the
class
c_j = \frac{1}{|C_j|} \sum_{q \in C_j} \frac{q}{\|q\|},
where cj is the centroid for class Cj; q iterates over the
queries in a particular class.
The classification is based on the cosine of the angle
between the document d and the centroid meta-documents:
C_{max} = \arg\max_{C_j \in C} \frac{c_j}{\|c_j\|} \cdot \frac{d}{\|d\|}
        = \arg\max_{C_j \in C} \frac{\sum_{i \in F} c_j^i \cdot d^i}{\sqrt{\sum_{i \in F} (c_j^i)^2} \, \sqrt{\sum_{i \in F} (d^i)^2}},
where F is the set of features. The score is normalized by the document and class length to produce comparable scores. The terms
c_j^i and d^i represent the weight of the i-th feature in the class centroid and the document, respectively. These weights are based
on the standard tf-idf formula. As the score of the max class is normalized with regard to document length, the scores for different
documents are comparable.
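The sketch below is a minimal reimplementation of this centroid-based classification, included for illustration only. It relies on scikit-learn's TfidfVectorizer for the tf-idf weighting and uses the tf-idf vector of each node's concatenated meta-document as the centroid, which stands in for the normalized sum of per-query vectors defined above.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

class CentroidTaxonomyClassifier:
    """Nearest-centroid (Rocchio-style) classifier: every taxonomy node is
    represented by the concatenation of its exemplary queries, and a document
    is assigned to the nodes whose centroids are closest in cosine similarity."""

    def __init__(self, node_queries):
        # node_queries: {node_id: [exemplary query strings for that node]}
        self.nodes = list(node_queries)
        meta_docs = [" ".join(node_queries[n]) for n in self.nodes]
        self.vectorizer = TfidfVectorizer()
        self.centroids = self.vectorizer.fit_transform(meta_docs)  # one row per node

    def classify(self, text, top_k=3):
        """Return the top_k (node, cosine score) pairs for a page or an ad."""
        vec = self.vectorizer.transform([text])
        scores = cosine_similarity(vec, self.centroids).ravel()
        best = np.argsort(-scores)[:top_k]
        return [(self.nodes[i], float(scores[i])) for i in best]
```

In the sketch, classify() returns the top few classes with their cosine scores, mirroring the small number of weighted class assignments per page or ad that the matching step below relies on.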
We conducted tests using professional editors to judge the
quality of page and ad class assignments. The tests showed
that for both ads and pages the Rocchio classifier returned
the best results, especially on the page side. This is
probably a result of the noise induced by using a search engine
to machine-generate training pages for the SVM and log-regression classifiers. It is an area of current investigation how to improve
the classification using a noisy training set. Based on the test results we decided to use Rocchio's classifier on both the ad and the
page side.
5. SEMANTIC-SYNTACTIC MATCHING
Contextual advertising systems process the content of the
page, extract features, and then search the ad space to find
the best matching ads. Given a page p and a set of ads
A = {a1 . . . as} we estimate the relative probability of click
P(click|p, a) with a score that captures the quality of the
match between the page and the ad. To find the best ads
for a page we rank the ads in A and select the top few for
display. The problem can be formally defined as matching
every page in the set of all pages P = {p1, . . . ppc} to one or
more ads in the set of ads. Each page is represented as a
set of page sections pi = {pi,1, pi,2 . . . pi,m}. The sections of
the page represent different structural parts, such as title,
metadata, body, headings, etc. In turn, each section is an
unordered bag of terms (keywords). A page is represented
by the union of the terms in each section:
p_i = \{pw_1^{s_1}, pw_2^{s_1}, \ldots, pw_m^{s_i}\},
where pw stands for a page word and the superscript
indicates the section of each term. A term can be a unigram or
a phrase extracted by a phrase extractor [12].
Similarly, we represent each ad as a set of sections a =
{a1, a2, . . . al}, each section in turn being an unordered set
of terms:
a_i = \{aw_1^{s_1}, aw_2^{s_1}, \ldots, aw_l^{s_j}\},
where aw is an ad word. The ads in our experiments have
3 sections: title, body, and bid phrase. In this work, to
produce the match score we use only the ad/page textual
information, leaving user information and other data for
future work.
Next, each page and ad term is associated with a weight
based on the tf-idf values. The tf value is determined based
on the individual ad sections. There are several choices for
the value of idf, based on different scopes. On the ad side,
it has been shown in previous work that the set of ads of
one campaign provide good scope for the estimation of idf
that leads to improved matching results [8]. However, in this
work for simplicity we do not take into account campaigns.
To combine the impact of the term's section and its tf-idf
score, the ad/page term weight is defined as:
tWeight(kw^{s_i}) = weightSection(S_i) \cdot tfidf(kw),
where tWeight stands for term weight and weightSection(Si)
is the weight assigned to a page or ad section. This weight
is a fixed parameter determined by experimentation.
Each ad and page is classified into the topical taxonomy.
We define these two mappings:
Tax(p_i) = \{pc_{i1}, \ldots, pc_{iu}\}, \quad Tax(a_j) = \{ac_{j1}, \ldots, ac_{jv}\},
where pc and ac are page and ad classes correspondingly.
Each assignment is associated with a weight given by the
classifier. The weights are normalized to sum to 1:
\sum_{c \in Tax(x_i)} cWeight(c) = 1,
where x_i is either a page or an ad, and cWeight(c) is the class weight, i.e., the normalized confidence assigned by the
classifier. The number of classes can vary between different pages
and ads. This corresponds to the number of topics a page/ad
can be associated with and is almost always in the range 1-4.
Now we define the relevance score of an ad ai and page
pi as a convex combination of the keyword (syntactic) and
classification (semantic) score:
Score(p_i, a_i) = \alpha \cdot TaxScore(Tax(p_i), Tax(a_i)) + (1 - \alpha) \cdot KeywordScore(p_i, a_i).
The parameter α determines the relative weight of the
taxonomy score and the keyword score.
To calculate the keyword score we use the vector space
model [1] where both the pages and ads are represented
in n-dimensional space - one dimension for each distinct
term. The magnitude of each dimension is determined by
the tWeight() formula. The keyword score is then defined as
the cosine of the angle between the page and the ad vectors:
KeywordScore(p_i, a_i) = \frac{\sum_{i \in K} tWeight(pw_i) \cdot tWeight(aw_i)}{\sqrt{\sum_{i \in K} (tWeight(pw_i))^2} \, \sqrt{\sum_{i \in K} (tWeight(aw_i))^2}},
where K is the set of all the keywords. The formula
assumes independence between the words in the pages and
ads. Furthermore, it ignores the order and the proximity of
the terms in the scoring. We experimented with the impact
of phrases and proximity on the keyword score and did not
see a substantial impact of these two factors.
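As an illustration of the syntactic side, the sketch below builds the section-weighted term vectors and computes the cosine KeywordScore. The section weights and the idf table are placeholders chosen for the example; the paper treats the section weights as fixed parameters determined by experimentation.

```python
import math
from collections import Counter

# Illustrative section weights; the paper fixes these by experimentation.
SECTION_WEIGHTS = {"title": 3.0, "bid_phrase": 2.0, "metadata": 1.5, "body": 1.0}

def term_weights(sections, idf):
    """tWeight(kw^s) = weightSection(s) * tf-idf(kw), accumulated over sections.
    sections maps a section name to its token list; idf maps term -> idf value."""
    weights = Counter()
    for name, tokens in sections.items():
        for term, count in Counter(tokens).items():
            weights[term] += SECTION_WEIGHTS.get(name, 1.0) * count * idf.get(term, 1.0)
    return weights

def keyword_score(page_w, ad_w):
    """Cosine of the angle between the weighted page and ad term vectors."""
    dot = sum(w * ad_w[t] for t, w in page_w.items() if t in ad_w)
    norm_p = math.sqrt(sum(w * w for w in page_w.values()))
    norm_a = math.sqrt(sum(w * w for w in ad_w.values()))
    return dot / (norm_p * norm_a) if norm_p and norm_a else 0.0
```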
We now turn to the definition of the TaxScore. This
function indicates the topical match between a given ad and
a page. As opposed to the keywords that are treated as
independent dimensions, here the classes (topics) are
organized into a hierarchy. One of the goals in the design of
the TaxScore function is to be able to generalize within the
taxonomy, that is accept topically related ads.
Generalization can help to place ads in cases when there is no ad that
matches both the category and the keywords of the page.
The example in Figure 2 illustrates this situation. In this
example, in the absence of ski ads, a page about skiing
containing the word Atomic could be matched to the available
snowboarding ad for the same brand.
In general we would like the match to be stronger when
both the ad and the page are classified into the same node
Figure 2: Two generalization paths
and weaker when the distance between the nodes in the
taxonomy gets larger. There are multiple ways to specify the
distance between two taxonomy nodes. Besides the above
requirement, this function should lend itself to an efficient
search of the ad space. Given a page we have to find the
ad in a few milliseconds, as this impacts the presentation to
a waiting user. This will be further discussed in the next
section.
To capture both the generalization and efficiency
requirements we define the TaxScore function as follows:
TaxScore(PC, AC) = \sum_{pc \in PC} \sum_{ac \in AC} idist(LCA(pc, ac), ac) \cdot cWeight(pc) \cdot cWeight(ac),
In this function we consider every combination of page class
and ad class. For each combination we multiply the product
of the class weights with the inverse distance function
between the least common ancestor of the two classes (LCA)
and the ad class. The inverse distance function idist(c1, c2)
takes two nodes on the same path in the class taxonomy
and returns a number in the interval [0, 1] depending on the
distance of the two class nodes. It returns 1 if the two nodes
are the same, and declines toward 0 when LCA(pc, ac) or ac
is the root of the taxonomy. The rate of decline determines
the weight of the generalization versus the other terms in
the scoring formula.
To determine the rate of decline we consider the impact
on the specificity of the match when we substitute a class
with one of its ancestors. In general the impact is topic
dependent. For example the node Hobby in our taxonomy
has tens of children, each representing a different hobby, two
examples being Sailing and Knitting. Placing an ad
about Knitting on a page about Sailing does not make
lots of sense. However, in the Winter Sports example
above, in the absence of better alternative, skiing ads could
be put on snowboarding pages as they might promote the
same venues, equipment vendors etc. Such detailed analysis
on a case by case basis is prohibitively expensive due to the
size of the taxonomy.
One option is to use the confidences of the ancestor classes
as given by the classifier. However we found these
numbers not suitable for this purpose as the magnitude of the
confidences does not necessarily decrease when going up the
tree. Another option is to use explore-exploit methods based
on machine-learning from the click feedback as described
in [10]. However for simplicity, in this work we chose a
simple heuristic to determine the cost of generalization from a
child to a parent. In this heuristic we look at the
broadening of the scope when moving from a child to a parent. We
estimate the broadening by the density of ads classified in
the parent nodes vs the child node. The density is obtained
by classifying a large set of ads in the taxonomy using the
document classifier described above. Based on this, let nc
be the number of documents classified into the subtree rooted
at c. Then we define:
idist(c, p) = n_c / n_p,
where c represents the child node and p is the parent node.
This fraction can be viewed as a probability of an ad
belonging to the parent topic being suitable for the child topic.
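Putting the semantic side together, the sketch below computes TaxScore for a classified page/ad pair. The taxonomy is represented as a child-to-parent map, n_sub[c] is the number of ads classified into the subtree rooted at c, and extending idist beyond a single child-parent step by telescoping the per-level ratios (so it reduces to n_ac / n_lca) is one consistent reading of the definition above rather than something the text spells out.

```python
def ancestors(node, parent):
    """Path from node up to the taxonomy root, inclusive (node itself first)."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def tax_score(page_classes, ad_classes, parent, n_sub):
    """TaxScore(PC, AC) = sum over (pc, ac) of idist(LCA(pc, ac), ac)
    * cWeight(pc) * cWeight(ac).  page_classes / ad_classes map a class to
    its normalized classifier confidence; parent maps child -> parent;
    n_sub[c] counts the ads classified into the subtree rooted at c."""
    score = 0.0
    for pc, w_pc in page_classes.items():
        pc_path = set(ancestors(pc, parent))
        for ac, w_ac in ad_classes.items():
            # Lowest common ancestor: first node on ac's path that also lies
            # on pc's path to the root.
            lca = next((n for n in ancestors(ac, parent) if n in pc_path), None)
            if lca is None:
                continue
            # Telescoping n_child/n_parent along ac -> lca gives n_ac / n_lca;
            # this equals 1 when the page and the ad share the same class.
            idist = n_sub.get(ac, 1) / n_sub.get(lca, 1)
            score += idist * w_pc * w_ac
    return score
```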
6. SEARCHING THE AD SPACE
In the previous section we discussed the choice of scoring
function to estimate the match between an ad and a page.
The top-k ads with the highest score are offered by the
system for placement on the publisher's page. The process of
score calculation and ad selection is to be done in real time
and therefore must be very efficient. As the ad collections
are in the range of hundreds of millions of entries, there is a
need for indexed access to the ads.
Inverted indexes provide scalable and low latency
solutions for searching documents. However, these have been
traditionally used to search based on keywords. To be able
to search the ads on a combination of keywords and classes
we have mapped the classification match to term match and
adapted the scoring function to be suitable for fast
evaluation over inverted indexes. In this section we overview the
ad indexing and the ranking function of our prototype ad
search system for matching ads and pages.
We used a standard inverted index framework where there
is one posting list for each distinct term. The ads are parsed
into terms and each term is associated with a weight based
on the section in which it appears. Weights from distinct
occurrences of a term in an ad are added together, so that
the posting lists contain one entry per term/ad combination.
The next challenge is how to index the ads so that the class information is preserved in the index. A simple method is to
create unique meta-terms for the classes and annotate each
ad with one meta-term for each assigned class. However
this method does not allow for generalization, since only the
ads matching an exact label of the page would be selected.
Therefore we annotated each ad with one meta-term for each
ancestor of the assigned class. The weights of meta-terms
are assigned according to the value of the idist() function
defined in the previous section. On the query side, given the
keywords and the class of a page, we compose a keyword only
query by inserting one class term for each ancestor of the
classes assigned to the page.
The scoring function is adapted into a two-part score: one part for the class meta-terms and another for the text terms.
During the class score calculation, for each class path we use
only the lowest class meta-term, ignoring the others. For
example, if an ad belongs to the Skiing class and is
annotated with both Skiing and its parent Winter Sports,
the index will contain the special class meta-terms for both
Skiing and Winter Sports (and all their ancestors) with
the weights according to the product of the classifier
confidence and the idist function. When matching with a page
classified into Skiing, the query will contain terms for class
Skiing and for each of its ancestors. However when scoring
an ad classified into Skiing we will use the weight for the
Skiing class meta-term. Ads classified into
Snowboarding will be scored using the weight of the Winter Sports
meta-term. To make this check efficiently we keep a sorted
list of all the class paths and, at scoring time, we search the
paths bottom up for a meta-term appearing in the ad. The
first meta-term is used for scoring, the rest are ignored.
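The toy sketch below mimics this scheme outside of an inverted index: each ad is annotated with one meta-term per ancestor of its class, weighted by the product of the classifier confidence and idist, and at scoring time only the lowest (most specific) meta-term found on the page's class path contributes. It illustrates the idea only; the telescoped idist and the parent map are the same assumptions as in the previous sketch.

```python
def ancestors(node, parent):
    """Same helper as in the TaxScore sketch above."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def class_meta_terms(ad_class, confidence, parent, n_sub):
    """Annotate an ad with one meta-term per ancestor of its class, weighted
    by classifier confidence times the (telescoped) idist generalization cost."""
    terms = {}
    for anc in ancestors(ad_class, parent):
        ratio = n_sub.get(ad_class, 1) / n_sub.get(anc, 1)
        terms["CLASS:" + anc] = confidence * ratio
    return terms

def class_score(page_class, ad_meta_terms, parent):
    """Query side: walk the page's class path bottom-up and score the ad with
    the first (lowest, most specific) meta-term it contains; ignore the rest."""
    for anc in ancestors(page_class, parent):
        key = "CLASS:" + anc
        if key in ad_meta_terms:
            return ad_meta_terms[key]
    return 0.0
```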
At runtime, we evaluate the query using a variant of the
WAND algorithm [3]. This is a document-at-a-time
algorithm [1] that uses a branch-and-bound approach to derive
efficient moves for the cursors associated to the postings
lists. It finds the next cursor to be moved based on an
upper bound of the score for the documents at which the
cursors are currently positioned. The algorithm keeps a heap of
current best candidates. Documents with an upper bound
smaller than the current minimum score among the
candidate documents can be eliminated from further
considerations, and thus the cursors can skip over them. To find the
upper bound for a document, the algorithm assumes that all
cursors that are before it will hit this document (i.e. the
document contains all those terms represented by cursors before
or at that document). It has been shown that WAND can
be used with any function that is monotonic with respect to
the number of matching terms in the document.
Our scoring function is monotonic since the score can
never decrease when more terms are found in the document.
In the special case when we add a cursor representing an
ancestor of a class term already factored in the score, the
score simply does not change (we add 0). Given these
properties, we use an adaptation of the WAND algorithm where
we change the calculation of the scoring function and the
upper bound score calculation to reflect our scoring function.
The rest of the algorithm remains unchanged.
7. EXPERIMENTAL EVALUATION
7.1 Data and Methodology
We quantify the effect of the semantic-syntactic matching
using a set of 105 pages. This set of pages was selected
by a random sample of a larger set of around 20 million
pages with contextual advertising. Ads for each of these
pages have been selected from a larger pool of ads (tens of
millions) by previous experiments conducted by Yahoo! US
for other purposes. Each page-ad pair has been judged by
three or more human judges on a 1 to 3 scale:
1. Relevant The ad is semantically directly related to
the main subject of the page. For example if the page
is about the National Football League and the ad is
about tickets for NFL games, it would be scored as 1.
2. Somewhat relevant The ad is related to the
secondary subject of the page, or is related to the main
topic of the page in a general way. In the NFL page
example, an ad about NFL branded products would
be judged as 2.
3. Irrelevant The ad is unrelated to the page. For
example a mention of the NFL player John Maytag triggers
washing machine ads on a NFL page.
pages 105
words per page 868
judgments 2946
judg. inter-editor agreement 84%
unique ads 2680
unique ads per page 25.5
page classification precision 70%
ad classification precision 86%
Table 1: Dataset statistics
To obtain a score for a page-ad pair we average all the scores
and then round to the closest integer. We then used these
judgments to evaluate how well our methods distinguish the
positive and the negative ad assignments for each page. The
statistics of the page dataset are given in Table 1.
The original experiments that paired the pages and the
ads are loosely related to the syntactic keyword based
matching and classification based assignment but used different
taxonomies and keyword extraction techniques. Therefore
we could not use standard pooling as an evaluation method
since we did not control the way the pairs were selected and
could not precisely establish the set of ads from which the
placed ads were selected.
Instead, in our evaluation for each page we consider only
those ads for which we have judgment. Each different method
was applied to this set and the ads were ranked by the score.
The relative effectiveness of the algorithms were judged by
comparing how well the methods separated the ads with
positive judgment from the ads with negative judgment. We
present precision on various levels of recall within this set.
As the set of ads per page is relatively small, the evaluation
reports precision that is higher than it would be with a larger
set of negative ads. However, these numbers still establish
the relative performance of the algorithms and we can use
it to evaluate performance at different score thresholds.
In addition to the precision-recall over the judged ads,
we also present Kendall's τ rank correlation coefficient to
establish how far from the perfect ordering are the orderings
produced by our ranking algorithms. For this test we ranked
the judged ads by the scores assigned by the judges and then
compared this order to the order assigned by our algorithms.
Finally we also examined the precision at position 1, 3 and
5.
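For reference, the sketch below computes the two kinds of numbers reported next, precision at a cutoff and Kendall's τ, from per-page judgments and model scores. It uses scipy's kendalltau, whose tau-b treatment of ties approximates, but is not identical to, the middle-rank convention described above.

```python
import numpy as np
from scipy.stats import kendalltau

def precision_at_k(judgments, scores, k, positive=(1, 2)):
    """Fraction of the top-k ads (ranked by model score) judged relevant,
    where judgments 1 and 2 count as positive and 3 as negative."""
    order = np.argsort(-np.asarray(scores))[:k]
    return sum(1 for i in order if judgments[i] in positive) / k

def rank_agreement(judgments, scores):
    """Kendall tau between the editorial ordering (lower judgment = better)
    and the model ordering (higher score = better)."""
    tau, _ = kendalltau(-np.asarray(judgments), np.asarray(scores))
    return tau
```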
7.2 Results
Figure 3 shows the precision recall curves for the
syntactic matching (keywords only used) vs. a syntactic-semantic
matching with the optimal value of α = 0.8 (judged by the
11-point score [1]). In this figure, we assume that the
adpage pairs judged with 1 or 2 are positive examples and the
3s are negative examples. We also examined counting only
the pairs judged with 1 as positive examples and did not
find a significant change in the relative performance of the
tested methods. Overlaid are also results using phrases in
the keyword match. We see that these additional features
do not change the relative performance of the algorithm.
The graphs show significant impact of the class
information, especially in the mid range of recall values. In the
low recall part of the chart the curves meet. This indicates
that when the keyword match is really strong (i.e. when
the ad is almost contained within the page) the precision
[Figure 3 plot (data points omitted): Precision (y-axis) vs. Recall (x-axis); curves: Alpha=.9, no phrase; Alpha=0, no phrase; Alpha=0, phrase; Alpha=.9, phrase.]
Figure 3: Data Set 2: Precision vs. Recall of
syntactic match (α = 0) vs. syntactic-semantic match
(α = 0.8)
α Kendall's τ
α = 0 0.086
α = 0.25 0.155
α = 0.50 0.166
α = 0.75 0.158
α = 1 0.136
Table 2: Kendall's τ for different α values
of the syntactic keyword match is comparable with that of
the semantic-syntactic match. Note however that the two methods might produce different ads and could be used to complement
each other at this level of recall.
The semantic component provides the largest lift in precision in the mid range of recall, where a 25% improvement is achieved by
using the class information for ad placement.
This means that when there is somewhat of a match
between the ad and the page, the restriction to the right classes
provides a better scope for selecting the ads.
Table 2 shows Kendall's τ values for different values of
α. We calculated the values by ranking all the judged ads for
each page and averaging the values over all the pages. The
ads with tied judgment were given the rank of the middle
position. The results show that when we take into account
all the ad-page pairs, the optimal value of α is around 0.5.
Note that purely syntactic match (α = 0) is by far the
weakest method.
Figure 4 shows the effect of the parameter α in the scoring.
We see that in most cases precision grows or is flat when we
increase α, except at low levels of recall where, due to the small number of data points, there is a bit of jitter in the results.
Table 3 shows the precision at positions 1, 3 and 5. Again,
the purely syntactic method has clearly the lowest score by
individual positions and the total number of correctly placed
ads. The numbers are close due to the small number of the
ads considered, but there are still some noticeable trends.
For position 1 the optimal α is in the range of 0.25 to 0.75.
For positions 3 and 5 the optimum is at α = 1. This also
indicates that for those ads that have a very high keyword
score, the semantic information is somewhat less important.
If almost all the words in an ad appear in the page, this ad is likely to be relevant for this page.
[Figure 4 plot (data points omitted): Precision vs. Alpha for Different Levels of Recall (Data Set 2); x-axis: Alpha, y-axis: Precision; curves for 80%, 60%, 40%, and 20% recall.]
Figure 4: Impact of α on precision for different levels
of recall
α #1 #3 #5 sum
α = 0 80 70 68 218
α = 0.25 89 76 73 238
α = 0.5 89 74 73 236
α = 0.75 89 78 73 240
α = 1 86 79 74 239
Table 3: Precision at position 1, 3 and 5
However, when there is no
such clear affinity, the class information becomes a dominant
factor.
8. CONCLUSION
Contextual advertising is the economic engine behind a
large number of non-transactional sites on the Web. Studies
have shown that one of the main success factors for
contextual ads is their relevance to the surrounding content. All
existing commercial contextual match solutions known to us
evolved from search advertising solutions whereby a search
query is matched to the bid phrase of the ads. A natural
extension of search advertising is to extract phrases from the
page and match them to the bid phrase of the ads. However,
individual phrases and words might have multiple meanings
and/or be unrelated to the overall topic of the page, leading to mismatched ads.
In this paper we proposed a novel way of matching
advertisements to web pages that relies on a topical (semantic)
match as a major component of the relevance score. The
semantic match relies on the classification of pages and ads
into a 6000-node commercial advertising taxonomy to
determine their topical distance. As the classification relies
on the full content of the page, it is more robust than
individual page phrases. The semantic match is complemented
with a syntactic match and the final score is a convex
combination of the two sub-scores with the relative weight of
each determined by a parameter α.
We evaluated the semantic-syntactic approach against a
syntactic approach over a set of pages with different
contextual advertising. As shown in our experimental evaluation,
the optimal value of the parameter α depends on the precise
objective of optimization (precision at a particular position, precision at a given recall). However, in all cases the optimal value of α is between 0.25 and 0.9, indicating a significant effect of the semantic score component. The effectiveness of the syntactic match depends on the quality of the pages used. In lower-quality pages we are more likely to make classification errors, which then negatively impact the matching. We demonstrated that it is feasible to build a large-scale classifier with sufficiently good precision for this application.
We are currently examining how to employ machine
learning algorithms to learn the optimal value of α based on a
collection of features of the input pages.
9. REFERENCES
[1] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information
Retrieval. ACM, 1999.
[2] Bernhard E. Boser, Isabelle Guyon, and Vladimir Vapnik.
A training algorithm for optimal margin classifiers. In
Computational Learning Theory, pages 144-152, 1992.
[3] A. Z. Broder, D. Carmel, M. Herscovici, A. Soffer, and
J. Zien. Efficient query evaluation using a two-level
retrieval process. In CIKM '03: Proc. of the twelfth intl.
conf. on Information and knowledge management, pages
426-434, New York, NY, 2003. ACM.
[4] P. Chatterjee, D. L. Hoffman, and T. P. Novak. Modeling
the clickstream: Implications for web-based advertising
efforts. Marketing Science, 22(4):520-541, 2003.
[5] D. Fain and J. Pedersen. Sponsored search: A brief history.
In Proc. of the Second Workshop on Sponsored Search Auctions, 2006. Web publication.
[6] S. C. Gates, W. Teiken, and K.-S. F. Cheng. Taxonomies by the numbers: building high-performance taxonomies. In CIKM '05: Proc. of the 14th ACM intl. conf. on
Information and knowledge management, pages 568-577,
New York, NY, 2005. ACM.
[7] A. Lacerda, M. Cristo, M. A. Gonçalves, W. Fan, N. Ziviani, and B. Ribeiro-Neto. Learning to advertise. In SIGIR '06:
Proc. of the 29th annual intl. ACM SIGIR conf., pages
549-556, New York, NY, 2006. ACM.
[8] B. Ribeiro-Neto, M. Cristo, P. B. Golgher, and E. S.
de Moura. Impedance coupling in content-targeted
advertising. In SIGIR '05: Proc. of the 28th annual intl.
ACM SIGIR conf., pages 496-503, New York, NY, 2005.
ACM.
[9] J. Rocchio. Relevance feedback in information retrieval. In
The SMART Retrieval System: Experiments in Automatic
Document Processing, pages 313-323. PrenticeHall, 1971.
[10] S. Pandey, D. Agarwal, D. Chakrabarti, and V. Josifovski. Bandits for taxonomies: A model-based approach. In Proc. of the SIAM intl. conf. on Data Mining, 2007.
[11] T. Santner and D. Duffy. The Statistical Analysis of
Discrete Data. Springer-Verlag, 1989.
[12] R. Stata, K. Bharat, and F. Maghoul. The term vector
database: fast access to indexing terms for web pages.
Computer Networks, 33(1-6):247-255, 2000.
[13] C. Wang, P. Zhang, R. Choi, and M. D. Eredita.
Understanding consumers attitude toward advertising. In
Eighth Americas conf. on Information System, pages
1143-1148, 2002.
[14] W. Yih, J. Goodman, and V. R. Carvalho. Finding
advertising keywords on web pages. In WWW '06: Proc. of
the 15th intl. conf. on World Wide Web, pages 213-222,
New York, NY, 2006. ACM. | pay-per-click;semantics;contextual advertising;matching mechanism;semantic-syntactic matching;match;keyword matching;document classifier;ad relevance;contextual advertise;topical distance;top-k ad;hierarchical taxonomy class |
train_H-48 | A New Approach for Evaluating Query Expansion: Query-Document Term Mismatch | The effectiveness of information retrieval (IR) systems is influenced by the degree of term overlap between user queries and relevant documents. Query-document term mismatch, whether partial or total, is a fact that must be dealt with by IR systems. Query Expansion (QE) is one method for dealing with term mismatch. IR systems implementing query expansion are typically evaluated by executing each query twice, with and without query expansion, and then comparing the two result sets. While this measures an overall change in performance, it does not directly measure the effectiveness of IR systems in overcoming the inherent issue of term mismatch between the query and relevant documents, nor does it provide any insight into how such systems would behave in the presence of query-document term mismatch. In this paper, we propose a new approach for evaluating query expansion techniques. The proposed approach is attractive because it provides an estimate of system performance under varying degrees of query-document term mismatch, it makes use of readily available test collections, and it does not require any additional relevance judgments or any form of manual processing. | 1. INTRODUCTION
In our domain,1
and unlike web search, it is very
important for attorneys to find all documents (e.g., cases) that
are relevant to an issue. Missing relevant documents may
have non-trivial consequences on the outcome of a court
proceeding. Attorneys are especially concerned about missing
relevant documents when researching a legal topic that is
new to them, as they may not be aware of all language
variations in such topics. Therefore, it is important to develop
information retrieval systems that are robust with respect to
language variations or term mismatch between queries and
relevant documents. During our work on developing such
systems, we concluded that current evaluation methods are
not sufficient for this purpose.
{Whooping cough, pertussis}, {heart attack, myocardial infarction}, {car wash, automobile cleaning}, and {attorney, legal counsel, lawyer} are all examples of sets of terms that share the same meaning. Often, the terms chosen by users in their
queries are different than those appearing in the documents
relevant to their information needs. This query-document
term mismatch arises from two sources: (1) the synonymy
found in natural language, both at the term and the phrasal
level, and (2) the degree to which the user is an expert at
searching and/or has expert knowledge in the domain of the
collection being searched.
IR evaluations are comparative in nature (cf. TREC).
Generally, IR evaluations show how System A did in
relation to System B on the same test collection based on various
precision- and recall-based metrics. Similarly, IR systems
with QE capabilities are typically evaluated by executing
each search twice, once with and once without query
expansion, and then comparing the two result sets. While this
approach shows which system may have performed better
overall with respect to a particular test collection, it does
not directly or systematically measure the effectiveness of
IR systems in overcoming query-document term mismatch.
If the goal of QE is to increase search performance by
mitigating the effects of query-document term mismatch, then
the degree to which a system does so should be measurable
in evaluation. An effective evaluation method should
measure the performance of IR systems under varying degrees of
query-document term mismatch, not just in terms of overall
performance on a collection relative to another system.
1
Thomson Corporation builds information based solutions
to the professional markets including legal, financial, health
care, scientific, and tax and accounting.
In order to measure whether a particular IR system is able to overcome query-document term mismatch by retrieving documents that are relevant to a user's query, but that do
not necessarily contain the query terms themselves, we
systematically introduce term mismatch into the test collection
by removing query terms from known relevant documents.
Because we are purposely inducing term mismatch between
the queries and known relevant documents in our test
collections, the proposed evaluation framework is able to measure
the effectiveness of QE in a way that testing on the whole
collection is not. If a QE search method finds a document
that is known to be relevant but that is nonetheless missing
query terms, it shows that the QE technique is indeed robust
with respect to query-document term mismatch.
2. RELATED WORK
Accounting for term mismatch between the terms in user
queries and the documents relevant to users' information
needs has been a fundamental issue in IR research for
almost 40 years [38, 37, 47]. Query expansion (QE) is one
technique used in IR to improve search performance by
increasing the likelihood of term overlap (either explicitly or
implicitly) between queries and documents that are relevant
to users" information needs. Explicit query expansion
occurs at run-time, based on the initial search results, as is
the case with relevance feedback and pseudo relevance
feedback [34, 37]. Implicit query expansion can be based on
statistical properties of the document collection, or it may
rely on external knowledge sources such as a thesaurus or an
ontology [32, 17, 26, 50, 51, 2]. Regardless of method, QE
algorithms that are capable of retrieving relevant documents
despite partial or total term mismatch between queries and
relevant documents should increase the recall of IR systems
(by retrieving documents that would have previously been
missed) as well as their precision (by retrieving more
relevant documents).
In practice, QE tends to improve the average overall
retrieval performance, doing so by improving performance on
some queries while making it worse on others. QE
techniques are judged as effective in the case that they help
more than they hurt overall on a particular collection [47,
45, 41, 27]. Often, the expansion terms added to a query
in the query expansion phase end up hurting the overall
retrieval performance because they introduce semantic noise,
causing the meaning of the query to drift. As such, much
work has been done with respect to different strategies for
choosing semantically relevant QE terms to include in order
to avoid query drift [34, 50, 51, 18, 24, 29, 30, 32, 3, 4, 5].
The evaluation of IR systems has received much attention
in the research community, both in terms of developing test
collections for the evaluation of different systems [11, 12, 13,
43] and in terms of the utility of evaluation metrics such as
recall, precision, mean average precision, precision at rank,
Bpref, etc. [7, 8, 44, 14]. In addition, there have been
comparative evaluations of different QE techniques on various
test collections [47, 45, 41].
In addition, the IR research community has given
attention to differences between the performance of individual
queries. Research efforts have been made to predict which
queries will be improved by QE and then selectively
applying it only to those queries [1, 5, 27, 29, 15, 48], to achieve
optimal overall performance. In addition, related work on
predicting query difficulty, or which queries are likely to
perform poorly, has been done [1, 4, 5, 9]. There is general
interest in the research community to improve the
robustness of IR systems by improving retrieval performance on
difficult queries, as is evidenced by the Robust track in the
TREC competitions and new evaluation measures such as
GMAP. GMAP (geometric mean average precision) gives
more weight to the lower end of the average precision (as
opposed to MAP), thereby emphasizing the degree to which
difficult or poorly performing queries contribute to the score
[33].
However, no attention is given to evaluating the
robustness of IR systems implementing QE with respect to
query-document term mismatch in quantifiable terms. By
purposely inducing mismatch between the terms in queries and
relevant documents, our evaluation framework allows us a
controlled manner in which to degrade the quality of the
queries with respect to their relevant documents, and then
to measure both the degree of (induced) difficulty of the
query and the degree to which QE improves the retrieval
performance of the degraded query.
The work most similar to our own in the literature consists
of work in which document collections or queries are altered
in a systematic way to measure differences query
performance. [42] introduces into the document collection
pseudowords that are ambiguous with respect to word sense, in
order to measure the degree to which word sense
disambiguation is useful in IR. [6] experiments with altering the
document collection by adding semantically related expansion
terms to documents at indexing time. In cross-language IR,
[28] explores different query expansion techniques while
purposely degrading their translation resources, in what amounts
to expanding a query with only a controlled percentage of
its translation terms. Although similar in introducing a
controlled amount of variance into their test collections, these
works differ from the work being presented in this paper
in that the work being presented here explicitly and
systematically measures query effectiveness in the presence of
query-document term mismatch.
3. METHODOLOGY
In order to accurately measure IR system performance in
the presence of query-term mismatch, we need to be able
to adjust the degree of term mismatch in a test corpus in
a principled manner. Our approach is to introduce
query-document term mismatch into a corpus in a controlled
manner and then measure the performance of IR systems as
the degree of term mismatch changes. We systematically
remove query terms from known relevant documents,
creating alternate versions of a test collection that differ only in
how many or which query terms have been removed from
the documents relevant to a particular query. Introducing
query-document term mismatch into the test collection in
this manner allows us to manipulate the degree of term
mismatch between relevant documents and queries in a
controlled manner.
This removal process affects only the relevant documents
in the search collection. The queries themselves remain
unaltered. Query terms are removed from documents one by
one, so the differences in IR system performance can be
measured with respect to missing terms. In the most extreme
case (i.e., when the length of the query is less than or equal
to the number of query terms removed from the relevant
documents), there will be no term overlap between a query
and its relevant documents. Notice that, for a given query,
only relevant documents are modified. Non-relevant
documents are left unchanged, even in the case that they contain
query terms.
Although, on the surface, we are changing the
distribution of terms between the relevant and non-relevant
document sets by removing query terms from the relevant
documents, doing so does not change the conceptual relevancy
of these documents. Systematically removing query terms
from known relevant documents introduces a controlled amount
of query-document term mismatch by which we can
evaluate the degree to which particular QE techniques are able
to retrieve conceptually relevant documents, despite a lack
of actual term overlap. Removing a query term from
relevant documents simply masks the presence of that query
term in those documents. It does not in any way change the
conceptual relevancy of the documents.
The evaluation framework presented in this paper consists
of three elements: a test collection, C; a strategy for selecting
which query terms to remove from the relevant documents in
that collection, S; and a metric by which to compare
performance of the IR systems, m. The test collection, C, consists
of a document collection, queries, and relevance judgments.
The strategy, S, determines the order and manner in which
query terms are removed from the relevant documents in C.
This evaluation framework is not metric-specific; any metric
(MAP, P@10, recall, etc.) can be used to measure IR system
performance.
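The core of the framework, masking query terms in the relevant documents only, can be sketched as follows. This is an illustration rather than the authors' implementation: documents are represented as token lists, qrels maps each query to its judged-relevant document ids, and terms_to_mask is the set chosen by the removal strategy S for that query.

def mask_query_terms(docs, qrels, query_id, terms_to_mask):
    # Build an alternate version of the collection in which the selected query
    # terms are removed from the documents relevant to query_id; non-relevant
    # documents (and the query itself) are left untouched.
    relevant = qrels.get(query_id, set())
    masked = {}
    for doc_id, tokens in docs.items():
        if doc_id in relevant:
            masked[doc_id] = [t for t in tokens if t not in terms_to_mask]
        else:
            masked[doc_id] = list(tokens)
    return masked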
Although test collections are difficult to come by, it should
be noted that this evaluation framework can be used on
any available test collection. In fact, using this framework
stretches the value of existing test collections in that one
collection becomes several when query terms are removed from
relevant documents, thereby increasing the amount of
information that can be gained from evaluating on a particular
collection.
In other evaluations of QE effectiveness, the controlled
variable is simply whether or not queries have been expanded, compared in terms of some metric. In
contrast, the controlled variable in this framework is the query
term that has been removed from the documents relevant to
that query, as determined by the removal strategy, S. Query
terms are removed one by one, in a manner and order
determined by S, so that collections differ only with respect
to the one term that has been removed (or masked) in the
documents relevant to that query. It is in this way that we
can explicitly measure the degree to which an IR system
overcomes query-document term mismatch.
The choice of a query term removal strategy is relatively
flexible; the only restriction in choosing a strategy S is that
query terms must be removed one at a time. Two
decisions must be made when choosing a removal strategy S.
The first is the order in which S removes terms from the
relevant documents. Possible orders for removal could be
based on metrics such as IDF or the global probability of a
term in a document collection. Based on the purpose of the
evaluation and the retrieval algorithm being used, it might
make more sense to choose a removal order for S based on
query term IDF or perhaps based on a measure of query
term probability in the document collection.
Once an order for removal has been decided, a manner for
term removal/masking must be decided. It must be
determined if S will remove the terms individually (i.e., remove
just one different term each time) or additively (i.e., remove
one term first, then that term in addition to another, and so
on). The incremental additive removal of query terms from
relevant documents allows the evaluation to show the
degree to which IR system performance degrades as more and
more query terms are missing, thereby increasing the degree
of query-document term mismatch. Removing terms
individually allows for a clear comparison of the contribution of
QE in the absence of each individual query term.
4. EXPERIMENTAL SET-UP
4.1 IR Systems
We used the proposed evaluation framework to evaluate
four IR systems on two test collections. Of the four
systems used in the evaluation, two implement query
expansion techniques: Okapi (with pseudo-feedback for QE), and
a proprietary concept search engine (we'll call it TCS, for
Thomson Concept Search). TCS is a language modeling
based retrieval engine that utilizes a subject-appropriate
external corpus (i.e., legal or news) as a knowledge source.
This external knowledge source is a corpus separate from,
but thematically related to, the document collection to be
searched. Translation probabilities for QE [2] are calculated
from these large external corpora.
Okapi (without feedback) and a language model query
likelihood (QL) model (implemented using Indri) are
included as keyword-only baselines. Okapi without feedback
is intended as an analogous baseline for Okapi with
feedback, and the QL model is intended as an appropriate
baseline for TCS, as they both implement language-modeling
based retrieval algorithms. We choose these as baselines
because they are dependent only on the words appearing in
the queries and have no QE capabilities. As a result, we
expect that when query terms are removed from relevant
documents, the performance of these systems should degrade
more dramatically than their counterparts that implement
QE.
The Okapi and QL model results were obtained using the
Lemur Toolkit.2
Okapi was run with the parameters k1=1.2,
b=0.75, and k3=7. When run with feedback, the feedback
parameters used in Okapi were set at 10 documents and 25
terms. The QL model used Jelinek-Mercer smoothing, with
λ = 0.6.
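For reference, a minimal sketch of the Jelinek-Mercer smoothed query-likelihood score used by the QL baseline is given below. It assumes simple bag-of-words counts and takes λ as the weight on the collection model; smoothing conventions differ across toolkits, so this illustrates the model rather than the exact Lemur/Indri implementation.

import math
from collections import Counter

def ql_score(query_terms, doc_terms, coll_counts, coll_length, lam=0.6):
    # log P(Q|D) with P(w|D) = (1 - lam) * P_ml(w|D) + lam * P(w|C)
    doc_counts = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for w in query_terms:
        p_doc = doc_counts[w] / doc_len if doc_len else 0.0
        p_coll = coll_counts.get(w, 0) / coll_length
        p = (1.0 - lam) * p_doc + lam * p_coll
        if p > 0.0:           # terms unseen in the whole collection are skipped
            score += math.log(p)
    return score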
4.2 Test Collections
We evaluated the performance of the four IR systems
outlined above on two different test collections. The two test
collections used were the TREC AP89 collection (TIPSTER
disk 1) and the FSupp Collection.
The FSupp Collection is a proprietary collection of 11,953
case law documents for which we have 44 queries (ranging
from four to twenty-two words after stop word removal) with
full relevance judgments.3
The average length of documents
in the FSupp Collection is 3444 words.
2
www.lemurproject.org
3
Each of the 11,953 documents was evaluated by domain
experts with respect to each of the 44 queries.
The TREC AP89 test collection contains 84,678
documents, averaging 252 words in length. In our evaluation, we
used both the title and the description fields of topics
151-200 as queries, so we have two sets of results for the AP89
Collection. After stop word removal, the title queries range
from two to eleven words and the description queries range
from four to twenty-six terms.
4.3 Query Term Removal Strategy
In our experiments, we chose to sequentially and
additively remove query terms from highest-to-lowest inverse
document frequency (IDF) with respect to the entire
document collection. Terms with high IDF values tend to
influence document ranking more than those with lower IDF
values. Additionally, high IDF terms tend to be
domain-specific terms that are less likely to be known to non-expert users, hence we start by removing these first.
For the FSupp Collection, queries were evaluated
incrementally with one, two, three, five, and seven terms
removed from their corresponding relevant documents. The
longer description queries from TREC topics 151-200 were
likewise evaluated on the AP89 Collection with one, two,
three, five, and seven query terms removed from their
relevant documents. For the shorter TREC title queries, we
removed one, two, three, and five terms from the relevant
documents.
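A sketch of this removal strategy S is given below, with an illustrative IDF formula (df maps a term to its document frequency and num_docs is the collection size); each returned set could be applied with a masking routine like the one sketched in Section 3.

import math

def additive_removal_sets(query_terms, df, num_docs, cutoffs=(1, 2, 3, 5, 7)):
    # Order query terms from highest to lowest IDF and remove them additively.
    idf = {t: math.log(num_docs / (1.0 + df.get(t, 0))) for t in set(query_terms)}
    ordered = sorted(idf, key=idf.get, reverse=True)
    return {k: set(ordered[:k]) for k in cutoffs if k <= len(ordered)}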
4.4 Metrics
In this implementation of the evaluation framework, we
chose three metrics by which to compare IR system
performance: mean average precision (MAP), precision at 10
documents (P10), and recall at 1000 documents. Although
these are the metrics we chose to demonstrate this
framework, any appropriate IR metrics could be used within the
framework.
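For completeness, minimal sketches of these measures are shown below; ranking is a list of document ids ordered by score and relevant is the set of judged-relevant documents for the query (averaging the first measure over queries gives MAP). These are standard definitions, not the specific evaluation scripts used in the experiments.

def average_precision(ranking, relevant):
    hits, precisions = 0, []
    for i, doc_id in enumerate(ranking, start=1):
        if doc_id in relevant:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(relevant) if relevant else 0.0

def precision_at_k(ranking, relevant, k=10):
    return sum(1 for d in ranking[:k] if d in relevant) / k

def recall_at_k(ranking, relevant, k=1000):
    return sum(1 for d in ranking[:k] if d in relevant) / len(relevant) if relevant else 0.0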
5. RESULTS
5.1 FSupp Collection
Figures 1, 2, and 3 show the performance (in terms of
MAP, P10 and Recall, respectively) for the four search
engines on the FSupp Collection. As expected, the
performance of the keyword-only IR systems, QL and Okapi, drops
quickly as query terms are removed from the relevant
documents in the collection. The performance of Okapi with
feedback (Okapi FB) is somewhat surprising in that on the
original collection (i.e., prior to query term removal), its
performance is worse than that of Okapi without feedback on
all three measures.
TCS outperforms the QL keyword baseline on every
measure except for MAP on the original collection (i.e., prior
to removing any query terms). Because TCS employs
implicit query expansion using an external domain-specific
knowledge base, it is less sensitive to term removal (i.e.,
mismatch) than the Okapi FB, which relies on terms from
the top-ranked documents retrieved by an initial
keyword-only search. Because overall search engine performance is
frequently measured in terms of MAP, and because other
evaluations of QE often only consider performance on the
entire collection (i.e., they do not consider term mismatch),
the QE implemented in TCS would be considered (in another evaluation) to hurt performance on the FSupp Collection. However, when we look at the comparison of TCS to QL when query terms are removed from the relevant documents, we can see that the QE in TCS is indeed contributing positively to the search.
Figure 1: The performance of the four retrieval systems on the FSupp collection in terms of Mean Average Precision (MAP), as a function of the number of query terms removed (the horizontal axis).
Figure 2: The performance of the four retrieval systems on the FSupp collection in terms of Precision at 10, as a function of the number of query terms removed (the horizontal axis).
5.2 The AP89 Collection: using the
description queries
Figures 4, 5, and 6 show the performance of the four IR
systems on the AP89 Collection, using the TREC topic
descriptions as queries. The most interesting difference
between the performance on the FSupp Collection and the
AP89 collection is the reversal of Okapi FB and TCS. On
FSupp, TCS outperformed the other engines consistently
(see Figures 1, 2, and 3); on the AP89 Collection, Okapi
FB is clearly the best performer (see Figures 4, 5, and 6).
This is all the more interesting, based on the fact that QE in
Okapi FB takes place after the first search iteration, which
we would expect to be handicapped when query terms are removed.
Figure 3: The Recall (at 1000) of the four retrieval systems on the FSupp collection as a function of the number of query terms removed (the horizontal axis).
Figure 4: MAP of the four IR systems on the AP89 Collection, using TREC description queries. MAP is measured as a function of the number of query terms removed.
Figure 5: Precision at 10 of the four IR systems on the AP89 Collection, using TREC description queries. P at 10 is measured as a function of the number of query terms removed.
Figure 6: Recall (at 1000) of the four IR systems on the AP89 Collection, using TREC description queries, as a function of the number of query terms removed.
Looking at P10 in Figure 5, we can see that TCS and
Okapi FB score similarly on P10, starting at the point where
one query term is removed from relevant documents. At two
query terms removed, TCS starts outperforming Okapi FB.
If we model this in terms of expert versus non-expert users,
we could conclude that TCS might be a better search engine
for non-experts to use on the AP89 Collection, while Okapi
FB would be best for an expert searcher.
It is interesting to note that on each metric for the AP89
description queries, TCS performs more poorly than all the
other systems on the original collection, but quickly
surpasses the baseline systems and approaches Okapi FB's
performance as terms are removed. This is again a case where
the performance of a system on the entire collection is not
necessarily indicative of how it handles query-document term
mismatch.
5.3 The AP89 Collection: using the title queries
Figures 7, 8, and 9 show the performance of the four IR
systems on the AP89 Collection, using the TREC topic titles
as queries. As with the AP89 description queries, Okapi
FB is again the best performer of the four systems in the
evaluation. As before, the performance of the Okapi and
QL systems, the non-QE baseline systems, sharply degrades
as query terms are removed. On the shorter queries, TCS
seems to have a harder time catching up to the performance
of Okapi FB as terms are removed.
Perhaps the most interesting result from our evaluation
is that although the keyword-only baselines performed
consistently and as expected on both collections with respect
to query term removal from relevant documents, the
performances of the engines implementing QE techniques differed
dramatically between collections.
Figure 7: MAP of the four IR systems on the AP89
Collection, using TREC title queries and as a
function of the number of query terms removed.
Figure 8: Precision at 10 of the four IR systems on
the AP89 Collection, using TREC title queries, and
as a function of the number of query terms removed.
Figure 9: Recall (at 1000) of the four IR systems on
the AP89 Collection, using TREC title queries and
as a function of the number of query terms removed.
6. DISCUSSION
The intuition behind this evaluation framework is to
measure the degree to which various QE techniques overcome
term mismatch between queries and relevant documents. In
general, it is easy to evaluate the overall performance of
different techniques for QE in comparison to each other or
against a non-QE variant on any complete test collection.
Such an approach does tell us which systems perform better
on a complete test collection, but it does not measure the
ability of a particular QE technique to retrieve relevant
documents despite partial or complete term mismatch between
queries and relevant documents.
A systematic evaluation of IR systems as outlined in this
paper is useful not only with respect to measuring the
general success or failure of particular QE techniques in the
presence of query-document term mismatch, but it also
provides insight into how a particular IR system will perform
when used by expert versus non-expert users on a
particular collection. The less a user knows about the domain of
the document collection on which they are searching, the
more prevalent query-document term mismatch is likely to
be. This distinction is especially relevant in the case that
the test collection is domain-specific (i.e., medical or legal, as
opposed to a more general domain, such as news), where the
distinction between experts and non-experts may be more
marked. For example, a non-expert in the medical domain
might search for whooping cough, but relevant documents
might instead contain the medical term pertussis.
Since query terms are masked only in the relevant
documents, this evaluation framework is actually biased against
retrieving relevant documents. This is because non-relevant
documents may also contain query terms, which can cause
a retrieval system to rank such documents higher than it
would have before terms were masked in relevant documents.
Still, we think this is a more realistic scenario than removing
terms from all documents regardless of relevance.
The degree to which a QE technique is well-suited to a
particular collection can be evaluated in terms of its ability
to still find the relevant documents, even when they are
missing query terms, despite the bias of this approach against
relevant documents. However, given that Okapi FB and TCS
outperformed each other on the two different collections,
further investigation into the degree of compatibility between
QE expansion approach and target collection is probably
warranted. Furthermore, the investigation of other term
removal strategies could provide insight into the behavior of
different QE techniques and their overall impact on the user
experience.
As mentioned earlier, our choice of the term removal
strategy was motivated by (1) our desire to see the highest
impact on system performance as terms are removed and (2)
because high IDF terms, in our domain context, are more
likely to be domain specific, which allows us to better
understand the performance of an IR system as experienced
by expert and non-expert users.
Although not attempted in our experiments, another
application of this evaluation framework would be to remove
query terms individually, rather than incrementally, to
analyze which terms (or possibly which types of terms) are
being helped most by a QE technique on a particular test
collection. This could lead to insight as to when QE should
and should not be applied.
This evaluation framework allows us to see how IR
systems perform in the presence of query-document term
mismatch. In other evaluations, the performance of a system is
measured only on the entire collection, in which the degree
of query-document term mismatch is not known. By
systematically introducing this mismatch, we can see that even
if an IR system is not the best performer on the entire
collection, its performance may nonetheless be more robust to
query-document term mismatch than other systems. Such
robustness makes a system more user-friendly, especially to
non-expert users.
This paper presents a novel framework for IR system
evaluation, the applications of which are numerous. The results
presented in this paper are not by any means meant to be
exhaustive or entirely representative of the ways in which
this evaluation could be applied. To be sure, there is much
future work that could be done using this framework.
In addition to looking at average performance of IR
systems, the results of individual queries could be examined and
compared more closely, perhaps giving more insight into the
classification and prediction of difficult queries, or perhaps
showing which QE techniques improve (or degrade)
individual query performance under differing degrees of
query-document term mismatch. Indeed, this framework would
also benefit from further testing on a larger collection.
7. CONCLUSION
The proposed evaluation framework allows us to measure
the degree to which different IR systems overcome (or don't
overcome) term mismatch between queries and relevant
documents. Evaluations of IR systems employing QE performed
only on the entire collection do not take into account that
the purpose of QE is to mitigate the effects of term mismatch
in retrieval. By systematically removing query terms from
relevant documents, we can measure the degree to which
QE contributes to a search by showing the difference
between the performances of a QE system and its
keyword-only baseline when query terms have been removed from
known relevant documents. Further, we can model the
behavior of expert versus non-expert users by manipulating
the amount of query-document term mismatch introduced
into the collection.
The evaluation framework proposed in this paper is
attractive for several reasons. Most importantly, it provides
a controlled manner in which to measure the performance
of QE with respect to query-document term mismatch. In
addition, this framework takes advantage and stretches the
amount of information we can get from existing test
collections. Further, this evaluation framework is not
metric-specific: information in terms of any metric (MAP, P@10,
etc.) can be gained from evaluating an IR system this way.
It should also be noted that this framework is
generalizable to any IR system, in that it evaluates how well IR systems satisfy users' information needs as represented by their queries. An IR system that is easy to use should be good at retrieving documents that are relevant to users' information needs, even if the queries provided by the users do
not contain the same keywords as the relevant documents.
8. REFERENCES
[1] Amati, G., C. Carpineto, and G. Romano. Query
difficulty, robustness and selective application of query
expansion. In Proceedings of the 25th European
Conference on Information Retrieval (ECIR 2004),
pp. 127-137.
[2] Berger, A. and J.D. Lafferty. 1999. Information
retrieval as statistical translation. In Research and
Development in Information Retrieval, pages 222-229.
[3] Billerbeck, B., F. Scholer, H. E. Williams, and J.
Zobel. 2003. Query expansion using associated queries.
In Proceedings of CIKM 2003, pp. 2-9.
[4] Billerbeck, B., and J. Zobel. 2003. When Query
Expansion Fails. In Proceedings of SIGIR 2003, pp.
387-388.
[5] Billerbeck, B. and J. Zobel. 2004. Questioning Query
Expansion: An Examination of Behaviour and
Parameters. In Proceedings of the 15th Australasian
Database Conference (ADC2004), pp. 69-76.
[6] Billerbeck, B. and J. Zobel. 2005. Document
Expansion versus Query Expansion for ad-hoc
Retrieval. In Proceedings of the 10th Australasian
Document Computing Symposium.
[7] Buckley, C. and E.M. Voorhees. 2000. Evaluating
Evaluation Measure Stability. In Proceedings of SIGIR
2000, pp. 33-40.
[8] Buckley, C. and E.M. Voorhees. 2004. Retrieval
evaluation with incomplete information. In
Proceedings of SIGIR 2004, pp. 25-32.
[9] Carmel, D., E. Yom-Tov, A. Darlow, D. Pelleg. 2006.
What Makes A Query Difficult? In Proceedings of
SIGIR 2006, pp. 390-397.
[10] Carpineto, C., R. Mori and G. Romano. 1998.
Informative Term Selection for Automatic Query
Expansion. In The 7th Text REtrieval Conference,
pp.363:369.
[11] Carterette, B. and J. Allan. 2005. Incremental Test
Collections. In Proceedings of CIKM 2005, pp.
680-687.
[12] Carterette, B., J. Allan, and R. Sitaraman. 2006.
Minimal Test Collections for Retrieval Evaluation. In
Proceedings of SIGIR 2006, pp. 268-275.
[13] Cormack, G.V., C. R. Palmer, and C.L. Clarke. 1998.
Efficient Construction of Large Test Collections. In
Proceedings of SIGIR 1998, pp. 282-289.
[14] Cormack, G. and T.R. Lynam. 2006. Statistical
Precision of Information Retrieval Evaluation. In
Proceedings of SIGIR 2006, pp. 533-540.
[15] Cronen-Townsend, S., Y. Zhou, and W.B. Croft. 2004.
A Language Modeling Framework for Selective Query
Expansion, CIIR Technical Report.
[16] Efthimiadis, E.N. Query Expansion. 1996. In Martha
E. Williams (ed.), Annual Review of Information
Systems and Technology (ARIST), v31, pp 121- 187.
[17] Evans, D.A. and Lefferts, R.G. 1995. CLARIT-TREC
Experiments. Information Processing & Management.
31(3): 385-295.
[18] Fang, H. and C.X. Zhai. 2006. Semantic Term
Matching in Axiomatic Approaches to Information
Retrieval. In Proceedings of SIGIR 2006, pp. 115-122.
[19] Gao, J., J. Nie, G. Wu and G. Cao. 2004. Dependence
language model for information retrieval. In
Proceedings of SIGIR 2004, pp. 170-177.
[20] Harman, D.K. 1992. Relevance feedback revisited. In
Proceedings of ACM SIGIR 1992, pp. 1-10.
[21] Harman, D.K., ed. 1993. The First Text REtrieval
Conference (TREC-1): 1992.
[22] Harman, D.K., ed. 1994. The Second Text REtrieval
Conference (TREC-2): 1993.
[23] Harman, D.K., ed. 1995. The Third Text REtrieval
Conference (TREC-3): 1994.
[24] Harman, D.K., 1998. Towards Interactive Query
Expansion. In Proceedings of SIGIR 1998, pp. 321-331.
[25] Hofmann, T. 1999. Probabilistic latent semantic
indexing. In Proceedings of SIGIR 1999, pp 50-57.
[26] Jing, Y. and W.B. Croft. 1994. The Association
Thesaurus for Information Retrieval. In Proceedings of
RIAO 1994, pp. 146-160
[27] Lu, X.A. and R.B. Keefer. Query expansion/reduction
and its impact on retrieval effectiveness. In: D.K.
Harman, ed. The Third Text REtrieval Conference
(TREC-3). Gaithersburg, MD: National Institute of
Standards and Technology, 1995, pp. 231-239.
[28] McNamee, P. and J. Mayfield. 2002. Comparing
Cross-Language Query Expansion Techniques by
Degrading Translation Resources. In Proceedings of
SIGIR 2002, pp. 159-166.
[29] Mitra, M., A. Singhal, and C. Buckley. 1998.
Improving Automatic Query Expansion. In
Proceedings of SIGIR 1998, pp. 206-214.
[30] Peat, H. J. and P. Willett. 1991. The limitations of
term co-occurrence data for query expansion in
document retrieval systems. Journal of the American
Society for Information Science, 42(5): 378-383.
[31] Ponte, J.M. and W.B. Croft. 1998. A language
modeling approach to information retrieval. In
Proceedings of SIGIR 1998, pp.275-281.
[32] Qiu Y., and Frei H. 1993. Concept based query
expansion. In Proceedings of SIGIR 1993, pp. 160-169.
[33] Robertson, S. 2006. On GMAP - and other
transformations. In Proceedings of CIKM 2006, pp.
78-83.
[34] Robertson, S.E. and K. Sparck Jones. 1976. Relevance
Weighting of Search Terms. Journal of the American
Society for Information Science, 27(3): 129-146.
[35] Robertson, S.E., S. Walker, S. Jones, M.M.
Hancock-Beaulieu, and M. Gatford. 1994. Okapi at
TREC-2. In D.K. Harman (ed). 1994. The Second Text
REtrieval Conference (TREC-2): 1993, pp. 21-34.
[36] Robertson, S.E., S. Walker, S. Jones, M.M.
Hancock-Beaulieu, and M. Gatford. 1995. Okapi at
TREC-3. In D.K. Harman (ed). 1995. The Third Text REtrieval Conference (TREC-3): 1994, pp. 109-126.
[37] Rocchio, J.J. 1971. Relevance feedback in information
retrieval. In G. Salton (Ed.), The SMART Retrieval
System. Prentice-Hall, Inc., Englewood Cliffs, NJ, pp.
313-323.
[38] Salton, G. 1968. Automatic Information Organization
and Retrieval. McGraw-Hill.
[39] Salton, G. 1971. The SMART Retrieval System:
Experiments in Automatic Document Processing.
Englewood Cliffs NJ; Prentice-Hall.
[40] Salton,G. 1980. Automatic term class construction
using relevance-a summary of work in automatic
pseudoclassification. Information Processing &
Management. 16(1): 1-15.
[41] Salton, G., and C. Buckley. 1988. On the Use of
Spreading Activation Methods in Automatic
Information Retrieval. In Proceedings of SIGIR 1988, pp. 147-160.
[42] Sanderson, M. 1994. Word sense disambiguation and
information retrieval. In Proceedings of SIGIR 1994,
pp. 161-175.
[43] Sanderson, M. and H. Joho. 2004. Forming test
collections with no system pooling. In Proceedings of
SIGIR 2004, pp. 186-193.
[44] Sanderson, M. and Zobel, J. 2005. Information
Retrieval System Evaluation: Effort, Sensitivity, and
Reliability. In Proceedings of SIGIR 2005, pp. 162-169.
[45] Smeaton, A.F. and C.J. Van Rijsbergen. 1983. The
Retrieval Effects of Query Expansion on a Feedback
Document Retrieval System. Computer Journal.
26(3):239-246.
[46] Song, F. and W.B. Croft. 1999. A general language
model for information retrieval. In Proceedings of the
Eighth International Conference on Information and
Knowledge Management, pages 316-321.
[47] Sparck Jones, K. 1971. Automatic Keyword
Classification for Information Retrieval. London:
Butterworths.
[48] Terra, E. and C. L. Clarke. 2004. Scoring missing
terms in information retrieval tasks. In Proceedings of
CIKM 2004, pp. 50-58.
[49] Turtle, Howard. 1994. Natural Language vs. Boolean
Query Evaluation: A Comparison of Retrieval
Performance. In Proceedings of SIGIR 1994, pp.
212-220.
[50] Voorhees, E.M. 1994a. On Expanding Query Vectors
with Lexically Related Words. In Harman, D. K., ed.
Text REtrieval Conference (TREC-1): 1992.
[51] Voorhees, E.M. 1994b. Query Expansion Using
Lexical-Semantic Relations. In Proceedings of SIGIR
1994, pp. 61-69. | evaluation;document processing;query expansion;relevant document;query-document term mismatch;information retrieval;information search;document expansion |
train_H-49 | Performance Prediction Using Spatial Autocorrelation | Evaluation of information retrieval systems is one of the core tasks in information retrieval. Problems include the inability to exhaustively label all documents for a topic, nongeneralizability from a small number of topics, and incorporating the variability of retrieval systems. Previous work addresses the evaluation of systems, the ranking of queries by difficulty, and the ranking of individual retrievals by performance. Approaches exist for the case of few and even no relevance judgments. Our focus is on zero-judgment performance prediction of individual retrievals. One common shortcoming of previous techniques is the assumption of uncorrelated document scores and judgments. If documents are embedded in a high-dimensional space (as they often are), we can apply techniques from spatial data analysis to detect correlations between document scores. We find that the low correlation between scores of topically close documents often implies a poor retrieval performance. When compared to a state of the art baseline, we demonstrate that the spatial analysis of retrieval scores provides significantly better prediction performance. These new predictors can also be incorporated with classic predictors to improve performance further. We also describe the first large-scale experiment to evaluate zero-judgment performance prediction for a massive number of retrieval systems over a variety of collections in several languages. | 1. INTRODUCTION
In information retrieval, a user poses a query to a system.
The system retrieves n documents, each receiving a real-valued score indicating the predicted degree of relevance.
If we randomly select pairs of documents from this set, we
expect some pairs to share the same topic and other pairs to
not share the same topic. Take two topically-related
documents from the set and call them a and b. If the scores of a
and b are very different, we may be concerned about the
performance of our system. That is, if a and b are both on the
topic of the query, we would like them both to receive a high
score; if a and b are not on the topic of the query, we would
like them both to receive a low score. We might become more
worried as we find more differences between scores of related
documents. We would be more comfortable with a retrieval
where scores are consistent between related documents.
Our paper studies the quantification of this inconsistency
in a retrieval from a spatial perspective. Spatial analysis is
appropriate since many retrieval models embed documents
in some vector space. If documents are embedded in a space,
proximity correlates with topical relationships. Score
consistency can be measured by the spatial version of
autocorrelation known as the Moran coefficient or IM [5, 10]. In
this paper, we demonstrate a strong correlation between IM
and retrieval performance.
The discussion up to this point is reminiscent of the
cluster hypothesis. The cluster hypothesis states: closely-related
documents tend to be relevant to the same request [12]. As we
shall see, a retrieval function's spatial autocorrelation
measures the degree to which closely-related documents receive
similar scores. Because of this, we interpret
autocorrelation as measuring the degree to which a retrieval function
satisfies the cluster hypothesis. If this connection is reasonable, then the evidence we present in Section 6 suggests that failure to satisfy the cluster hypothesis correlates strongly with poor performance.
In this work, we provide the following contributions,
1. A general, robust method for predicting the
performance of retrievals with zero relevance judgments
(Section 3).
2. A theoretical treatment of the similarities and
motivations behind several state-of-the-art performance
prediction techniques (Section 4).
3. The first large-scale experiments of zero-judgment,
single run performance prediction (Sections 5 and 6).
2. PROBLEM DEFINITION
Given a query, an information retrieval system produces
a ranking of documents in the collection encoded as a set
of scores associated with documents. We refer to the set
of scores for a particular query-system combination as a
retrieval. We would like to predict the performance of this
retrieval with respect to some evaluation measure (e.g., mean
average precision). In this paper, we present results for
ranking retrievals from arbitrary systems. We would like
this ranking to approximate the ranking of retrievals by the
evaluation measure. This is different from ranking queries
by the average performance on each query. It is also
different from ranking systems by the average performance on a
set of queries.
Scores are often only computed for the top n documents
from the collection. We place these scores in the length
n vector, y, where yi refers to the score of the ith-ranked
document. We adjust scores to have zero mean and unit
variance. We use this method because of its simplicity and
its success in previous work [15].
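A one-line sketch of this normalization (numpy assumed; illustrative, not the authors' code):

import numpy as np

def normalize_scores(scores):
    # Shift and scale the top-n retrieval scores to zero mean and unit variance.
    y = np.asarray(scores, dtype=float)
    std = y.std()
    return (y - y.mean()) / std if std > 0 else y - y.mean()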
3. SPATIAL CORRELATION
In information retrieval, we often assume that the
representations of documents exist in some high-dimensional
vector space. For example, given a vocabulary, V, this vector
space may be an arbitrary |V|-dimensional space with cosine
inner-product or a multinomial simplex with a
distribution-based distance measure. An embedding space is often
selected to respect topical proximity; if two documents are
near, they are more likely to share a topic.
Because of the prevalence and success of spatial models
of information retrieval, we believe that the application of
spatial data analysis techniques is appropriate. Whereas
in information retrieval, we are concerned with the score at
a point in a space, in spatial data analysis, we are concerned
with the value of a function at a point or location in a space.
We use the term function here to mean a mapping from a
location to a real value. For example, we might be interested
in the prevalence of a disease in the neighborhood of some
city. The function would map the location of a neighborhood
to an infection rate.
If we want to quantify the spatial dependencies of a
function, we would employ a measure referred to as the spatial
autocorrelation [5, 10]. High spatial autocorrelation suggests
that knowing the value of a function at location a will tell
us a great deal about the value at a neighboring location
b. There is a high spatial autocorrelation for a function
representing the temperature of a location since knowing
the temperature at a location a will tell us a lot about the
temperature at a neighboring location b. Low spatial
autocorrelation suggests that knowing the value of a function
at location a tells us little about the value at a neighboring
location b. There is low spatial autocorrelation in a function
measuring the outcome of a coin toss at a and b.
In this section, we will begin by describing what we mean
by spatial proximity for documents and then define a
measure of spatial autocorrelation. We conclude by extending
this model to include information from multiple retrievals
from multiple systems for a single query.
3.1 Spatial Representation of Documents
Our work does not focus on improving a specific similarity
measure or defining a novel vector space. Instead, we choose
an inner product known to be effective at detecting
inter-document topical relationships. Specifically, we adopt tf.idf
document vectors,
\tilde{d}_i = d_i \log\left(\frac{(n + 0.5) - c_i}{0.5 + c_i}\right) \qquad (1)
where d is a vector of term frequencies, c is the length-|V|
document frequency vector. We use this weighting scheme
due to its success for topical link detection in the context
of Topic Detection and Tracking (TDT) evaluations [6].
Assuming vectors are scaled by their L2 norm, we use the inner product, $\langle \tilde{d}_i, \tilde{d}_j \rangle$, to define similarity.
Given documents and some similarity measure, we can
construct a matrix which encodes the similarity between
pairs of documents. Recall that we are given the top n
documents retrieved in y. We can compute an n × n
similarity matrix, W. An element of this matrix, Wij represents
the similarity between documents ranked i and j. In
practice, we only include the affinities for a document's k-nearest
neighbors. In all of our experiments, we have fixed k to 5.
We leave exploration of parameter sensitivity to future work.
We also row normalize the matrix so that $\sum_{j=1}^{n} W_{ij} = 1$ for all i.
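A sketch of this construction follows (numpy assumed; illustrative rather than the original implementation). Here tf is an n × |V| matrix of term frequencies for the retrieved documents, df is the length-|V| document frequency vector, and num_docs plays the role of n in the weighting of Equation 1, taken here as the number of documents over which the document frequencies are computed.

import numpy as np

def affinity_matrix(tf, df, num_docs, k=5):
    # tf.idf weighting of Equation 1, followed by L2 normalization.
    d = tf * np.log((num_docs + 0.5 - df) / (0.5 + df))
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    sim = d @ d.T                          # inner products between documents
    np.fill_diagonal(sim, -np.inf)         # ignore self-similarity
    W = np.zeros_like(sim)
    for i in range(sim.shape[0]):
        nearest = np.argsort(sim[i])[-k:]  # keep only the k nearest neighbors
        W[i, nearest] = sim[i, nearest]
    return W / W.sum(axis=1, keepdims=True)  # row normalization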
3.2 Spatial Autocorrelation of a Retrieval
Recall that we are interested in measuring the similarity
between the scores of spatially-close documents. One such
suitable measure is the Moran coefficient of spatial
autocorrelation. Assuming the function y over n locations, this is
defined as
\tilde{I}_M = \frac{n}{e^T W e} \cdot \frac{\sum_{i,j} W_{ij} y_i y_j}{\sum_i y_i^2} = \frac{n}{e^T W e} \cdot \frac{y^T W y}{y^T y} \qquad (2)
where $e^T W e = \sum_{ij} W_{ij}$.
We would like to compare autocorrelation values for
different retrievals. Unfortunately, the bound for Equation 2
is not consistent for different W and y. Therefore, we use
the Cauchy-Schwartz inequality to establish a bound,
\tilde{I}_M \le \frac{n}{e^T W e} \sqrt{\frac{y^T W^T W y}{y^T y}}
And we define the normalized spatial autocorrelation as
I_M = \frac{y^T W y}{\sqrt{y^T y \times y^T W^T W y}}
Notice that if we let $\tilde{y} = Wy$, then we can write this formula as
I_M = \frac{y^T \tilde{y}}{\|y\|_2 \, \|\tilde{y}\|_2} \qquad (3)
which can be interpreted as the correlation between the
original retrieval scores and a set of retrieval scores diffused
in the space.
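In code, Equation 3 amounts to a single matrix-vector product followed by a cosine. The sketch below assumes numpy and uses the row-normalized affinity matrix W from Section 3.1.

import numpy as np

def moran_autocorrelation(W, y):
    # Equation 3: correlation between the scores y and their diffused version Wy.
    y_tilde = W @ y
    return float(y @ y_tilde / (np.linalg.norm(y) * np.linalg.norm(y_tilde)))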
We present some examples of autocorrelations of functions
on a grid in Figure 1.
(a) IM = 0.006 (b) IM = 0.241 (c) IM = 0.487
Figure 1: The Moran coefficient, IM, for several binary functions on a grid. The Moran coefficient is a local measure of function consistency. From the perspective of information retrieval, each of these grid spaces would represent a document, and documents would be organized so that they lie next to topically-related documents. Binary retrieval scores would define a pattern on this grid. Notice that, as the Moran coefficient increases, neighboring cells tend to have similar values.
3.3 Correlation with Other Retrievals
Sometimes we are interested in the performance of a single retrieval but have access to scores from multiple systems for
the same query. In this situation, we can use combined
information from these scores to construct a surrogate for
a high-quality ranking [17]. We can treat the correlation
between the retrieval we are interested in and the combined
scores as a predictor of performance.
Assume that we are given m score functions, yi, for the
same n documents. We will represent the mean of these
vectors as $y_\mu = \sum_{i=1}^{m} y_i$. We use the mean vector as an
approximation to relevance. Since we use zero mean and unit
variance normalization, work in metasearch suggests that
this assumption is justified [15]. Because yµ represents a
very good retrieval, we hypothesize that a strong similarity
between yµ and y will correlate positively with system
performance. We use Pearson's product-moment correlation to
measure the similarity between these vectors,
\rho(y, y_\mu) = \frac{y^T y_\mu}{\|y\|_2 \, \|y_\mu\|_2} \qquad (4)
We will comment on the similarity between Equation 3 and
4 in Section 7.
Of course, we can combine ρ(y, ˜y) and ρ(y, yµ) if we
assume that they capture different factors in the prediction.
One way to accomplish this is to combine these predictors
as independent variables in a linear regression. An
alternative means of combination is suggested by the mathematical
form of our predictors. Since ˜y encodes the spatial
dependencies in y and yµ encodes the spatial properties of the
multiple runs, we can compute a third correlation between
these two vectors,
\rho(\tilde{y}, y_\mu) = \frac{\tilde{y}^T y_\mu}{\|\tilde{y}\|_2 \, \|y_\mu\|_2} \qquad (5)
We can interpret Equation 5 as measuring the correlation
between a high quality ranking (yµ) and a spatially smoothed
version of the retrieval (˜y).
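The three quantities in Equations 3-5 can be computed together as in the sketch below (numpy assumed; runs is a hypothetical m × n array of normalized score vectors from the other systems, and the dot-product/norm form of the equations is used directly).

import numpy as np

def prediction_features(W, y, runs):
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    y_tilde = W @ y            # spatially diffused scores
    y_mu = runs.sum(axis=0)    # combined multi-run scores, a surrogate for relevance
    return {
        'autocorrelation': cos(y, y_tilde),                   # Equation 3
        'multirun_correlation': cos(y, y_mu),                 # Equation 4
        'diffused_multirun_correlation': cos(y_tilde, y_mu),  # Equation 5
    }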
4. RELATIONSHIP WITH OTHER
PREDICTORS
One way to predict the effectiveness of a retrieval is to
look at the shared vocabulary of the top n retrieved
documents. If we computed the most frequent content words
in this set, we would hope that they would be consistent
with our topic. In fact, we might believe that a bad
retrieval would include documents on many disparate topics,
resulting in an overlap of terminological noise. The Clarity
of a query attempts to quantify exactly this [7]. Specifically,
Clarity measures the similarity of the words most frequently
used in retrieved documents to those most frequently used
in the whole corpus. The conjecture is that a good retrieval
will use language distinct from general text; the overlapping
language in a bad retrieval will tend to be more similar to
general text. Mathematically, we can compute a
representation of the language used in the initial retrieval as a weighted
combination of document language models,
P(w|\theta_Q) = \sum_{i=1}^{n} P(w|\theta_i)\,\frac{P(Q|\theta_i)}{Z} \qquad (6)
where θi is the language model of the ith-ranked
document, P(Q|θi) is the query likelihood score of the ith-ranked
document, and $Z = \sum_{i=1}^{n} P(Q|\theta_i)$ is a normalization
constant. The similarity between the multinomial P(w|θQ)
and a model of general text can be computed using the
Kullback-Leibler divergence, $D^{V}_{KL}(\theta_Q \,\|\, \theta_C)$. Here, the
distribution P(w|θC ) is our model of general text which can be
computed using term frequencies in the corpus. In Figure
2a, we present Clarity as measuring the distance between the
weighted center of mass of the retrieval (labeled y) and the
unweighted center of mass of the collection (labeled O).
Clarity reaches a minimum when a retrieval assigns every
document the same score.
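The following sketch shows one way Equation 6 and the KL divergence to the collection model could be computed, given per-document language models as term-probability vectors. Array shapes, function names, and the toy numbers are assumptions of this sketch, not the original Clarity implementation.

```python
import numpy as np

def clarity(doc_lms, query_likelihoods, collection_lm, eps=1e-12):
    """Build P(w|theta_Q) as in Equation 6 and return KL(theta_Q || theta_C).

    doc_lms: (n, |V|) array; row i holds P(w|theta_i) for the i-th ranked document.
    query_likelihoods: length-n array of P(Q|theta_i) values (probabilities, not logs).
    collection_lm: length-|V| array of corpus term probabilities P(w|theta_C).
    """
    w = np.asarray(query_likelihoods, dtype=float)
    w = w / w.sum()                               # P(Q|theta_i) / Z
    p_q = w @ np.asarray(doc_lms, dtype=float)    # Equation 6
    p_q = p_q / p_q.sum()
    p_c = np.asarray(collection_lm, dtype=float)
    p_c = p_c / p_c.sum()
    return float(np.sum(p_q * np.log((p_q + eps) / (p_c + eps))))

# Toy usage: a 4-term vocabulary and two ranked documents (numbers are made up).
score = clarity([[0.5, 0.3, 0.1, 0.1], [0.2, 0.4, 0.2, 0.2]],
                [0.7, 0.3],
                [0.25, 0.25, 0.25, 0.25])
```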
Let"s again assume we have a set of n documents retrieved
for our query. Another way to quantify the dispersion of a
set of documents is to look at how clustered they are. We
may hypothesize that a good retrieval will return a single,
tight cluster. A poorly performing retrieval will return a
loosely related set of documents covering many topics. One
proposed method of quantifying this dispersion is to
measure the distance from a random document a to its nearest
neighbor, b. A retrieval which is tightly clustered will, on
average, have a low distance between a and b; a retrieval
which is less tightly clustered will, on average, have high
distances between a and b. This average corresponds to using
the Cox-Lewis statistic to measure the randomness of the
top n documents retrieved from a system [18]. In Figure
2a, this is roughly equivalent to measuring the area of the
set n. Notice that we are throwing away information about
the retrieval function y. Therefore the Cox-Lewis statistic
is highly dependent on selecting the top n documents.1
Remember that we have n documents and a set of scores.
Let"s assume that we have access to the system which
provided the original scores and that we can also request scores
for new documents. This suggests a third method for
predicting performance. Take some document, a, from the
retrieved set and arbitrarily add or remove words at random
to create a new document ˜a. Now, we can ask our system
to score ˜a with respect to our query. If, on average over
the n documents, the scores of a and ˜a tend to be very
different, we might suspect that the system is failing on this
query. So, an alternative approach is to measure the similarity
between the retrieval and a perturbed version of that retrieval [18, 19].1
1 The authors have suggested coupling the query with the
distance measure [18]. The information introduced by the
query, though, is retrieval-independent so that, if two
retrievals return the same set of documents, the approximate
Cox-Lewis statistic will be the same regardless of the
retrieval scores.
[Figure 2: three grid panels labeled (a) Global Divergence, (b) Score Perturbation, and (c) Multirun Averaging]
Figure 2: Representation of several performance predictors on a grid. In Figure 2a, we depict predictors
which measure the divergence between the center of mass of a retrieval and the center of the embedding
space. In Figure 2b, we depict predictors which compare the original retrieval, y, to a perturbed version of
the retrieval, ˜y. Our approach uses a particular type of perturbation based on score diffusion. Finally, in
Figure 2c, we depict prediction when given retrievals from several other systems on the same query. Here,
we can consider the fusion of these retrievals as a surrogate for relevance.
This can be accomplished by either
perturbing the documents or queries. The similarity between
the two retrievals can be measured using some correlation
measure. This is depicted in Figure 2b. The upper grid
represents the original retrieval, y, while the lower grid
represents the function after having been perturbed, ˜y. The
nature of the perturbation process requires additional
scorings or retrievals. Our predictor does not require access to
the original scoring function or additional retrievals. So,
although our method is similar to other perturbation methods
in spirit, it can be applied in situations when the retrieval
system is inaccessible or costly to access.
Finally, assume that we have, in addition to the retrieval
we want to evaluate, m retrievals from a variety of
different systems. In this case, we might take a document a,
compare its rank in the retrieval to its average rank in the
m retrievals. If we believe that the m retrievals provide a
satisfactory approximation to relevance, then a very large
difference in rank would suggest that our retrieval is
misranking a. If this difference is large on average over all
n documents, then we might predict that the retrieval is
bad. If, on the other hand, the retrieval is very consistent
with the m retrievals, then we might predict that the
retrieval is good. The similarity between the retrieval and
the combined retrieval may be computed using some
correlation measure. This is depicted in Figure 2c. In previous
work, the Kullback-Leibler divergence between the
normalized scores of the retrieval and the normalized scores of the
combined retrieval provides the similarity [1].
5. EXPERIMENTS
Our experiments focus on testing the predictive power of
each of our predictors: ρ(y, ˜y), ρ(y, yµ), and ρ(˜y, yµ). As
stated in Section 2, we are interested in predicting the
performance of the retrieval generated by an arbitrary system.
Our methodology is consistent with previous research in that
we predict the relative performance of a retrieval by
comparing a ranking based on our predictor to a ranking based on
average precision.
We present results for two sets of experiments. The first
set of experiments presents detailed comparisons of our
predictors to previously-proposed predictors using identical data
sets. Our second set of experiments demonstrates the
generalizability of our approach to arbitrary retrieval methods,
corpus types, and corpus languages.
5.1 Detailed Experiments
In these experiments, we will predict the performance of
language modeling scores using our autocorrelation
predictor, ρ(y, ˜y); we do not consider ρ(y, yµ) or ρ(˜y, yµ)
because, in these detailed experiments, we focus on ranking
the retrievals from a single system. We use retrievals, values
for baseline predictors, and evaluation measures reported in
previous work [19].
5.1.1 Topics and Collections
These performance prediction experiments use language
model retrievals performed for queries associated with
collections in the TREC corpora. Using TREC collections
allows us to confidently associate an average precision with a
retrieval. In these experiments, we use the following topic
collections: TREC 4 ad-hoc, TREC 5 ad-hoc, Robust 2004,
Terabyte 2004, and Terabyte 2005.
5.1.2 Baselines
We provide two baselines. Our first baseline is the
classic Clarity predictor presented in Equation 6. Clarity is
designed to be used with language modeling systems. Our
second baseline is Zhou and Croft"s ranking robustness
predictor. This predictor corrupts the top k documents
from retrieval and re-computes the language model scores
for these corrupted documents. The value of the predictor
is the Spearman rank correlation between the original
ranking and the corrupted ranking. In our tables, we will label
results for Clarity using $D^V_{KL}$ and the ranking robustness
predictor using P.
5.2 Generalizability Experiments
Our predictors do not require a particular baseline
retrieval system; the predictors can be computed for an
arbitrary retrieval, regardless of how scores were generated. We
believe that this is one of the most attractive aspects of our
algorithm. Therefore, in a second set of experiments, we
demonstrate the ability of our techniques to generalize to a
variety of collections, topics, and retrieval systems.
5.2.1 Topics and Collections
We gathered a diverse set of collections from all possible
TREC corpora. We cast a wide net in order to locate
collections where our predictors might fail. Our hypothesis is that
documents with high topical similarity should have
correlated scores. Therefore, we avoided collections where scores
were unlikely to be correlated (e.g., question-answering) or
were likely to be negatively correlated (e.g., novelty).
Nevertheless, our collections include corpora where correlations
are weakly justified (e.g., non-English corpora) or not
justified at all (e.g., expert search). We use the ad-hoc tracks from
TREC 3-8, TREC Robust 2003-2005, TREC Terabyte
2004-2005, TREC 4-5 Spanish, TREC 5-6 Chinese, and TREC
Enterprise Expert Search 2005. In all cases, we use only the
automatic runs for ad-hoc tracks submitted to NIST.
For all English and Spanish corpora, we construct the
matrix W according to the process described in Section 3.1. For
Chinese corpora, we use na¨ıve character-based tf.idf vectors.
For entities, entries in W are proportional to the number of
documents in which two entities cooccur.
5.2.2 Baselines
In our detailed experiments, we used the Clarity measure
as a baseline. Since we are predicting the performance of
retrievals which are not based on language modeling, we
use a version of Clarity referred to as ranked-list Clarity
[7]. Ranked-list clarity converts document ranks to P(Q|θi)
values. This conversion begins by replacing all of the scores
in y with the respective ranks. Our estimation of P(Q|θi)
from the ranks, then, is
$$P(Q|\theta_i) = \begin{cases} \frac{2(c+1-y_i)}{c(c+1)} & \text{if } y_i \le c \\ 0 & \text{otherwise} \end{cases} \qquad (7)$$
where c is a cutoff parameter. As suggested by the authors,
we fix the algorithm parameters c and λ2 so that c = 60
and λ2 = 0.10. We use Equation 6 to estimate P(w|θQ) and
$D^V_{KL}(\theta_Q \,\|\, \theta_C)$ to compute the value of the predictor. We
will refer to this predictor as $D^V_{KL}$, superscripted by V to
indicate that the Kullback-Leibler divergence is with respect
to the term embedding space.
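A minimal sketch of the rank-to-probability conversion in Equation 7 is given below; these surrogate P(Q|θ_i) values would then be plugged into Equation 6 in place of actual query-likelihood scores. The function name and the example ranks are ours.

```python
def rank_to_query_likelihood(ranks, c=60):
    """Equation 7: map document ranks (1 = best) to surrogate P(Q|theta_i) values.
    The weights over ranks 1..c sum to one; ranks beyond the cutoff get zero."""
    return [2.0 * (c + 1 - r) / (c * (c + 1)) if r <= c else 0.0 for r in ranks]

weights = rank_to_query_likelihood([1, 2, 3, 100])   # last document falls outside the cutoff
```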
When information from multiple runs on the same query is
available, we use Aslam and Pavlu's document-space
multinomial divergence as a baseline [1]. This rank-based method
first normalizes the scores in a retrieval as an n-dimensional
multinomial. As with ranked-list Clarity, we begin by
replacing all of the scores in y with their respective ranks.
Then, we adjust the elements of y in the following way,
$$\hat{y}_i = \frac{1}{2n}\left(1 + \sum_{k=y_i}^{n} \frac{1}{k}\right) \qquad (8)$$
In our multirun experiments, we only use the top 75
documents from each retrieval (n = 75); this is within the range
of parameter values suggested by the authors. However, we
admit not tuning this parameter for either our system or the
baseline. The predictor is the divergence between the
candidate distribution, y, and the mean distribution, yµ . With
the uniform linear combination of these m retrievals
represented as $y_\mu$, we can compute the divergence as $D^n_{KL}(\hat{y}\,\|\,\hat{y}_\mu)$,
where we use the superscript n to indicate that the
summation is over the set of n documents. This baseline was
developed in the context of predicting query difficulty but
we adopt it as a reasonable baseline for predicting retrieval
performance.
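The sketch below illustrates Equation 8 and the resulting divergence-based baseline. The ranks used in the example are hypothetical; in the actual baseline, both the candidate retrieval and the uniform combination of the m retrievals would first be reduced to ranks over the same n documents.

```python
import math

def rank_multinomial(ranks, n):
    """Equation 8: map ranks (1 = best) over the top n documents to a multinomial;
    for a full permutation of 1..n the entries sum to one."""
    return [(1.0 + sum(1.0 / k for k in range(r, n + 1))) / (2.0 * n) for r in ranks]

def kl_divergence(p, q, eps=1e-12):
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

n = 5
y_hat = rank_multinomial([1, 2, 3, 4, 5], n)        # candidate retrieval (toy ranks)
y_mu_hat = rank_multinomial([2, 1, 3, 5, 4], n)     # mean-of-runs retrieval (toy ranks)
predictor = kl_divergence(y_hat, y_mu_hat)          # D^n_KL(y_hat || y_mu_hat)
```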
5.2.3 Parameter Settings
When given multiple retrievals, we use documents in the
union of the top k = 75 documents from each of the m
retrievals for that query. If the size of this union is ˜n, then
yµ and each yi is of length ˜n. In some cases, a system
did not score a document in the union. Since we are
making a Gaussian assumption about our scores, we can sample
scores for these unseen documents from the negative tail
of the distribution. Specifically, we sample from the part
of the distribution lower than the minimum value in the
normalized retrieval. This introduces randomness into our
algorithm but we believe it is more appropriate than
assigning an arbitrary fixed value.
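One possible reading of this sampling step is sketched below: unseen documents receive draws from the standard normal restricted to values below the minimum normalized score. The rejection loop and the function name are our own choices, not the original procedure.

```python
import numpy as np

def fill_missing_scores(scores, rng=None):
    """Replace NaN entries by samples from the negative tail of a standard normal,
    i.e. draws below the minimum value of the zero-mean, unit-variance scores."""
    rng = np.random.default_rng(0) if rng is None else rng
    s = np.asarray(scores, dtype=float)
    seen = ~np.isnan(s)
    s[seen] = (s[seen] - s[seen].mean()) / s[seen].std()
    lo = s[seen].min()
    for i in np.where(~seen)[0]:
        draw = rng.standard_normal()
        while draw >= lo:              # simple rejection sampling of the truncated tail
            draw = rng.standard_normal()
        s[i] = draw
    return s

filled = fill_missing_scores([0.9, 0.1, np.nan, -0.4, np.nan])
```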
We optimized the linear regression using the square root
of each predictor. We found that this substantially improved
fits for all predictors, including the baselines. We considered
linear combinations of pairs of predictors (labeled by the
components) and all predictors (labeled as β).
5.3 Evaluation
Given a set of retrievals, potentially from a combination
of queries and systems, we measure the correlation of the
rank ordering of this set by the predictor and by the
performance metric. In order to ensure comparability with
previous results, we present Kendall's τ correlation between the
predictor's ranking and a ranking based on average precision
of the retrieval. Unless explicitly noted, all correlations are
significant with p < 0.05.
Predictors can sometimes perform better when linearly
combined [9, 11]. Although previous work has presented
the coefficient of determination (R²) to measure the quality
of the regression, this measure cannot be reliably used when
comparing slight improvements from combining predictors.
Therefore, we adopt the adjusted coefficient of
determination, which penalizes models with more variables. The
adjusted R² allows us to evaluate the improvement in
prediction achieved by adding a parameter but loses the statistical
interpretation of R². We will use Kendall's τ to evaluate the
magnitude of the correlation and the adjusted R² to
evaluate the combination of variables.
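For completeness, here is a small sketch of the two evaluation quantities: Kendall's τ between a predictor's ranking and the AP ranking, and the adjusted R² of a linear fit on (square-rooted) predictors. It relies on scipy and numpy and is not the original evaluation code.

```python
import numpy as np
from scipy import stats

def predictor_tau(predictor_values, average_precisions):
    """Kendall's tau between the ordering induced by a predictor and the AP ordering."""
    tau, p_value = stats.kendalltau(predictor_values, average_precisions)
    return tau, p_value

def adjusted_r2(y, predictors):
    """Adjusted R^2 of a least-squares fit of y on one or more predictor columns."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y)), predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    r2 = 1.0 - residuals @ residuals / ((y - y.mean()) ** 2).sum()
    n, p = X.shape[0], X.shape[1] - 1
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
```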
6. RESULTS
We present results for our detailed experiments comparing
the prediction of language model scores in Table 1. Although
the Clarity measure is theoretically designed for language
model scores, it consistently underperforms our system-agnostic
predictor. Ranking robustness was presented as an
improvement to Clarity for web collections (represented in our
experiments by the terabyte04 and terabyte05 collections),
shifting the τ correlation from 0.139 to 0.150 for terabyte04 and
0.171 to 0.208 for terabyte05. However, these improvements
are slight compared to the performance of autocorrelation
on these collections. Our predictor achieves a τ correlation
of 0.454 for terabyte04 and 0.383 for terabyte05. Though
not always the strongest, autocorrelation achieves
correlations competitive with baseline predictors. When
examining the performance of linear combinations of predictors, we
note that in every case, autocorrelation factors as a
necessary component of a strong predictor. We also note that the
adjusted R2
for individual baselines are always significantly
improved by incorporating autocorrelation.
We present our generalizability results in Table 2. We
begin by examining the situation in column (a) where we
are presented with a single retrieval and no information
from additional retrievals. For every collection except one,
we achieve significantly better correlations than ranked-list
Clarity. Surprisingly, we achieve relatively strong
correlations for Spanish and Chinese collections despite our na¨ıve
processing. We do not have a ranked-list clarity correlation
for ent05 because entity modeling is itself an open research
question. However, our autocorrelation measure does not
achieve high correlations perhaps because relevance for
entity retrieval does not propagate according to the
cooccurrence links we use.
As noted above, the poor Clarity performance on web
data is consistent with our findings in the detailed
experiments. Clarity also notably underperforms for several news
corpora (trec5, trec7, and robust04). On the other hand,
autocorrelation seems robust to the changes between different
corpora.
Next, we turn to the introduction of information from
multiple retrievals. We compare the correlations between
those predictors which do not use this information in column
(a) and those which do in column (b). For every collection,
the predictors in column (b) outperform the predictors in
column (a), indicating that the information from additional
runs can be critical to making good predictions.
Inspecting the predictors in column (b), we only draw
weak conclusions. Our new predictors tend to perform
better on news corpora. And between our new predictors, the
hybrid ρ(˜y, yµ) predictor tends to perform better. Recall
that our ρ(˜y, yµ) measure incorporates both spatial and
multiple retrieval information. Therefore, we believe that
the improvement in correlation is the result of
incorporating information from spatial behavior.
In column (c), we can investigate the utility of
incorporating spatial information with information from
multiple retrievals. Notice that in the cases where
autocorrelation, ρ(y, ˜y), alone performs well (trec3, trec5-spanish, and
trec6-chinese), it is substantially improved by
incorporating multiple-retrieval information from ρ(y, yµ) in the
linear regression, β. In the cases where ρ(y, yµ) performs well,
incorporating autocorrelation rarely results in a significant
improvement in performance. In fact, in every case where
our predictor outperforms the baseline, it includes
information from multiple runs.
7. DISCUSSION
The most important result from our experiments involves
prediction when no information is available from multiple
runs (Tables 1 and 2a). This situation arises often in system
design. For example, a system may need to, at retrieval
time, assess its performance before deciding to conduct more
intensive processing such as pseudo-relevance feedback or
interaction. Assuming the presence of multiple retrievals is
unrealistic in this case.
We believe that autocorrelation is, like multiple-retrieval
algorithms, approximating a good ranking; in this case by
diffusing scores. Why is ˜y a reasonable surrogate? We know
that diffusion of scores on the web graph and language model
graphs improves performance [14, 16]. Therefore, if score
diffusion tends to, in general, improve performance, then
diffused scores will, in general, provide a good surrogate for
relevance. Our results demonstrate that this approximation
is not as powerful as information from multiple retrievals.
Nevertheless, in situations where this information is lacking,
autocorrelation provides substantial information.
The success of autocorrelation as a predictor may also
have roots in the clustering hypothesis. Recall that we
regard autocorrelation as the degree to which a retrieval
satisfies the clustering hypothesis. Our experiments, then,
demonstrate that a failure to respect the clustering
hypothesis correlates with poor performance. Why might systems
fail to conform to the cluster hypothesis? Query-based
information retrieval systems often score documents
independently. The score of document a may be computed by
examining query term or phrase matches, the document length,
and perhaps global collection statistics. Once computed,
a system rarely compares the score of a to the score of a
topically-related document b. With some exceptions, the
correlation of document scores has largely been ignored.
We should make it clear that we have selected tasks where
topical autocorrelation is appropriate. There are certainly
cases where there is no reason to believe that retrieval scores
will have topical autocorrelation. For example, ranked lists
which incorporate document novelty should not exhibit
spatial autocorrelation; if anything autocorrelation should be
negative for this task. Similarly, answer candidates in a
question-answering task may or may not exhibit
autocorrelation; in this case, the semantics of links is questionable
too. It is important before applying this measure to confirm
that, given the semantics for some link between two retrieved
items, we should expect a correlation between scores.
8. RELATED WORK
In this section we draw more general comparisons to other
work in performance prediction and spatial data analysis.
There is a growing body of work which attempts to predict
the performance of individual retrievals [7, 3, 11, 9, 19]. We
have attempted to place our work in the context of much of
this work in Section 4. However, a complete comparison is
beyond the scope of this paper. We note, though, that our
experiments cover a larger and more diverse set of retrievals,
collections, and topics than previously examined.
Much previous work, particularly in the context of TREC,
focuses on predicting the performance of systems. Here,
each system generates k retrievals. The task is, given these
retrievals, to predict the ranking of systems according to
some performance measure. Several papers attempt to
address this task under the constraint of few judgments [2, 4].
Some work even attempts to use zero judgments by
leveraging multiple retrievals for the same query [17]. Our task
differs because we focus on ranking retrievals independent
of the generating system. The task here is not to test the
hypothesis system A is superior to system B but to test
the hypothesis retrieval A is superior to retrieval B.
Autocorrelation manifests itself in many classification tasks.
Neville and Jensen define relational autocorrelation for
relational learning problems and demonstrate that many
classification tasks manifest autocorrelation [13]. Temporal
autocorrelation of initial retrievals has also been used to predict
performance [9]. However, temporal autocorrelation is
performed by projecting the retrieval function into the temporal
embedding space. In our work, we focus on the behavior of
the function over the relationships between documents.
                          Kendall's τ                    |                              adjusted R²
            D^V_KL      P     ρ(y,˜y)   |   D^V_KL      P     ρ(y,˜y)   {D^V_KL, P}   {D^V_KL, ρ(y,˜y)}   {P, ρ(y,˜y)}      β
trec4 0.353 0.548 0.513 0.168 0.363 0.422 0.466 0.420 0.557 0.553
trec5 0.311 0.329 0.357 0.116 0.190 0.236 0.238 0.244 0.266 0.269
robust04 0.418 0.398 0.373 0.256 0.304 0.278 0.403 0.373 0.402 0.442
terabyte04 0.139 0.150 0.454 0.059 0.045 0.292 0.076 0.293 0.289 0.284
terabyte05 0.171 0.208 0.383 0.022 0.072 0.193 0.120 0.225 0.218 0.257
Table 1: Comparison to Robustness and Clarity measures for language model scores. Evaluation replicates
experiments from [19]. We present correlations between the classic Clarity measure (D^V_KL), the ranking
robustness measure (P), and autocorrelation (ρ(y, ˜y)), each with mean average precision in terms of Kendall's
τ. The adjusted coefficient of determination is presented to measure the effectiveness of combining predictors.
Measures in bold represent the strongest correlation for that test/collection pair.
                   (a)                 (b) multiple run                          (c) multiple run
                    τ                         τ                                    adjusted R²
             D_KL    ρ(y,˜y)   |   D^n_KL   ρ(y,yµ)   ρ(˜y,yµ)   |   D^n_KL   ρ(y,˜y)   ρ(y,yµ)   ρ(˜y,yµ)     β
trec3 0.201 0.461 0.461 0.439 0.456 0.444 0.395 0.394 0.386 0.498
trec4 0.252 0.396 0.455 0.482 0.489 0.379 0.263 0.429 0.482 0.483
trec5 0.016 0.277 0.433 0.459 0.393 0.280 0.157 0.375 0.323 0.386
trec6 0.230 0.227 0.352 0.428 0.418 0.203 0.089 0.323 0.325 0.325
trec7 0.083 0.326 0.341 0.430 0.483 0.264 0.182 0.363 0.442 0.400
trec8 0.235 0.396 0.454 0.508 0.567 0.402 0.272 0.490 0.580 0.523
robust03 0.302 0.354 0.377 0.385 0.447 0.269 0.206 0.274 0.392 0.303
robust04 0.183 0.308 0.301 0.384 0.453 0.200 0.182 0.301 0.393 0.335
robust05 0.224 0.249 0.371 0.377 0.404 0.341 0.108 0.313 0.328 0.336
terabyte04 0.043 0.245 0.544 0.420 0.392 0.516 0.105 0.357 0.343 0.365
terabyte05 0.068 0.306 0.480 0.434 0.390 0.491 0.168 0.384 0.309 0.403
trec4-spanish 0.307 0.388 0.488 0.398 0.395 0.423 0.299 0.282 0.299 0.388
trec5-spanish 0.220 0.458 0.446 0.484 0.475 0.411 0.398 0.428 0.437 0.529
trec5-chinese 0.092 0.199 0.367 0.379 0.384 0.379 0.199 0.273 0.276 0.310
trec6-chinese 0.144 0.276 0.265 0.353 0.376 0.115 0.128 0.188 0.223 0.199
ent05 - 0.181 0.324 0.305 0.282 0.211 0.043 0.158 0.155 0.179
Table 2: Large scale prediction experiments. We predict the ranking of large sets of retrievals for various
collections and retrieval systems. Kendall"s τ correlations are computed between the predicted ranking and
a ranking based on the retrieval"s average precision. In column (a), we have predictors which do not use
information from other retrievals for the same query. In columns (b) and (c) we present performance for
predictors which incorporate information from multiple retrievals. The adjusted coefficient of determination
is computed to determine effectiveness of combining predictors. Measures in bold represent the strongest
correlation for that test/collection pair.
Finally, regularization-based re-ranking processes are also
closely-related to our work [8]. These techniques seek to
maximize the agreement between scores of related
documents by solving a constrained optimization problem. The
maximization of consistency is equivalent to maximizing the
Moran autocorrelation. Therefore, we believe that our work
provides explanation for why regularization-based re-ranking
works.
9. CONCLUSION
We have presented a new method for predicting the
performance of a retrieval ranking without any relevance
judgments. We consider two cases. First, when making
predictions in the absence of retrievals from other systems, our
predictors demonstrate robust, strong correlations with
average precision. This performance, combined with a simple
implementation, makes our predictors, in particular, very
attractive. We have demonstrated this improvement for many,
diverse settings. To our knowledge, this is the first large
scale examination of zero-judgment, single-retrieval
performance prediction. Second, when provided retrievals from
other systems, our extended methods demonstrate
competitive performance with state of the art baselines. Our
experiments also demonstrate the limits of the usefulness of our
predictors when information from multiple runs is provided.
Our results suggest two conclusions. First, our results
could affect retrieval algorithm design. Retrieval algorithms
designed to consider spatial autocorrelation will conform to
the cluster hypothesis and improve performance. Second,
our results could affect the design of minimal test collection
algorithms. Much of the recent work in ranking systems
sometimes ignores correlations between document labels and
scores. We believe that these two directions could be
rewarding given the theoretical and experimental evidence in this
paper.
10. ACKNOWLEDGMENTS
This work was supported in part by the Center for
Intelligent Information Retrieval and in part by the Defense
Advanced Research Projects Agency (DARPA) under
contract number HR0011-06-C-0023. Any opinions, findings
and conclusions or recommendations expressed in this
material are the author"s and do not necessarily reflect those
of the sponsor. We thank Yun Zhou and Desislava Petkova
for providing data and Andre Gauthier for technical
assistance.
11. REFERENCES
[1] J. Aslam and V. Pavlu. Query hardness estimation using
jensen-shannon divergence among multiple scoring
functions. In ECIR 2007: Proceedings of the 29th European
Conference on Information Retrieval, 2007.
[2] J. A. Aslam, V. Pavlu, and E. Yilmaz. A statistical method
for system evaluation using incomplete judgments. In
S. Dumais, E. N. Efthimiadis, D. Hawking, and K. Jarvelin,
editors, Proceedings of the 29th Annual International ACM
SIGIR Conference on Research and Development in
Information Retrieval, pages 541-548. ACM Press, August
2006.
[3] D. Carmel, E. Yom-Tov, A. Darlow, and D. Pelleg. What
makes a query difficult? In SIGIR "06: Proceedings of the
29th annual international ACM SIGIR conference on
Research and development in information retrieval, pages
390-397, New York, NY, USA, 2006. ACM Press.
[4] B. Carterette, J. Allan, and R. Sitaraman. Minimal test
collections for retrieval evaluation. In SIGIR "06:
Proceedings of the 29th annual international ACM SIGIR
conference on Research and development in information
retrieval, pages 268-275, New York, NY, USA, 2006. ACM
Press.
[5] A. D. Cliff and J. K. Ord. Spatial Autocorrelation. Pion
Ltd., 1973.
[6] M. Connell, A. Feng, G. Kumaran, H. Raghavan, C. Shah,
and J. Allan. Umass at tdt 2004. Technical Report CIIR
Technical Report IR - 357, Department of Computer
Science, University of Massachusetts, 2004.
[7] S. Cronen-Townsend, Y. Zhou, and W. B. Croft. Precision
prediction based on ranked list coherence. Inf. Retr.,
9(6):723-755, 2006.
[8] F. Diaz. Regularizing ad-hoc retrieval scores. In CIKM "05:
Proceedings of the 14th ACM international conference on
Information and knowledge management, pages 672-679,
New York, NY, USA, 2005. ACM Press.
[9] F. Diaz and R. Jones. Using temporal profiles of queries for
precision prediction. In SIGIR "04: Proceedings of the 27th
annual international ACM SIGIR conference on Research
and development in information retrieval, pages 18-24,
New York, NY, USA, 2004. ACM Press.
[10] D. A. Griffith. Spatial Autocorrelation and Spatial
Filtering. Springer Verlag, 2003.
[11] B. He and I. Ounis. Inferring Query Performance Using
Pre-retrieval Predictors. In The Eleventh Symposium on
String Processing and Information Retrieval (SPIRE),
2004.
[12] N. Jardine and C. J. V. Rijsbergen. The use of hierarchic
clustering in information retrieval. Information Storage and
Retrieval, 7:217-240, 1971.
[13] D. Jensen and J. Neville. Linkage and autocorrelation cause
feature selection bias in relational learning. In ICML "02:
Proceedings of the Nineteenth International Conference on
Machine Learning, pages 259-266, San Francisco, CA,
USA, 2002. Morgan Kaufmann Publishers Inc.
[14] O. Kurland and L. Lee. Corpus structure, language models,
and ad-hoc information retrieval. In SIGIR "04:
Proceedings of the 27th annual international conference on
Research and development in information retrieval, pages
194-201, New York, NY, USA, 2004. ACM Press.
[15] M. Montague and J. A. Aslam. Relevance score
normalization for metasearch. In CIKM "01: Proceedings of
the tenth international conference on Information and
knowledge management, pages 427-433, New York, NY,
USA, 2001. ACM Press.
[16] T. Qin, T.-Y. Liu, X.-D. Zhang, Z. Chen, and W.-Y. Ma. A
study of relevance propagation for web search. In SIGIR
"05: Proceedings of the 28th annual international ACM
SIGIR conference on Research and development in
information retrieval, pages 408-415, New York, NY, USA,
2005. ACM Press.
[17] I. Soboroff, C. Nicholas, and P. Cahan. Ranking retrieval
systems without relevance judgments. In SIGIR "01:
Proceedings of the 24th annual international ACM SIGIR
conference on Research and development in information
retrieval, pages 66-73, New York, NY, USA, 2001. ACM
Press.
[18] V. Vinay, I. J. Cox, N. Milic-Frayling, and K. Wood. On
ranking the effectiveness of searches. In SIGIR "06:
Proceedings of the 29th annual international ACM SIGIR
conference on Research and development in information
retrieval, pages 398-404, New York, NY, USA, 2006. ACM
Press.
[19] Y. Zhou and W. B. Croft. Ranking robustness: a novel
framework to predict query performance. In CIKM "06:
Proceedings of the 15th ACM international conference on
Information and knowledge management, pages 567-574,
New York, NY, USA, 2006. ACM Press. | query ranking;spatial autocorrelation;predictor predictive power;predictor relationship;language model score;ranking of query;regularization;autocorrelation;performance prediction;information retrieval;zero relevance judgment;cluster hypothesis;predictive power of predictor;relationship of predictor |
train_H-50 | An Outranking Approach for Rank Aggregation in Information Retrieval | Research in Information Retrieval usually shows performance improvement when many sources of evidence are combined to produce a ranking of documents (e.g., texts, pictures, sounds, etc.). In this paper, we focus on the rank aggregation problem, also called data fusion problem, where rankings of documents, searched into the same collection and provided by multiple methods, are combined in order to produce a new ranking. In this context, we propose a rank aggregation method within a multiple criteria framework using aggregation mechanisms based on decision rules identifying positive and negative reasons for judging whether a document should get a better rank than another. We show that the proposed method deals well with the Information Retrieval distinctive features. Experimental results are reported showing that the suggested method performs better than the well-known CombSUM and CombMNZ operators. | 1. INTRODUCTION
A wide range of current Information Retrieval (IR)
approaches are based on various search models (Boolean,
Vector Space, Probabilistic, Language, etc. [2]) in order to
retrieve relevant documents in response to a user request. The
result lists produced by these approaches depend on the
exact definition of the relevance concept.
Rank aggregation approaches, also called data fusion
approaches, consist in combining these result lists in order
to produce a new and hopefully better ranking. Such
approaches give rise to metasearch engines in the Web context.
We consider, in the following, cases where only ranks are
available and no other additional information is provided
such as the relevance scores. This corresponds indeed to the
reality, where only ordinal information is available.
Data fusion is also relevant in other contexts, such as when
the user writes several queries expressing his/her information need
(e.g., a boolean query and a natural language query) [4], or
when many document surrogates are available [16].
Several studies argued that rank aggregation has the
potential of combining effectively all the various sources of
evidence considered in various input methods. For instance,
experiments carried out in [16], [30], [4] and [19] showed that
documents which appear in the lists of the majority of the
input methods are more likely to be relevant. Moreover, Lee
[19] and Vogt and Cottrell [31] found that various retrieval
approaches often return very different irrelevant documents,
but many of the same relevant documents. Bartell et al.
[3] also found that rank aggregation methods improve the
performances w.r.t. those of the input methods, even when
some of them have weak individual performances. These
methods also tend to smooth out biases of the input
methods according to Montague and Aslam [22]. Data fusion has
recently been proved to improve performances for both the
ad-hoc retrieval and categorization tasks within the TREC
genomics track in 2005 [1].
The rank aggregation problem was addressed in various
fields such as i) in social choice theory which studies
voting algorithms which specify winners of elections or winners
of competitions in tournaments [29], ii) in statistics when
studying correlation between rankings, iii) in distributed
databases when results from different databases must be
combined [12], and iv) in collaborative filtering [23].
Most current rank aggregation methods consider each
input ranking as a permutation over the same set of items.
They also give rigid interpretation to the exact ranking of
the items. Both of these assumptions are rather not valid in
the IR context, as will be shown in the following sections.
The remaining of the paper is organized as follows. We
first review current rank aggregation methods in Section 2.
Then we outline the specificities of the data fusion problem
in the IR context (Section 3). In Section 4, we present a
new aggregation method which is proven to best fit the IR
context. Experimental results are presented in Section 5 and
conclusions are provided in a final section.
2. RELATED WORK
As pointed out by Riker [25], we can distinguish two
families of rank aggregation methods: positional methods which
assign scores to items to be ranked according to the ranks
they receive and majoritarian methods which are based on
pairwise comparisons of items to be ranked. These two
families of methods find their roots in the pioneering works of
Borda [5] and Condorcet [7], respectively, in the social choice
literature.
2.1 Preliminaries
We first introduce some basic notations to present the
rank aggregation methods in a uniform way. Let D =
{d1, d2, . . . , dnd } be a set of nd documents. A list or a
ranking ≻_j is an ordering defined on D_j ⊆ D (j = 1, . . . , n).
Thus, d_i ≻_j d_{i'} means d_i 'is ranked better than' d_{i'} in ≻_j.
When D_j = D, ≻_j is said to be a full list. Otherwise, it
is a partial list. If d_i belongs to D_j, r_i^j denotes the rank
or position of d_i in ≻_j. We assume that the best answer
(document) is assigned the position 1 and the worst one is
assigned the position |D_j|. Let P(D) denote the set of all
rankings on D or on subsets of D. A profile is an n-tuple
of rankings PR = (≻_1, ≻_2, . . . , ≻_n). Restricting PR to the
rankings containing document di defines PRi. We also call
the number of rankings which contain document di the rank
hits of di [19].
The rank aggregation or data fusion problem consists of
finding a ranking function or mechanism Ψ (also called a
social welfare function in the social choice theory terminology)
defined by:
$$\Psi : P(D)^n \to P(D), \qquad PR = (\succ_1, \succ_2, \ldots, \succ_n) \mapsto \sigma = \Psi(PR)$$
where σ is called a consensus ranking.
2.2 Positional Methods
2.2.1 Borda Count
This method [5] first assigns a score $\sum_{j=1}^{n} r_i^j$ to each
document d_i. Documents are then ranked by increasing order
of this score, breaking ties, if any, arbitrarily.
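A minimal sketch of this Borda-style aggregation (assuming full lists, i.e. every document appears in every ranking) could look as follows; ties in the total are broken by Python's stable sort rather than truly arbitrarily.

```python
from collections import defaultdict

def borda(rankings):
    """Sum each document's positions over the input rankings (position 1 = best)
    and order documents by increasing total."""
    totals = defaultdict(int)
    for ranking in rankings:                       # ranking = list of doc ids, best first
        for position, doc in enumerate(ranking, start=1):
            totals[doc] += position
    return sorted(totals, key=lambda doc: totals[doc])

consensus = borda([["d1", "d2", "d3"], ["d2", "d1", "d3"], ["d1", "d3", "d2"]])
# -> ["d1", "d2", "d3"]
```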
2.2.2 Linear Combination Methods
This family of methods basically combine scores of
documents. When used for the rank aggregation problem, ranks
are assumed to be scores or performances to be combined
using aggregation operators such as the weighted sum or
some variation of it [3, 31, 17, 28].
For instance, Callan et al. [6] used the inference
networks model [30] to combine rankings. Fox and Shaw [15]
proposed several combination strategies which are
CombSUM, CombMIN, CombMAX, CombANZ and CombMNZ.
The first three operators correspond to the sum, min and
max operators, respectively. CombANZ and CombMNZ
respectively divide and multiply the CombSUM score by
the rank hits. It is shown in [19] that the CombSUM and
CombMNZ operators perform better than the others.
Metasearch engines such as SavvySearch and MetaCrawler use
the CombSUM strategy to fuse rankings.
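The sketch below shows one way to apply CombSUM and CombMNZ when only ranks are available: each position is converted to a score (here, list length minus position plus one, which is one common convention rather than the only one), and CombMNZ multiplies the summed score by the rank hits.

```python
from collections import defaultdict

def comb_sum_mnz(rankings):
    """Return CombSUM and CombMNZ scores from rank-derived document scores."""
    sums, hits = defaultdict(float), defaultdict(int)
    for ranking in rankings:
        length = len(ranking)
        for position, doc in enumerate(ranking, start=1):
            sums[doc] += length - position + 1      # rank-to-score conversion (a choice)
            hits[doc] += 1                          # rank hits of the document
    comb_sum = dict(sums)
    comb_mnz = {doc: sums[doc] * hits[doc] for doc in sums}
    return comb_sum, comb_mnz

comb_sum, comb_mnz = comb_sum_mnz([["d1", "d2", "d3"], ["d2", "d4"]])
fused = sorted(comb_mnz, key=comb_mnz.get, reverse=True)
```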
2.2.3 Footrule Optimal Aggregation
In this method, a consensus ranking minimizes the
Spearman footrule distance from the input rankings [21].
Formally, given two full lists ≻_j and ≻_{j'}, this distance is given
by $F(\succ_j, \succ_{j'}) = \sum_{i=1}^{n_d} |r_i^j - r_i^{j'}|$. It extends to several lists
as follows. Given a profile PR and a consensus ranking
σ, the Spearman footrule distance of σ to PR is given by
$F(\sigma, PR) = \sum_{j=1}^{n} F(\sigma, \succ_j)$.
Cook and Kress [8] proposed a similar method which
consists in optimizing the distance $D(\succ_j, \succ_{j'}) = \frac{1}{2}\sum_{i,i'=1}^{n_d} |r_{i,i'}^j - r_{i,i'}^{j'}|$,
where $r_{i,i'}^j = r_i^j - r_{i'}^j$. This formulation has the
advantage that it considers the intensity of preferences.
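The footrule distance itself is straightforward to compute; the optimization over candidate consensus rankings (e.g., via minimum-cost matching) is not shown here. This sketch assumes full lists over the same set of documents.

```python
def footrule(ranking_a, ranking_b):
    """Spearman footrule distance between two full lists of the same documents."""
    pos_a = {doc: i for i, doc in enumerate(ranking_a, start=1)}
    pos_b = {doc: i for i, doc in enumerate(ranking_b, start=1)}
    return sum(abs(pos_a[doc] - pos_b[doc]) for doc in pos_a)

def footrule_to_profile(consensus, profile):
    """F(sigma, PR): distance of a candidate consensus ranking to a whole profile."""
    return sum(footrule(consensus, ranking) for ranking in profile)
```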
2.2.4 Probabilistic Methods
This kind of methods assume that the performance of the
input methods on a number of training queries is indicative
of their future performance. During the training process,
probabilities of relevance are calculated. For subsequent
queries, documents are ranked based on these probabilities.
For instance, in [20], each input ranking j is divided into a
number of segments, and the conditional probability of
relevance (R) of each document di depending on the segment
k it occurs in, is computed, i.e. prob(R|d_i, k, ≻_j). For
subsequent queries, the score of each document d_i is given by
$\sum_{j=1}^{n} \frac{prob(R|d_i, k, \succ_j)}{k}$. Le Calve and Savoy [18] suggest using
a logistic regression approach for combining scores. Training
data is needed to infer the model parameters.
2.3 Majoritarian Methods
2.3.1 Condorcet Procedure
The original Condorcet rule [7] specifies that a winner of
the election is any item that beats or ties with every other
item in a pairwise contest. Formally, let $C(d_i \sigma d_{i'}) = \{\succ_j \in PR : d_i \succ_j d_{i'}\}$
be the coalition of rankings that are
concordant with establishing d_i σ d_{i'}, i.e. with the proposition
d_i 'should be ranked better than' d_{i'} in the final ranking σ.
d_i beats or ties with d_{i'} iff $|C(d_i \sigma d_{i'})| \ge |C(d_{i'} \sigma d_i)|$.
The repetitive application of the Condorcet algorithm can
produce a ranking of items in a natural way: select the
Condorcet winner, remove it from the lists, and repeat the
previous two steps until there are no more documents to rank.
Since there is not always Condorcet winners, variations of
the Condorcet procedure have been developed within the
multiple criteria decision aid theory, with methods such as
ELECTRE [26].
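A sketch of the repetitive procedure is given below, assuming full lists and falling back to an arbitrary pick when no Condorcet winner exists (the paper instead points to ELECTRE-style refinements for that case). Function and variable names are ours.

```python
def condorcet_ranking(rankings, documents):
    """Repeatedly select a document that beats or ties every remaining document
    in pairwise contests, append it to the consensus, and remove it."""
    positions = [{doc: i for i, doc in enumerate(r)} for r in rankings]

    def beats_or_ties(a, b):
        for_a = sum(1 for p in positions if p[a] < p[b])
        for_b = sum(1 for p in positions if p[b] < p[a])
        return for_a >= for_b

    remaining, consensus = list(documents), []
    while remaining:
        winner = next((a for a in remaining
                       if all(beats_or_ties(a, b) for b in remaining if b != a)), None)
        if winner is None:            # no Condorcet winner: arbitrary fallback
            winner = remaining[0]
        consensus.append(winner)
        remaining.remove(winner)
    return consensus
```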
2.3.2 Kemeny Optimal Aggregation
As in section 2.2.3, a consensus ranking minimizes a
geometric distance from the input rankings, where the Kendall
tau distance is used instead of the Spearman footrule
distance. Formally, given two full lists ≻_j and ≻_{j'}, the Kendall
tau distance is given by $K(\succ_j, \succ_{j'}) = |\{(d_i, d_{i'}) : i < i',\ r_i^j < r_{i'}^j,\ r_i^{j'} > r_{i'}^{j'}\}|$,
i.e. the number of pairwise disagreements
between the two lists. It is easy to show that the consensus
ranking corresponds to the geometric median of the input
rankings and that the Kemeny optimal aggregation problem
corresponds to the minimum feedback edge set problem.
2.3.3 Markov Chain Methods
Markov chains (MCs) have been used by Dwork et al. [11]
as a ‘natural" method to obtain a consensus ranking where
states correspond to the documents to be ranked and the
transition probabilities vary depending on the interpretation
of the transition event. In the same reference, the authors
proposed four specific MCs and experimental testing had
shown that the following MC is the best performing one
(see also [24]):
• MC4: move from the current state d_i to the next state
d_{i'} by first choosing a document d_{i'} uniformly from D.
If for the majority of the rankings we have $r_{i'}^j \le r_i^j$,
then move to d_{i'}, else stay in d_i.
The consensus ranking corresponds to the stationary
distribution of MC4.
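A possible implementation of MC4 is sketched below. The handling of documents missing from some rankings and the small damping term that guarantees a unique stationary distribution are implementation choices of this sketch, not part of the original description.

```python
import numpy as np

def mc4(rankings, documents, damping=0.05):
    """Rank documents by the stationary distribution of the MC4 chain."""
    docs = list(documents)
    idx = {d: i for i, d in enumerate(docs)}
    n = len(docs)
    pos = [{d: i for i, d in enumerate(r)} for r in rankings]
    P = np.zeros((n, n))
    for a in docs:
        for b in docs:
            if a == b:
                continue
            both = [p for p in pos if a in p and b in p]
            better = sum(1 for p in both if p[b] <= p[a])
            # move from a to b (chosen uniformly) if b is ranked at least as well
            # as a in a majority of the rankings containing both documents
            if both and better > len(both) / 2.0:
                P[idx[a], idx[b]] = 1.0 / n
    for i in range(n):
        P[i, i] = 1.0 - P[i].sum()               # stay put otherwise
    P = (1 - damping) * P + damping / n          # damping for ergodicity (our choice)
    pi = np.full(n, 1.0 / n)
    for _ in range(1000):                        # power iteration
        pi = pi @ P
    return sorted(docs, key=lambda d: -pi[idx[d]])
```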
3. SPECIFICITIES OF THE RANK
AGGREGATION PROBLEM IN THE IR CONTEXT
3.1 Limited Significance of the Rankings
The exact positions of documents in one input ranking
have limited significance and should not be overemphasized.
For instance, if three relevant documents occupy the first
three positions, any permutation of these three items has
the same value. Indeed, in the IR context, the complete
order provided by an input method may hide ties. In this
case, we call such rankings semi orders. This was outlined in
[13] as the problem of aggregation with ties. It is therefore
important to build the consensus ranking based on robust
information:
• Documents with near positions in j are more likely
to have similar interest or relevance. Thus a slight
perturbation of the initial ranking is meaningless.
• Assuming that document di is better ranked than
document di in a ranking j, di is more likely to be
definitively more relevant than di in j when the number
of intermediate positions between di and di increases.
3.2 Partial Lists
In real world applications, such as metasearch engines,
rankings provided by the input methods are often partial
lists. This was outlined in [14] as the problem of having to
merge top-k results from various input lists. For instance,
in the experiments carried out by Dwork et al. [11], authors
found that among the top 100 best documents of 7 input
search engines, 67% of the documents were present in only
one search engine, whereas less than two documents were
present in all the search engines.
Rank aggregation of partial lists raises four major
difficulties which we state hereafter, proposing for each of them
various working assumptions:
1. Partial lists can have various lengths, which can favour
long lists. We thus consider the following two working
hypotheses:
H1_k: We only consider the top k best documents from
each input ranking.
H1_all: We consider all the documents from each input
ranking.
2. Since there are different documents in the input
rankings, we must decide which documents should be kept
in the consensus ranking. Two working hypotheses are
therefore considered:
H2_k: We only consider documents which are present in
at least k input rankings (k > 1).
H2_all: We consider all the documents which are ranked
in at least one input ranking.
Hereafter, we call documents which will be retained
in the consensus ranking, candidate documents, and
documents that will be excluded from the consensus
ranking, excluded documents. We also call a candidate
document which is missing in one or more rankings, a
missing document.
3. Some candidate documents are missing documents in
some input rankings. Main reasons for a missing
document are that it was not indexed or it was indexed
but deemed irrelevant; usually this information is not
available. We consider the following two working
hypotheses:
H3_yes: Each missing document in each ≻_j is assigned
a position.
H3_no: No assumption is made, that is each missing
document is considered neither better nor worse than any
other document.
4. When assumption H2_k holds, each input ranking may
contain documents which will not be considered in the
consensus ranking. Regarding the positions of the
candidate documents, we can consider the following
working hypotheses:
H4_init: The initial positions of candidate documents
are kept in each input ranking.
H4_new: Candidate documents receive new positions in
each input ranking, after discarding excluded ones.
In the IR context, rank aggregation methods need to
decide more or less explicitly which assumptions to retain
w.r.t. the above-mentioned difficulties.
4. OUTRANKING APPROACH FOR RANK
AGGREGATION
4.1 Presentation
Positional methods consider implicitly that the positions
of the documents in the input rankings are scores giving thus
a cardinal meaning to an ordinal information. This
constitutes a strong assumption that is questionable, especially
when the input rankings have different lengths. Moreover,
for positional methods, assumptions H3
and H4
, which are
often arbitrary, have a strong impact on the results. For
instance, let us consider an input ranking of 500 documents
out of 1000 candidate documents. Whether we assign to
each of the missing documents the position 1, 501, 750 or
1000 -corresponding to variations of H3
yes- will give rise to
very contrasted results, especially regarding the top of the
consensus ranking.
Majoritarian methods do not suffer from the
above-mentioned drawbacks of the positional methods since they build
consensus rankings exploiting only ordinal information
contained in the input rankings. Nevertheless, they suppose
that such rankings are complete orders, ignoring that they
may hide ties. Therefore, majoritarian methods base
consensus rankings on illusory discriminant information rather
than less discriminant but more robust information.
Trying to overcome the limits of current rank aggregation
methods, we found that outranking approaches, which were
initially used for multiple criteria aggregation problems [26],
can also be used for the rank aggregation purpose, where
each ranking plays the role of a criterion. Therefore, in
order to decide whether a document di should be ranked
better than di in the consensus ranking σ, the two following
conditions should be met:
• a concordance condition which ensures that a
majority of the input rankings are concordant with diσdi
(majority principle).
• a discordance condition which ensures that none of the
discordant input rankings strongly refutes dσd
(respect of minorities principle).
Formally, the concordance coalition with d_i σ d_{i'} is
$$C_{s_p}(d_i \sigma d_{i'}) = \{\succ_j \in PR : r_i^j \le r_{i'}^j - s_p\}$$
where sp is a preference threshold which is the variation
of document positions -whether it is absolute or relative to
the ranking length- which draws the boundaries between an
indifference and a preference situation between documents.
The discordance coalition with d_i σ d_{i'} is
$$D_{s_v}(d_i \sigma d_{i'}) = \{\succ_j \in PR : r_i^j \ge r_{i'}^j + s_v\}$$
where sv is a veto threshold which is the variation of
document positions -whether it is absolute or relative to the
ranking length- which draws the boundaries between a weak
and a strong opposition to diσdi .
Depending on the exact definition of the preceding
concordance and discordance coalitions leading to the definition
of some decision rules, several outranking relations can be
defined. They can be more or less demanding depending on
i) the values of the thresholds sp and sv, ii) the importance
or minimal size cmin required for the concordance coalition,
and iii) the importance or maximum size dmax of the
discordance coalition.
A generic outranking relation can thus be defined as
follows:
$$d_i\, S^{(s_p, s_v, c_{min}, d_{max})}\, d_{i'} \iff |C_{s_p}(d_i \sigma d_{i'})| \ge c_{min} \text{ AND } |D_{s_v}(d_i \sigma d_{i'})| \le d_{max}$$
This expression defines a family of nested outranking
relations since $S^{(s_p', s_v', c_{min}', d_{max}')} \subseteq S^{(s_p, s_v, c_{min}, d_{max})}$ when
$c_{min}' \ge c_{min}$ and/or $d_{max}' \le d_{max}$ and/or $s_p' \ge s_p$ and/or
$s_v' \le s_v$. This expression also generalizes the majority rule
which corresponds to the particular relation $S^{(0, \infty, \frac{n}{2}, n)}$. It
also satisfies important properties of rank aggregation
methods, called neutrality, Pareto-optimality, Condorcet
property and Extended Condorcet property, in the social choice
literature [29].
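The decision rule behind S(s_p, s_v, c_min, d_max) is easy to state in code. In the sketch below, thresholds are absolute position differences and positions are restricted to rankings containing both documents; both points are simplifying assumptions (the paper also allows thresholds relative to list length). The example reuses the profile of the illustrative example in Section 4.2.

```python
def outranks(pos_i, pos_j, s_p, s_v, c_min, d_max):
    """True iff d_i S(s_p, s_v, c_min, d_max) d_i', where pos_i and pos_j list the
    positions of d_i and d_i' in the rankings containing both documents."""
    concordant = sum(1 for ri, rj in zip(pos_i, pos_j) if ri <= rj - s_p)
    discordant = sum(1 for ri, rj in zip(pos_i, pos_j) if ri >= rj + s_v)
    return concordant >= c_min and discordant <= d_max

# Positions of d1 and d4 in the four rankings of the illustrative example below.
d1, d4 = [1, 3, 1, 5], [4, 4, 5, 2]
assert outranks(d1, d4, s_p=1, s_v=4, c_min=2, d_max=1)        # d1 S^1 d4 holds
assert not outranks(d4, d1, s_p=1, s_v=4, c_min=2, d_max=1)
```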
Outranking relations are not necessarily transitive and do
not necessarily correspond to rankings since directed cycles
may exist. Therefore, we need specific procedures in order to
derive a consensus ranking. We propose the following
procedure which finds its roots in [27]. It consists in partitioning
the set of documents into r ranked classes.
Each class Ch contains documents with the same relevance
and results from the application of all relations (if possible)
to the set of documents remaining after previous classes are
computed. Documents within the same equivalence class are
ranked arbitrarily.
Formally, let
• R be the set of candidate documents for a query,
• S1
, S2
, . . . be a family of nested outranking relations,
• Fk(di, E) = |{di ∈ E : diSk
di }| be the number of
documents in E(E ⊆ R) that could be considered
‘worse" than di according to relation Sk
,
• fk(di, E) = |{di ∈ E : di Sk
di}| be the number of
documents in E that could be considered ‘better" than
di according to Sk
,
• sk(di, E) = Fk(di, E) − fk(di, E) be the qualification
of di in E according to Sk
.
Each class Ch results from a distillation process. It
corresponds to the last distillate of a series of sets E0 ⊇ E1 ⊇ . . .
where E0 = R \ (C1 ∪ . . . ∪ Ch−1) and Ek is a reduced
subset of Ek−1 resulting from the application of the following
procedure:
1. compute for each di ∈ Ek−1 its qualification according
to Sk
, i.e. sk(di, Ek−1),
2. define smax = maxdi∈Ek−1 {sk(di, Ek−1)}, then
3. Ek = {di ∈ Ek−1 : sk(di, Ek−1) = smax}
When one outranking relation is used, the distillation
process stops after the first application of the previous
procedure, i.e., Ch corresponds to distillate E1. When different
outranking relations are used, the distillation process stops
when all the pre-defined outranking relations have been used
or when |Ek| = 1.
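The distillation process can be sketched as follows. Here each relation is a boolean function over pairs of documents, ordered from the weakest to the strongest relation; these conventions, like the function names, are assumptions of the sketch. With a single relation built from the example profile of Section 4.2, it reproduces the classes {d1, d2, d3}, {d4}, {d5} derived there.

```python
def distillation_ranking(candidates, relations):
    """Partition candidate documents into ordered classes using a family of nested
    outranking relations; relations[k](a, b) should return True iff a S^k b."""
    def qualification(doc, pool, rel):
        strength = sum(1 for other in pool if other != doc and rel(doc, other))
        weakness = sum(1 for other in pool if other != doc and rel(other, doc))
        return strength - weakness

    classes, rest = [], list(candidates)
    while rest:
        distillate = list(rest)                       # E_0 for the current class
        for rel in relations:
            scores = {d: qualification(d, distillate, rel) for d in distillate}
            s_max = max(scores.values())
            distillate = [d for d in distillate if scores[d] == s_max]
            if len(distillate) == 1:
                break
        classes.append(distillate)
        rest = [d for d in rest if d not in distillate]
    return classes
```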
4.2 Illustrative Example
This section illustrates the concepts and procedures of
section 4.1. Let us consider a set of candidate documents
R = {d1, d2, d3, d4, d5}. The following table gives a profile
PR of different rankings of the documents of R: PR = ( 1
, 2, 3, 4).
Table 1: Rankings of documents
rj
i 1 2 3 4
d1 1 3 1 5
d2 2 1 3 3
d3 3 2 2 1
d4 4 4 5 2
d5 5 5 4 4
Let us suppose that the preference and veto thresholds
are set to values 1 and 4 respectively, and that the
concordance and discordance thresholds are set to values 2 and 1
respectively. The following tables give the concordance,
discordance and outranking matrices. Each entry csp (di, di )
(dsv (di, di )) in the concordance (discordance) matrix gives
the number of rankings that are concordant (discordant)
with diσdi , i.e. csp (di, di ) = |Csp (diσdi )| and dsv (di, di ) =
|Dsv (diσdi )|.
Table 2: Computation of the outranking relation

Concordance matrix:
      d1  d2  d3  d4  d5
  d1   -   2   2   3   3
  d2   2   -   2   3   4
  d3   2   2   -   4   4
  d4   1   1   0   -   3
  d5   1   0   0   1   -

Discordance matrix:
      d1  d2  d3  d4  d5
  d1   -   0   1   0   0
  d2   0   -   0   0   0
  d3   0   0   -   0   0
  d4   1   0   0   -   0
  d5   1   1   0   0   -

Outranking matrix (S1):
      d1  d2  d3  d4  d5
  d1   -   1   1   1   1
  d2   1   -   1   1   1
  d3   1   1   -   1   1
  d4   0   0   0   -   1
  d5   0   0   0   0   -
For instance, the concordance coalition for the assertion
d1σd4 is C_1(d1σd4) = {≻_1, ≻_2, ≻_3} and the discordance
coalition for the same assertion is D_4(d1σd4) = ∅.
Therefore, c_1(d1, d4) = 3, d_4(d1, d4) = 0, and d1 S^1 d4 holds.
Notice that Fk(di, R) (fk(di, R)) is given by summing the
values of the ith
row (column) of the outranking matrix. The
consensus ranking is obtained as follows: to get the first class
C1, we compute the qualifications of all the documents of
E0 = R with respect to S1
. They are respectively 2, 2, 2, -2
and -4. Therefore smax equals 2 and C1 = E1 = {d1, d2, d3}.
Observe that, if we had used a second outranking relation
S2(⊇ S1), these three documents could have been
possibly discriminated. At this stage, we remove documents of
C1 from the outranking matrix and compute the next class
C2: we compute the new qualifications of the documents of
E0 = R \ C1 = {d4, d5}. They are respectively 1 and -1. So
C2 = E1 = {d4}. The last document d5 is the only
document of the last class C3. Thus, the consensus ranking is
{d1, d2, d3} → {d4} → {d5}.
5. EXPERIMENTS AND RESULTS
5.1 Test Setting
To facilitate empirical investigation of the proposed
methodology, we developed a prototype metasearch engine that
implements a version of our outranking approach for rank
aggregation. In this paper, we apply our approach to the
Topic Distillation (TD) task of TREC-2004 Web track [10].
In this task, there are 75 topics where only a short
description of each is given. For each query, we retained the
rankings of the 10 best runs of the TD task which are provided
by TREC-2004 participating teams. The performances of
these runs are reported in table 3.
Table 3: Performances of the 10 best runs of the TD
task of TREC-2004
Run Id MAP P@10 S@1 S@5 S@10
uogWebCAU150 17.9% 24.9% 50.7% 77.3% 89.3%
MSRAmixed1 17.8% 25.1% 38.7% 72.0% 88.0%
MSRC04C12 16.5% 23.1% 38.7% 74.7% 80.0%
humW04rdpl 16.3% 23.1% 37.3% 78.7% 90.7%
THUIRmix042 14.7% 20.5% 21.3% 58.7% 74.7%
UAmsT04MWScb 14.6% 20.9% 36.0% 66.7% 76.0%
ICT04CIIS1AT 14.1% 20.8% 33.3% 64.0% 78.7%
SJTUINCMIX5 12.9% 18.9% 29.3% 57.3% 72.0%
MU04web1 11.5% 19.9% 33.3% 64.0% 76.0%
MeijiHILw3 11.5% 15.3% 30.7% 54.7% 64.0%
Average 14.7% 21.2% 34.9% 66.8% 78.94%
For each query, each run provides a ranking of about 1000
documents. The number of documents retrieved by all these
runs ranges from 543 to 5769. Their average (median)
number is 3340 (3386). It is worth noting that we found similar
distributions of the documents among the rankings as in
[11].
For evaluation, we used the ‘trec eval" standard tool which
is used by the TREC community to calculate the standard
measures of system effectiveness which are Mean Average
Precision (MAP) and Success@n (S@n) for n=1, 5 and 10.
Our approach effectiveness is compared against some high
performing official results from TREC-2004 as well as against
some standard rank aggregation algorithms. In the
experiments, significance testing is mainly based on the t-student
statistic which is computed on the basis of the MAP values of
the compared runs. In the tables of the following section,
statistically significant differences are marked with an
asterisk. Values between brackets of the first column of each
table, indicate the parameter value of the corresponding run.
5.2 Results
We carried out several series of runs in order to i) study
performance variations of the outranking approach when
tuning the parameters and working assumptions, ii)
compare performances of the outranking approach vs standard
rank aggregation strategies , and iii) check whether rank
aggregation performs better than the best input rankings.
We set our basic run mcm with the following parameters.
We considered that each input ranking is a complete
order (sp = 0) and that an input ranking strongly refutes
diσdi when the difference of both document positions is
large enough (sv = 75%). Preference and veto thresholds
are computed proportionally to the number of documents
retained in each input ranking. They consequently may vary
from one ranking to another. In addition, to accept the
assertion diσdi , we supposed that the majority of the
rankings must be concordant (cmin = 50%) and that every input
ranking can impose its veto (dmax = 0). Concordance and
discordance thresholds are computed for each tuple (di, di )
as the percentage of the input rankings of PRi ∩PRi . Thus,
our choice of parameters leads to the definition of the
outranking relation S(0,75%,50%,0).
To test the run mcm, we chose the following
assumptions. We retained the top 100 best documents from each
input ranking (H1_100), only considered documents which are
present in at least half of the input rankings (H2_5), and
assumed H3_no and H4_new. In these conditions, the number of
successful documents was about 100 on average, and the
computation time per query was less than one second.
Obviously, modifying the working assumptions should have
deeper impact on the performances than tuning our model
parameters. This was validated by preliminary experiments.
Thus, we hereafter begin by studying performance variation
when different sets of assumptions are considered.
Afterwards, we study the impact of tuning parameters. Finally,
we compare our model performances w.r.t. the input
rankings as well as some standard data fusion algorithms.
5.2.1 Impact of the Working Assumptions
Table 4 summarizes the performance variation of the
outranking approach under different working hypotheses. In
Table 4: Impact of the working assumptions
Run Id MAP S@1 S@5 S@10
mcm 18.47% 41.33% 81.33% 86.67%
mcm22 (H3_yes) 17.72% (-4.06%) 34.67% 81.33% 86.67%
mcm23 (H4_init) 18.26% (-1.14%) 41.33% 81.33% 86.67%
mcm24 (H1_all) 20.67% (+11.91%*) 38.66% 80.00% 86.66%
mcm25 (H2_all) 21.68% (+17.38%*) 40.00% 78.66% 89.33%
In this table, we first show that run mcm22, in which missing
documents are all put in the same last position of each input
ranking, leads to a performance drop w.r.t. run mcm.
Moreover, S@1 moves from 41.33% to 34.67% (-16.11%). This
shows that several relevant documents which were initially
ranked first in the consensus ranking in mcm lose
this first position but remain in the top 5 documents,
since S@5 did not change. We also conclude that documents
which have rather good positions in some input rankings are
more likely to be relevant, even though they are missing from
some other rankings. Consequently, when they are missing
from some rankings, assigning worse ranks to these documents
is harmful for performance.
Also, from Table 4, we found that the performance of
runs mcm and mcm23 is similar. Therefore, the outranking
approach is not sensitive to whether the initial positions of
candidate documents are kept or recomputed after discarding
excluded ones.
Table 4 also shows that the performance of the outranking
approach increases significantly for runs mcm24 and mcm25.
Thus, considering either all the documents which are
present in half of the rankings (mcm24) or all
the documents which are ranked in the first 100 positions of
one or more rankings (mcm25) increases performance. This
result was predictable since, in both cases, we have more
detailed information on the relative importance of the documents.
Tables 5 and 6 confirm this finding. Table 5, where
values between brackets in the first column give the number
of documents retained from each input ranking,
shows that selecting more documents from each input
ranking leads to a performance increase. It is worth mentioning
that selecting more than 600 documents from each input
ranking does not improve performance.
Table 5: Impact of the number of retained
documents
Run Id MAP S@1 S@5 S@10
mcm (100) 18.47% 41.33% 81.33% 86.67%
mcm24-1 (200) 19.32% (+4.60%) 42.67% 78.67% 88.00%
mcm24-2 (400) 19.88% (+7.63%*) 37.33% 80.00% 88.00%
mcm24-3 (600) 20.80% (+12.62%*) 40.00% 80.00% 88.00%
mcm24-4 (800) 20.66% (+11.86%*) 40.00% 78.67% 86.67%
mcm24 (1000) 20.67% (+11.91%*) 38.66% 80.00% 86.66%
Table 6 reports runs corresponding to variations of H2_k.
Values between brackets are rank hits. For instance, in
the run mcm32, only documents which are present in 3 or
more input rankings were considered successful. This
table shows that performance is significantly better when rare
documents are considered, whereas it decreases significantly
when these documents are discarded. Therefore, we
conclude that many of the relevant documents are retrieved by
a rather small set of IR models.
Table 6: Performance considering different rank hits
Run Id MAP S@1 S@5 S@10
mcm25 (1) 21.68% (+17.38%*) 40.00% 78.67% 89.33%
mcm32 (3) 18.98% (+2.76%) 38.67% 80.00% 85.33%
mcm (5) 18.47% 41.33% 81.33% 86.67%
mcm33 (7) 15.83% (-14.29%*) 37.33% 78.67% 85.33%
mcm34 (9) 10.96% (-40.66%*) 36.11% 66.67% 70.83%
mcm35 (10) 7.42% (-59.83%*) 39.22% 62.75% 64.70%
For both runs mcm24 and mcm25, the number of successful
documents was about 1000; the computation time per query
therefore increased to around 5 seconds.
5.2.2 Impact of the Variation of the Parameters
Table 7 shows the performance variation of the outranking
approach when different preference thresholds are considered.
We found a performance improvement up to threshold values
of about 5%; beyond that, performance decreases, and the
decrease becomes significant for threshold values greater than
10%. Moreover, S@1 improves from 41.33% to 46.67% when
the preference threshold changes from 0 to 5%. We can thus
conclude that the input rankings are semiorders rather than
complete orders.
Table 8 shows the evolution of the performance measures
w.r.t. the concordance threshold. We can conclude that, in
order to rank document di before di' in the consensus ranking,
Table 7: Impact of the variation of the preference
threshold from 0 to 12.5%
Run Id MAP S@1 S@5 S@10
mcm (0%) 18.47% 41.33% 81.33% 86.67%
mcm1 (1%) 18.57% (+0.54%) 41.33% 81.33% 86.67%
mcm2 (2.5%) 18.63% (+0.87%) 42.67% 78.67% 86.67%
mcm3 (5%) 18.69% (+1.19%) 46.67% 81.33% 86.67%
mcm4 (7.5%) 18.24% (-1.25%) 46.67% 81.33% 86.67%
mcm5 (10%) 17.93% (-2.92%) 40.00% 82.67% 86.67%
mcm5b (12.5%) 17.51% (-5.20%*) 41.33% 80.00% 86.67%
at least half of the input rankings in PRi ∩ PRi' should be
concordant. Performance drops significantly for very low
and very high values of the concordance threshold. In fact,
for such values the concordance condition is either fulfilled
almost always (by too many document pairs) or never fulfilled,
so the outranking relation becomes either too weak or too
strong, respectively.
Table 8: Impact of the variation of cmin
Run Id MAP S@1 S@5 S@10
mcm11 (20%) 17.63% (-4.55%*) 41.33% 76.00% 85.33%
mcm12 (40%) 18.37% (-0.54%) 42.67% 76.00% 86.67%
mcm (50%) 18.47% 41.33% 81.33% 86.67%
mcm13 (60%) 18.42% (-0.27%) 40.00% 78.67% 86.67%
mcm14 (80%) 17.43% (-5.63%*) 40.00% 78.67% 86.67%
mcm15 (100%) 16.12% (-12.72%*) 41.33% 70.67% 85.33%
In the experiments, varying the veto threshold as well as
the discordance threshold within reasonable intervals does
not have a significant impact on the performance measures. In
fact, runs with different veto thresholds (sv ∈ [50%; 100%])
had similar performance, although there is a slight
advantage for runs with high threshold values, which suggests
that it is better not to let the input rankings impose their
veto too easily. Also, tuning of the discordance threshold was
carried out for values of 50% and 75% of the veto threshold. For
these runs we did not observe any noticeable performance
variation, although for low discordance thresholds (dmax < 20%)
performance slightly decreased.
5.2.3 Impact of the Variation of the Number of Input
Rankings
To study how performance evolves when different sets of
input rankings are considered, we carried out three more runs
in which only the 2, 4, and 6 best performing
input rankings are fused. The results reported in Table 9
are seemingly counter-intuitive and do not support
previous findings in rank aggregation research [3].
Nevertheless, they show that low performing rankings
bring more noise than information to the construction of
the consensus ranking; therefore, when they are included,
performance decreases.
Table 9: Performance considering different best
performing sets of input rankings
Run Id MAP S@1 S@5 S@10
mcm (10) 18.47% 41.33% 81.33% 86.67%
mcm27 (6) 18.60% (+0.70%) 41.33% 80.00% 85.33%
mcm28 (4) 19.02% (+2.98%) 40.00% 86.67% 88.00%
mcm29 (2) 18.33% (-0.76%) 44.00% 76.00% 88.00%
5.2.4 Comparison of the Performance of Different
Rank Aggregation Methods
In this set of runs, we compare the outranking approach
with some standard rank aggregation methods which were
shown to perform well in previous studies:
two positional methods, the
CombSUM and CombMNZ strategies, and
one majoritarian method, the
Markov chain method (MC4). For the comparisons, we
considered a specific outranking relation
S* = S(5%, 50%, 50%, 30%),
which gives good overall performance when tuning all
the parameters.
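For reference, the two positional baselines can be sketched as follows. Since only ranks are available here, we convert a position to the score 1/position; this conversion and the function names are our own illustrative choices, so the snippet is a rough sketch rather than the exact implementations of combSUM [15] and combMNZ.

```python
# Rough sketch of the positional baselines, assuming only ranks are available.
# rankings: list of dicts mapping document id -> position (1 = best).
from collections import defaultdict

def comb_sum(rankings):
    scores = defaultdict(float)
    for ranking in rankings:
        for doc, position in ranking.items():
            scores[doc] += 1.0 / position
    return sorted(scores, key=scores.get, reverse=True)

def comb_mnz(rankings):
    scores, hits = defaultdict(float), defaultdict(int)
    for ranking in rankings:
        for doc, position in ranking.items():
            scores[doc] += 1.0 / position
            hits[doc] += 1
    # combMNZ multiplies the summed score by the number of rankings containing the document.
    return sorted(scores, key=lambda d: scores[d] * hits[d], reverse=True)
```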
The first row of Table 10 gives the performance of the rank
aggregation methods w.r.t. a basic assumption set A1 =
(H1_100, H2_5, H4_new): we only consider the first 100 documents
from each ranking, then retain documents present in 5 or
more rankings and update the ranks of successful documents.
For positional methods, we place missing documents at the
end of the ranking (H3_yes), whereas for our method as well
as for MC4 we retained hypothesis H3_no. The three
following rows of Table 10 report performance when changing
one element of the basic assumption set: the second row
corresponds to the assumption set A2 = (H1_1000, H2_5, H4_new),
i.e. changing the number of retained documents from 100
to 1000. The third row corresponds to the assumption set
A3 = (H1_100, H2_all, H4_new), i.e. considering the documents
present in at least one ranking. The fourth row corresponds
to the assumption set A4 = (H1_100, H2_5, H4_init), i.e. keeping
the original ranks of successful documents.
The fifth row of Table 10, labeled A5, gives performance
when all 225 queries of the TREC-2004 Web track are
considered. Obviously, the performance level cannot be
compared with the previous rows, since the additional queries are
different from the TD queries and correspond to other tasks
(the Home Page and Named Page tasks [10]) of the TREC-2004
Web track. This set of runs aims to show whether the relative
performance of the various methods is task-dependent.
The last row of Table 10, labeled A6, reports the performance
of the various methods on the TD task of
TREC-2002 instead of TREC-2004: we fused the input
rankings of the 10 best official runs for each of the 50 TD
queries [9] under the set of assumptions A1 of the first
row. This aims to show whether the relative performance of the
various methods changes from year to year.
Values between brackets in Table 10 are the performance
variations of each rank aggregation method w.r.t. the
performance of the outranking approach.
Table 10: Performance (MAP) of different rank
aggregation methods under 3 different test collections
mcm combSUM combMNZ markov
A1 18.79% 17.54% (-6.65%*) 17.08% (-9.10%*) 18.63% (-0.85%)
A2 21.36% 19.18% (-10.21%*) 18.61% (-12.87%*) 21.33% (-0.14%)
A3 21.92% 21.38% (-2.46%) 20.88% (-4.74%) 19.35% (-11.72%*)
A4 18.64% 17.58% (-5.69%*) 17.18% (-7.83%*) 18.63% (-0.05%)
A5 55.39% 52.16% (-5.83%*) 49.70% (-10.27%*) 53.30% (-3.77%)
A6 16.95% 15.65% (-7.67%*) 14.57% (-14.04%*) 16.39% (-3.30%)
From the analysis of Table 10, the following can be
established:
• for all the runs, considering all the documents in each
input ranking (A2) significantly improves performance
(MAP increases by 11.62% on average). This is
predictable since some initially unreported relevant
documents would receive better positions in the consensus
ranking.
• for all the runs, considering documents even those
present in only one input ranking (A3) significantly
improves performance. For mcm, combSUM and combMNZ,
performance improvement is more important (MAP
increases by 20.27% on average) than for the markov run
(MAP increases by 3.86%).
• preserving the initial positions of documents (A4) or
recomputing them (A1) does not have a noticeable
influence on performance for either positional or
majoritarian methods.
• considering all the queries of the Web track of
TREC-2004 (A5) as well as the TD queries of the Web track
of TREC-2002 (A6) does not alter the relative
performance of the different data fusion methods.
• considering the TD queries of the Web track of
TREC-2002, the performance of all the data fusion methods is
lower than that of the best performing input ranking,
whose MAP value equals 18.58%. This is because
most of the fused input rankings have very low
performance compared to the best one, which brings more
noise to the consensus ranking.
• the performance of the data fusion methods mcm and markov
is significantly better than that of the best input
ranking, uogWebCAU150. This remains true for runs
combSUM and combMNZ only under assumptions H1_all or
H2_all. This shows that majoritarian methods are less
sensitive to assumptions than positional methods.
• the outranking approach always performs significantly
better than the positional methods combSUM and combMNZ. It
also performs better than the Markov chain
method, especially under assumption H2_all, where the
difference in performance becomes significant.
6. CONCLUSIONS
In this paper, we address the rank aggregation problem
where different, but not disjoint, lists of documents are to
be fused. We noticed that the input rankings can hide ties,
so they should not be considered as complete orders. Only
robust information should be used from each input ranking.
Current rank aggregation methods, and especially
positional methods (e.g. combSUM [15]), are not initially
designed to work with such rankings. They should be adapted
by considering specific working assumptions.
We propose a new outranking method for rank
aggregation which is well adapted to the IR context. Indeed, it
compares two documents according to the intensity of their
position difference in each input ranking, and also considers the
number of input rankings that are concordant and
discordant in favor of a specific document. There is also no
need to make specific assumptions about the positions of
missing documents. This is an important feature, since the
absence of a document from a ranking should not
necessarily be interpreted negatively.
Experimental results show that the outranking method
significantly outperforms popular positional data
fusion methods such as the combSUM and combMNZ strategies. It
also outperforms a well performing majoritarian method,
the Markov chain method. These results hold
across different test collections and query sets. From the
experiments, we can also conclude that, in order to improve
performance, we should fuse the result lists of well performing
IR models, and that majoritarian data fusion methods
perform better than positional methods.
The proposed method can have a real impact on Web
metasearch performance, since only ranks are available from
most primary search engines, whereas most current
approaches need scores to merge result lists into a single
list.
Further work involves investigating whether the
outranking approach performs well in various other contexts, e.g.
using document scores or some combination of
document ranks and scores.
Acknowledgments
The authors would like to thank Jacques Savoy for his
valuable comments on a preliminary version of this paper.
7. REFERENCES
[1] A. Aronson, D. Demner-Fushman, S. Humphrey,
J. Lin, H. Liu, P. Ruch, M. Ruiz, L. Smith, L. Tanabe,
and W. Wilbur. Fusion of knowledge-intensive and
statistical approaches for retrieving and annotating
textual genomics documents. In Proceedings
TREC"2005. NIST Publication, 2005.
[2] R. A. Baeza-Yates and B. A. Ribeiro-Neto. Modern
Information Retrieval. ACM Press , 1999.
[3] B. T. Bartell, G. W. Cottrell, and R. K. Belew.
Automatic combination of multiple ranked retrieval
systems. In Proceedings ACM-SIGIR"94, pages
173-181. Springer-Verlag, 1994.
[4] N. J. Belkin, P. Kantor, E. A. Fox, and J. A. Shaw.
Combining evidence of multiple query representations
for information retrieval. IPM, 31(3):431-448, 1995.
[5] J. Borda. Mémoire sur les élections au scrutin.
Histoire de l'Académie des Sciences, 1781.
[6] J. P. Callan, Z. Lu, and W. B. Croft. Searching
distributed collections with inference networks. In
Proceedings ACM-SIGIR"95, pages 21-28, 1995.
[7] M. Condorcet. Essai sur l'application de l'analyse à la
probabilité des décisions rendues à la pluralité des
voix. Imprimerie Royale, Paris, 1785.
[8] W. D. Cook and M. Kress. Ordinal ranking with
intensity of preference. Management Science,
31(1):26-32, 1985.
[9] N. Craswell and D. Hawking. Overview of the
TREC-2002 Web Track. In Proceedings TREC"2002.
NIST Publication, 2002.
[10] N. Craswell and D. Hawking. Overview of the
TREC-2004 Web Track. In Proceedings of
TREC"2004. NIST Publication, 2004.
[11] C. Dwork, S. R. Kumar, M. Naor, and D. Sivakumar.
Rank aggregation methods for the Web. In
Proceedings WWW"2001, pages 613-622, 2001.
[12] R. Fagin. Combining fuzzy information from multiple
systems. JCSS, 58(1):83-99, 1999.
[13] R. Fagin, R. Kumar, M. Mahdian, D. Sivakumar, and
E. Vee. Comparing and aggregating rankings with
ties. In PODS, pages 47-58, 2004.
[14] R. Fagin, R. Kumar, and D. Sivakumar. Comparing
top k lists. SIAM J. on Discrete Mathematics,
17(1):134-160, 2003.
[15] E. A. Fox and J. A. Shaw. Combination of multiple
searches. In Proceedings of TREC"3. NIST
Publication, 1994.
[16] J. Katzer, M. McGill, J. Tessier, W. Frakes, and
P. DasGupta. A study of the overlap among document
representations. Information Technology: Research
and Development, 1(4):261-274, 1982.
[17] L. S. Larkey, M. E. Connell, and J. Callan. Collection
selection and results merging with topically organized
U.S. patents and TREC data. In Proceedings
ACM-CIKM"2000, pages 282-289. ACM Press, 2000.
[18] A. Le Calvé and J. Savoy. Database merging strategy
based on logistic regression. IPM, 36(3):341-359, 2000.
[19] J. H. Lee. Analyses of multiple evidence combination.
In Proceedings ACM-SIGIR"97, pages 267-276, 1997.
[20] D. Lillis, F. Toolan, R. Collier, and J. Dunnion.
Probfuse: a probabilistic approach to data fusion. In
Proceedings ACM-SIGIR"2006, pages 139-146. ACM
Press, 2006.
[21] J. I. Marden. Analyzing and Modeling Rank Data.
Number 64 in Monographs on Statistics and Applied
Probability. Chapman & Hall, 1995.
[22] M. Montague and J. A. Aslam. Metasearch
consistency. In Proceedings ACM-SIGIR"2001, pages
386-387. ACM Press, 2001.
[23] D. M. Pennock and E. Horvitz. Analysis of the
axiomatic foundations of collaborative filtering. In
Workshop on AI for Electronic Commerce at the 16th
National Conference on Artificial Intelligence, 1999.
[24] M. E. Renda and U. Straccia. Web metasearch: rank
vs. score based rank aggregation methods. In
Proceedings ACM-SAC"2003, pages 841-846. ACM
Press, 2003.
[25] W. H. Riker. Liberalism against populism. Waveland
Press, 1982.
[26] B. Roy. The outranking approach and the foundations
of ELECTRE methods. Theory and Decision,
31:49-73, 1991.
[27] B. Roy and J. Hugonnard. Ranking of suburban line
extension projects on the Paris metro system by a
multicriteria method. Transportation Research,
16A(4):301-312, 1982.
[28] L. Si and J. Callan. Using sampled data and regression
to merge search engine results. In Proceedings
ACM-SIGIR"2002, pages 19-26. ACM Press, 2002.
[29] M. Truchon. An extension of the Condorcet criterion
and Kemeny orders. Cahier 9813, Centre de Recherche
en Economie et Finance Appliquées, Oct. 1998.
[30] H. Turtle and W. B. Croft. Inference networks for
document retrieval. In Proceedings of ACM-SIGIR"90,
pages 1-24. ACM Press, 1990.
[31] C. C. Vogt and G. W. Cottrell. Fusion via a linear
combination of scores. Information Retrieval,
1(3):151-173, 1999. | rank aggregation;multiple criterion framework;metasearch engine;combsum and combmnz strategy;ir model;datum fusion;decision rule;multiple criterium approach;outrank method;information retrieval;datum fusion problem;outranking approach;majoritarian method |
train_H-52 | Vocabulary Independent Spoken Term Detection | We are interested in retrieving information from speech data like broadcast news, telephone conversations and roundtable meetings. Today, most systems use large vocabulary continuous speech recognition tools to produce word transcripts; the transcripts are indexed and query terms are retrieved from the index. However, query terms that are not part of the recognizer"s vocabulary cannot be retrieved, and the recall of the search is affected. In addition to the output word transcript, advanced systems provide also phonetic transcripts, against which query terms can be matched phonetically. Such phonetic transcripts suffer from lower accuracy and cannot be an alternative to word transcripts. We present a vocabulary independent system that can handle arbitrary queries, exploiting the information provided by having both word transcripts and phonetic transcripts. A speech recognizer generates word confusion networks and phonetic lattices. The transcripts are indexed for query processing and ranking purpose. The value of the proposed method is demonstrated by the relative high performance of our system, which received the highest overall ranking for US English speech data in the recent NIST Spoken Term Detection evaluation [1]. | 1. INTRODUCTION
The rapidly increasing amount of spoken data calls for
solutions to index and search this data.
The classical approach consists of converting the speech to
word transcripts using a large vocabulary continuous speech
recognition (LVCSR) tool. In the past decade, most of the
research efforts on spoken data retrieval have focused on
extending classical IR techniques to word transcripts. Some of
these works have been done in the framework of the NIST
TREC Spoken Document Retrieval tracks and are described
by Garofolo et al. [12]. These tracks focused on retrieval
from a corpus of broadcast news stories spoken by
professionals. One of the conclusions of those tracks was that
the effectiveness of retrieval mostly depends on the
accuracy of the transcripts. While the accuracy of automatic
speech recognition (ASR) systems depends on the scenario
and environment, state-of-the-art systems achieved better
than 90% accuracy in the transcription of such data. In 2000,
Garofolo et al. concluded that "Spoken document retrieval
is a solved problem" [12].
However, a significant drawback of such approaches is that
search on queries containing out-of-vocabulary (OOV) terms
will not return any results. OOV terms are missing words
from the ASR system vocabulary and are replaced in the
output transcript by alternatives that are probable, given
the recognition acoustic model and the language model. It
has been experimentally observed that over 10% of user
queries can contain OOV terms [16], as queries often
relate to named entities that typically have a poor coverage
in the ASR vocabulary. The effects of OOV query terms in
spoken data retrieval are discussed by Woodland et al. [28].
In many applications the OOV rate may get worse over time
unless the recognizer's vocabulary is periodically updated.
Another approach consists of converting the speech to
phonetic transcripts and representing the query as a
sequence of phones. The retrieval is based on searching the
sequence of phones representing the query in the phonetic
transcripts. The main drawback of this approach is the
inherent high error rate of the transcripts. Therefore, such
an approach cannot be an alternative to word transcripts,
especially for in-vocabulary (IV) query terms that are part of
the vocabulary of the ASR system.
A solution would be to combine the two different
approaches presented above: we index both word transcripts
and phonetic transcripts; during query processing, the
information is retrieved from the word index for IV terms and
from the phonetic index for OOV terms. We would also like to
be able to process hybrid queries, i.e., queries that
include both IV and OOV terms. Consequently, we need to
merge pieces of information retrieved from the word index and
the phonetic index. Proximity information on the occurrences
of the query terms is required for phrase search and for
proximity-based ranking. In classical IR, the index stores, for
each occurrence of a term, its offset. Therefore, we cannot
merge posting lists retrieved from the phonetic index with those
retrieved from the word index, since the offsets of the occurrences
retrieved from the two different indices are not comparable.
The only element of comparison between phonetic and word
transcripts is the timestamp. No previous work
combining the word and phonetic approaches has been done on phrase
search. We present a novel scheme for information retrieval
that consists of storing, during the indexing process, for each
unit of indexing (phone or word) its timestamp. We search
queries by merging the information retrieved from the two
different indices, word index and phonetic index, according
to the timestamps of the query terms. We analyze the
retrieval effectiveness of this approach on the NIST Spoken
Term Detection 2006 evaluation data [1].
The paper is organized as follows. We describe the audio
processing in Section 2. The indexing and retrieval methods
are presented in section 3. Experimental setup and results
are given in Section 4. In Section 5, we give an overview of
related work. Finally, we conclude in Section 6.
2. AUTOMATIC SPEECH RECOGNITION
SYSTEM
We use an ASR system for transcribing speech data. It
works in speaker-independent mode. For best recognition
results, a speaker-independent acoustic model and a
language model are trained in advance on data with similar
characteristics.
Typically, ASR generates lattices that can be considered
as directed acyclic graphs. Each vertex in a lattice is
associated with a timestamp and each edge (u, v) is labeled with
a word or phone hypothesis and its prior probability, which
is the probability of the signal delimited by the timestamps
of the vertices u and v, given the hypothesis. The 1-best
path transcript is obtained from the lattice using dynamic
programming techniques.
Mangu et al. [18] and Hakkani-Tur et al. [13] propose a
compact representation of a word lattice called word
confusion network (WCN). Each edge (u, v) is labeled with a word
hypothesis and its posterior probability, i.e., the probability
of the word given the signal. One of the main advantages
of WCN is that it also provides an alignment for all of the
words in the lattice. As explained in [13], the three main
steps for building a WCN from a word lattice are as follows:
1. Compute the posterior probabilities for all edges in the
word lattice.
2. Extract a path from the word lattice (which can be
the 1-best, the longest or any random path), and call
it the pivot path of the alignment.
3. Traverse the word lattice, and align all the transitions
with the pivot, merging the transitions that
correspond to the same word (or label) and occur in the
same time interval by summing their posterior
probabilities.
The 1-best path of a WCN is obtained from the path
containing the best hypotheses. As stated in [18], although
WCNs are more compact than word lattices, in general the
1-best path obtained from WCN has a better word accuracy
than the 1-best path obtained from the corresponding word
lattice.
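As a rough illustration of how a WCN can be consumed downstream, the snippet below extracts a 1-best transcript from a WCN represented as a list of time slots; the data layout (one list of (word, posterior, begin_time, duration) tuples per slot) is our own assumption, not the representation used by the authors.

```python
# Illustrative sketch: pick the highest-posterior hypothesis in every WCN slot.
# A WCN is assumed to be a list of slots, each slot a list of
# (word, posterior, begin_time, duration) tuples; this layout is hypothetical.
def wcn_one_best(wcn):
    best_path = []
    for slot in wcn:
        word, posterior, begin, duration = max(slot, key=lambda h: h[1])
        best_path.append((word, begin, duration))
    return best_path
```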
Typical structures of a lattice and a WCN are given in
Figure 1.
Figure 1: Typical structures of a lattice and a WCN.
3. RETRIEVAL MODEL
The main problem with retrieving information from
spoken data is the low accuracy of the transcription
particularly on terms of interest such as named entities and
content words. Generally, the accuracy of a word transcript
is characterized by its word error rate (WER). There are
three kinds of errors that can occur in a transcript:
substitution of a term that is part of the speech by another
term, deletion of a spoken term that is part of the speech
and insertion of a term that is not part of the speech.
Substitutions and deletions reflect the fact that an
occurrence of a term in the speech signal is not recognized;
these misses reduce the recall of the search. Substitutions and
insertions reflect the fact that a term which is not part of the
speech signal appears in the transcript; these false alarms
reduce the precision of the search.
Search recall can be enhanced by expanding the transcript
with extra words. These words can be taken from the other
alternatives provided by the WCN; these alternatives may
have been spoken but were not the top choice of the ASR.
Such an expansion tends to correct the substitutions and
the deletions and consequently, might improve recall but
will probably reduce precision. Using an appropriate
ranking model, we can avoid the decrease in precision. Mamou et
al. [17] have shown, in the context of spoken document retrieval,
that recall and MAP are improved by searching over WCNs
instead of considering only the 1-best path word transcript.
We have adapted this model of IV search to
term detection. In word transcripts, OOV terms are deleted
or substituted. Therefore, the usage of phonetic transcripts
is more desirable. However, due to their low accuracy, we
have preferred to use only the 1-best path extracted from the
phonetic lattices. We will show that the usage of phonetic
transcripts tends to improve the recall without affecting the
precision too much, using an appropriate ranking.
3.1 Spoken document detection task
As stated in the STD 2006 evaluation plan [2], the task
consists in finding all the exact matches of a specific query
in a given corpus of speech data. A query is a phrase
containing several words. The queries are text and not speech.
Note that this task is different from the more classical task of
spoken document retrieval. Manual transcripts of the speech
are not provided but are used by the evaluators to find true
occurrences. By definition, true occurrences of a query are
found automatically by searching the manual transcripts
using the following rule: the gap between adjacent words in
a query must be less than 0.5 seconds in the corresponding
speech. For evaluating the results, each system output
occurrence is judged as correct or not according to whether it
is close in time to a true occurrence of the query retrieved
from manual transcripts; it is judged as correct if the
midpoint of the system output occurrence is less than or equal
to 0.5 seconds from the time span of a true occurrence of
the query.
3.2 Indexing
We have used the same indexing process for WCN and
phonetic transcripts. Each occurrence of a unit of indexing
(word or phone) u in a transcript D is indexed with the
following information:
• the begin time t of the occurrence of u,
• the duration d of the occurrence of u.
In addition, for WCN indexing, we store
• the confidence level of the occurrence of u at the
time t that is evaluated by its posterior probability
Pr(u|t, D),
• the rank of the occurrence of u among the other
hypotheses beginning at the same time t, rank(u|t, D).
Note that since the task is to find exact matches of the
phrase queries, we have not filtered stopwords and the
corpus is not stemmed before indexing.
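A minimal sketch of what one posting might look like under this scheme is shown below; the container classes and field names are our own, chosen only to make the stored information explicit.

```python
# Hypothetical sketch of the per-occurrence information kept in the two indices.
from dataclasses import dataclass

@dataclass
class PhonePosting:
    transcript_id: str
    begin_time: float      # seconds
    duration: float        # seconds

@dataclass
class WordPosting(PhonePosting):
    posterior: float       # Pr(u | t, D), confidence of the hypothesis
    rank: int              # rank(u | t, D) among hypotheses starting at the same time

# The inverted index then maps each word (or phone) to a list of such postings.
```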
3.3 Search
In the following, we present our approach for
accomplishing the STD task using the indices described above. The
terms are extracted from the query. The vocabulary of the
ASR system that builds the word transcripts is known. Terms that
are part of this vocabulary are IV terms; the other terms
are OOV. For an IV query term, the posting list is extracted
from the word index. For an OOV query term, the term is
converted to a sequence of phones using a joint maximum
entropy N-gram model [10]. For example, the term prosody
is converted to the sequence of phones (p, r, aa, z, ih,
d, iy). The posting list of each phone is extracted from the
phonetic index.
The next step consists of merging the different posting
lists according to the timestamp of the occurrences in order
to create results matching the query. First, we check that
the words and phones appear in the right order according to
their begin times. Second, we check that the gap in time
between adjacent words and phones is reasonable.
Conforming to the requirements of the STD evaluation, the distance
in time between two adjacent query terms must be less than
0.5 seconds. For OOV search, we check that the distance
in time between two adjacent phones of a query term is less
than 0.2 seconds; this value has been determined empirically.
In this way, we reduce the effect of insertion errors,
since we allow insertions between adjacent words and
phones. Our query processing does not allow substitutions
and deletions.
Example: Let us consider the phrase query prosody
research. The term prosody is OOV and the term research
is IV. The term prosody is converted to the sequence of
phones (p, r, aa, z, ih, d, iy). The posting list of each
phone is extracted from the phonetic index. We merge the
posting lists of the phones such that the sequence of phones
appears in the right order and the gap in time between the
pairs of phones (p, r), (r, aa), (aa, z), (z, ih), (ih, d), (d, iy) is
less than 0.2 seconds. We obtain occurrences of the term
prosody. The posting list of research is extracted from the
word index and we merge it with the occurrences found for
prosody such that they appear in the right order and the
distance in time between prosody and research is less than
0.5 seconds.
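The sketch below illustrates this time-based merging for a single OOV term followed by an IV term, mirroring the prosody research example; the posting-list layout, function names, and the simple nested-loop strategy are our own simplifications rather than the authors' implementation.

```python
# Illustrative sketch of time-based merging (not the authors' implementation).
# Each posting is (begin_time, duration); lists are assumed sorted by begin_time.
MAX_PHONE_GAP = 0.2   # seconds between adjacent phones of an OOV term
MAX_TERM_GAP = 0.5    # seconds between adjacent query terms

def merge_phone_postings(phone_lists, max_gap=MAX_PHONE_GAP):
    """Return (begin, end) spans where the phones occur in order with small gaps."""
    spans = [(b, b + d) for b, d in phone_lists[0]]
    for postings in phone_lists[1:]:
        spans = [(start, b + d)
                 for start, end in spans
                 for b, d in postings
                 if 0 <= b - end < max_gap]
    return spans

def merge_with_word(term_spans, word_postings, max_gap=MAX_TERM_GAP):
    """Append an IV term occurrence after each OOV-term span within the allowed gap."""
    return [(start, b + d)
            for start, end in term_spans
            for b, d in word_postings
            if 0 <= b - end < max_gap]

# e.g. merge_with_word(merge_phone_postings(phone_lists_for_prosody), postings_for_research)
# where the two argument variables are hypothetical posting lists for the example query.
```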
Note that our indexing model allows searching for different
types of queries:
1. queries containing only IV terms using the word index.
2. queries containing only OOV terms using the phonetic
index.
3. keyword queries containing both IV and OOV terms
using the word index for IV terms and the phonetic
index for OOV terms; for query processing, the
different sets of matches are unified if the query terms have
OR semantics and intersected if the query terms have
AND semantics.
4. phrase queries containing both IV and OOV terms; for
query processing, the posting lists of the IV terms
retrieved from the word index are merged with the
posting lists of the OOV terms retrieved from the phonetic
index. The merging is possible since we have stored
the timestamps for each unit of indexing (word and
phone) in both indices.
The STD evaluation has focused on the fourth query type.
It is the hardest task since we need to combine posting lists
retrieved from phonetic and word indices.
3.4 Ranking
Since IV terms and OOV terms are retrieved from two
different indices, we propose two different functions for scoring
an occurrence of a term; afterward, an aggregate score is
assigned to the query based on the scores of the query terms.
Because the task is term detection, we do not use a
document frequency criterion for ranking the occurrences.
Let us consider a query Q = (k0, ..., kn), associated with
a boosting vector B = (B1, ..., Bj). This vector associates
a boosting factor to each rank of the different hypotheses;
the boosting factors are normalized between 0 and 1. If the
rank r is larger than j, we assume Br = 0.
3.4.1 In vocabulary term ranking
For IV term ranking, we extend the work of Mamou et
al. [17] on spoken document retrieval to term detection. We
use the information provided by the word index. We define
the score score(k, t, D) of a keyword k occurring at time t
in the transcript D by the following formula:

score(k, t, D) = B_rank(k|t,D) × Pr(k|t, D)
Note that 0 ≤ score(k, t, D) ≤ 1.
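In code, this scoring rule is a lookup of the boosting factor by hypothesis rank followed by a multiplication; the example boost vector below corresponds to the choice B_i = 1/i reported later in Section 4.1, and treating ranks as starting at 1 is our own assumption.

```python
# Sketch of the IV scoring rule; the boost vector shown is illustrative (B_i = 1/i).
def score_iv(posterior, rank, boost=(1.0, 0.5, 1/3, 0.25, 0.2)):
    b = boost[rank - 1] if rank <= len(boost) else 0.0   # B_r = 0 beyond the vector
    return b * posterior
```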
3.4.2 Out of vocabulary term ranking
For OOV term ranking, we use the information provided
by the phonetic index. We give a higher rank to occurrences
of OOV terms that contain phones close (in time) to each
other. We define a scoring function that is related to the
average gap in time between the different phones. Let us
consider a keyword k converted to the sequence of phones
(p^k_0, ..., p^k_l). We define the normalized score score(k, t^k_0, D)
of a keyword k = (p^k_0, ..., p^k_l), where each p^k_i occurs at time
t^k_i with a duration of d^k_i in the transcript D, by the following
formula:

score(k, t^k_0, D) = 1 − ( Σ_{i=1}^{l} 5 × (t^k_i − (t^k_{i−1} + d^k_{i−1})) ) / l

Note that, according to what we have explained in
Section 3.3, we have, for all 1 ≤ i ≤ l,
0 < t^k_i − (t^k_{i−1} + d^k_{i−1}) < 0.2 sec and
0 < 5 × (t^k_i − (t^k_{i−1} + d^k_{i−1})) < 1, and consequently
0 < score(k, t^k_0, D) ≤ 1. The duration of the keyword
occurrence is t^k_l − t^k_0 + d^k_l.
Example: let us consider the sequence (p, r, aa, z,
ih, d, iy) and two different occurrences of the sequence.
For each phone, we give the begin time and the duration in
second.
Occurrence 1: (p, 0.25, 0.01), (r, 0.36, 0.01), (aa, 0.37, 0.01),
(z, 0.38, 0.01), (ih, 0.39, 0.01), (d, 0.4, 0.01), (iy, 0.52, 0.01).
Occurrence 2: (p, 0.45, 0.01), (r, 0.46, 0.01), (aa, 0.47, 0.01),
(z, 0.48, 0.01), (ih, 0.49, 0.01), (d, 0.5, 0.01), (iy, 0.51, 0.01).
According to our formula, the score of the first occurrence
is 0.83 and the score of the second occurrence is 1. In the
first occurrence, there is probably some insertion or silence
between the phones p and r, and between the phones d and iy.
The silence can be due to the fact that the phones belong
to two different words and that, therefore, this is not an
occurrence of the term prosody.
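For concreteness, the sketch below recomputes these two scores directly from the formula; the phone tuples are copied from the example above and the function name is our own.

```python
# Sketch: normalized OOV score from the time gaps between adjacent phones.
def score_oov(phones):
    """phones: list of (label, begin_time, duration) for one candidate occurrence."""
    l = len(phones) - 1
    gaps = [phones[i][1] - (phones[i - 1][1] + phones[i - 1][2]) for i in range(1, l + 1)]
    return 1.0 - sum(5 * g for g in gaps) / l

occ1 = [("p", 0.25, 0.01), ("r", 0.36, 0.01), ("aa", 0.37, 0.01), ("z", 0.38, 0.01),
        ("ih", 0.39, 0.01), ("d", 0.40, 0.01), ("iy", 0.52, 0.01)]
occ2 = [("p", 0.45, 0.01), ("r", 0.46, 0.01), ("aa", 0.47, 0.01), ("z", 0.48, 0.01),
        ("ih", 0.49, 0.01), ("d", 0.50, 0.01), ("iy", 0.51, 0.01)]

print(score_oov(occ1), score_oov(occ2))   # ≈ 0.825 and 1.0, matching the 0.83 and 1 above
```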
3.4.3 Combination
The score of an occurrence of a query Q at time t_0 in the
document D is determined by multiplying the scores of the
keywords k_i, where each k_i occurs at time t_i with a
duration d_i in the transcript D:

score(Q, t_0, D) = Π_{i=0}^{n} score(k_i, t_i, D)^{γ_n}

Note that, according to what we have explained in
Section 3.3, we have, for all 1 ≤ i ≤ n, 0 < t_i − (t_{i−1} + d_{i−1}) < 0.5 sec.
Our goal is to estimate, for each found occurrence, how
likely it is that the query appears there. This is different from
classical IR, which aims to rank the results rather than score
them. Since the probability of a false alarm is inversely
proportional to the length of the phrase query, we boost the score
of a query by an exponent γ_n that is related to the number
of keywords in the phrase. We have determined empirically
the value γ_n = 1/n.
The begin time of the query occurrence is given by
the begin time t_0 of the first query term, and its duration
by t_n − t_0 + d_n.
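A direct transcription of this combination step is sketched below; treating the query as a list of per-keyword scores (already computed by the IV or OOV scorer) is our own framing, and the handling of single-term queries is an assumption since γ_n = 1/n is undefined for n = 0.

```python
# Sketch of the query-level score: product of keyword scores, each raised to gamma_n.
def score_query(keyword_scores):
    n = len(keyword_scores) - 1          # query Q = (k_0, ..., k_n)
    gamma = 1.0 / n if n > 0 else 1.0    # gamma_n = 1/n; single-term queries left unboosted (our assumption)
    result = 1.0
    for s in keyword_scores:
        result *= s ** gamma
    return result
```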
4. EXPERIMENTS
4.1 Experimental setup
Our corpus consists of the evaluation set provided by NIST
for the STD 2006 evaluation [1]. It includes three
different source types in US English: three hours of broadcast
news (BNEWS), three hours of conversational telephony
speech (CTS) and two hours of conference room meetings
(CONFMTG). As shown in Section 4.2, these different
collections have different accuracies. CTS and CONFMTG are
spontaneous speech. For the experiments, we have processed
the query set provided by NIST that includes 1100 queries.
Each query is a phrase containing between one and five terms,
common and rare terms, terms that are in the manual
transcripts and terms that are not. Testing and the determination
of empirical values were carried out on another set of
speech data and queries, the development set, also provided
by NIST.
We have used the IBM research prototype ASR system,
described in [26], for transcribing speech data. We have
produced WCNs for the three different source types. 1-best
phonetic transcripts were generated only for BNEWS and
CTS, since CONFMTG phonetic transcripts have too low
accuracy. We have adapted Juru [7], a full-text search
library written in Java, to index the transcripts and to store
the timestamps of the words and phones; search results have
been retrieved as described in Section 3.
For each found occurrence of the given query, our system
outputs: the location of the term in the audio recording
(begin time and duration), the score indicating how likely
the occurrence of the query is (as defined in Section 3.4), and a
hard (binary) decision as to whether the detection is
correct. We measure precision and recall by comparing the
results obtained over the automatic transcripts (only the
results having true hard decision) to the results obtained over
the reference manual transcripts. Our aim is to evaluate the
ability of the suggested retrieval approach to handle
transcribed speech data. Thus, the closer the automatic results
are to the manual results, the better the search effectiveness
over the automatic transcripts will be. The results returned
from the manual transcription for a given query are
considered relevant and are expected to be retrieved with highest
scores. This approach for measuring search effectiveness
using manual data as a reference is very common in speech
retrieval research [25, 22, 8, 9, 17].
Besides recall and precision, we use the evaluation
measures defined by NIST for the 2006 STD evaluation [2]:
the Actual Term-Weighted Value (ATWV) and the
Maximum Term-Weighted Value (MTWV). The term-weighted
value (TWV) is computed by first computing the miss and
false alarm probabilities for each query separately, then
using these and an (arbitrarily chosen) prior probability to
compute query-specific values, and finally averaging these
query-specific values over all queries q to produce an overall
system value:
TWV(θ) = 1 − average_q{P_miss(q, θ) + β × P_FA(q, θ)}

where β = (C/V) × (Pr_q^{−1} − 1) and θ is the detection threshold. For
the evaluation, the cost/value ratio C/V has been
set to 0.1 and the prior probability of a query Pr_q to
10^{−4}. Therefore, β = 999.9.
Miss and false alarm probabilities for a given query q are
functions of θ:
P_miss(q, θ) = 1 − N_correct(q, θ) / N_true(q)

P_FA(q, θ) = N_spurious(q, θ) / N_NT(q)
corpus WER(%) SUBR(%) DELR(%) INSR(%)
BNEWS WCN 12.7 49 42 9
CTS WCN 19.6 51 38 11
CONFMTG WCN 47.4 47 49 3
Table 1: WER and distribution of the error types over word 1-best path extracted from WCNs for the
different source types.
where:
• Ncorrect(q, θ) is the number of correct detections
(retrieved by the system) of the query q with a score
greater than or equal to θ.
• Nspurious(q, θ) is the number of spurious detections of
the query q with a score greater than or equal to θ.
• Ntrue(q) is the number of true occurrences of the query
q in the corpus.
• N_NT(q) is the number of opportunities for incorrect
detection of the query q in the corpus, i.e., the number of
non-target query trials. It is defined by the formula
N_NT(q) = T_speech − N_true(q), where T_speech
is the total amount of speech in the collection (in
seconds).
ATWV is the actual term-weighted value; it is the
detection value attained by the system as a result of the system
output and the binary decision output for each putative
occurrence. It ranges from −∞ to +1. MTWV is the
maximum term-weighted value over the range of all possible
values of θ. It ranges from 0 to +1.
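The evaluation metrics above can be sketched as follows; the sweep over candidate thresholds and the data layout (per-query probabilities as callables of θ) are our own simplifications of what the NIST STDEval tool computes.

```python
# Sketch of TWV and MTWV; p_miss and p_fa are callables giving the per-query
# probabilities at a threshold theta, and 999.9 is the beta used in this evaluation.
def twv(theta, queries, p_miss, p_fa, beta=999.9):
    return 1.0 - sum(p_miss(q, theta) + beta * p_fa(q, theta) for q in queries) / len(queries)

def mtwv(candidate_thetas, queries, p_miss, p_fa, beta=999.9):
    return max(twv(t, queries, p_miss, p_fa, beta) for t in candidate_thetas)
```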
We have also provided the detection error tradeoff (DET)
curve [19] of miss probability (P_miss) vs. false alarm
probability (P_FA).
We have used the STDEval tool to extract the relevant
results from the manual transcripts and to compute ATWV,
MTWV and the DET curve.
We have determined empirically the following values for
the boosting vector defined in Section 3.4: B_i = 1/i.
4.2 WER analysis
We use the word error rate (WER) in order to characterize
the accuracy of the transcripts. WER is defined as follows:
WER = (S + D + I) / N × 100
where N is the total number of words in the corpus, and
S, I, and D are the total number of substitution, insertion,
and deletion errors, respectively. The substitution error rate
(SUBR) is defined by
SUBR = S / (S + D + I) × 100.
Deletion error rate (DELR) and insertion error rate (INSR)
are defined in a similar manner.
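These rates translate directly into code; the function below assumes the three error counts have already been obtained from an alignment of the automatic and manual transcripts (the alignment itself is not shown).

```python
# Sketch: word error rate and the breakdown of error types, given alignment counts.
def error_rates(n_words, substitutions, deletions, insertions):
    errors = substitutions + deletions + insertions
    return {
        "WER": 100.0 * errors / n_words,
        "SUBR": 100.0 * substitutions / errors,
        "DELR": 100.0 * deletions / errors,
        "INSR": 100.0 * insertions / errors,
    }
```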
Table 1 gives the WER and the distribution of the error
types over 1-best path transcripts extracted from WCNs.
The WER of the 1-best path phonetic transcripts is
approximately twice the WER of the word transcripts.
That is the reason why we have not retrieved from phonetic
transcripts for the CONFMTG speech data.
4.3 Theta threshold
We have empirically determined a detection threshold θ
per source type; the hard decision of the occurrences
having a score less than θ is set to false. Such occurrences
are not considered as retrieved and are therefore not used
for computing ATWV, precision and recall.
The value of the threshold θ per source type is reported in
Table 2; it is correlated with the accuracy of the transcripts.
Basically, setting a threshold aims to eliminate false alarms
from the retrieved occurrences without adding misses.
The higher the WER, the higher the θ threshold should
be.
BNEWS CTS CONFMTG
0.4 0.61 0.91
Table 2: Values of the θ threshold per source type.
4.4 Processing resource profile
We report in Table 3 the processing resource profile.
Concerning the index size, note that our index is compressed
using IR index compression techniques. The indexing time
includes both audio processing (generation of word and
phonetic transcripts) and building of the searchable indices.
Index size 0.3267 MB/HS
Indexing time 7.5627 HP/HS
Index Memory Usage 1653.4297 MB
Search speed 0.0041 sec.P/HS
Search Memory Usage 269.1250 MB
Table 3: Processing resource profile. (HS: Hours of
Speech. HP: Processing Hours. sec.P: Processing
seconds)
4.5 Retrieval measures
We compare our approach (WCN phonetic) presented in
Section 4.1 with another approach (1-best-WCN phonetic).
The only difference between these two approaches is that,
in 1-best-WCN phonetic, we index only the 1-best path
extracted from the WCN instead of indexing the whole WCN.
WCN phonetic was our primary system for the evaluation
and 1-best-WCN phonetic was one of our contrastive
systems. Average precision and recall, MTWV and ATWV on
the 1100 queries are given in Table 4. We also provide the
DET curve for the WCN phonetic approach in Figure 2. The
point that maximizes the TWV, i.e. the MTWV, is indicated on
each curve. Note that retrieval performance has been
evaluated separately for each source type, since the accuracy of
the transcripts differs per source type, as shown in Section 4.2.
As expected, we can see that MTWV and ATWV decrease
as the WER increases. The retrieval performance is improved when
measure BNEWS CTS CONFMTG
WCN phonetic ATWV 0.8485 0.7392 0.2365
MTWV 0.8532 0.7408 0.2508
precision 0.94 0.90 0.65
recall 0.89 0.81 0.37
1-best-WCN phonetic ATWV 0.8279 0.7102 0.2381
MTWV 0.8319 0.7117 0.2512
precision 0.95 0.91 0.66
recall 0.84 0.75 0.37
Table 4: ATWV, MTWV, precision and recall per source type.
Figure 2: DET curve for WCN phonetic approach.
using WCNs relative to the 1-best path. This is due to the fact
that the miss probability is improved by indexing all the
hypotheses provided by the WCNs. This observation confirms
the results shown by Mamou et al. [17] in the context of spoken
document retrieval. The ATWV that we have obtained
is close to the MTWV; we have combined our ranking model
with an appropriate threshold θ to eliminate results with low
scores. Therefore, the effect of the false alarms added by WCNs
is reduced.
WCN phonetic approach was used in the recent NIST STD
evaluation and received the highest overall ranking among
eleven participants. For comparison, the system that ranked
in third place obtained an ATWV of 0.8238 for BNEWS,
0.6652 for CTS and 0.1103 for CONFMTG.
4.6 Influence of the duration of the query on
the retrieval performance
We have analysed the retrieval performance according to
the average duration of the occurrences in the manual
transcripts. The query set was divided into three quantiles
according to this duration; Table 5 reports ATWV and MTWV
for each quantile. We can see
that we performed better on longer queries. One of the
reasons is the fact that the ASR system is more accurate on
long words. Hence, it was justified to boost the score of the
results with the exponent γn, as explained in Section 3.4.3,
according to the length of the query.
quantile 0-33 33-66 66-100
BNEWS ATWV 0.7655 0.8794 0.9088
MTWV 0.7819 0.8914 0.9124
CTS ATWV 0.6545 0.8308 0.8378
MTWV 0.6551 0.8727 0.8479
CONFMTG ATWV 0.1677 0.3493 0.3651
MTWV 0.1955 0.4109 0.3880
Table 5: ATWV, MTWV according to the duration
of the query occurrences per source type.
4.7 OOV vs. IV query processing
We have randomly chosen three sets of queries from the
query sets provided by NIST: 50 queries containing only IV
terms; 50 queries containing only OOV terms; and 50 hybrid
queries containing both IV and OOV terms. The following
experiment has been carried out on the BNEWS collection, and
IV and OOV terms have been determined according to the
vocabulary of the BNEWS ASR system.
We compare three different retrieval approaches: using
only the word index; using only the phonetic index; and
combining the word and phonetic indices. Table 6 summarizes
the retrieval performance of each approach for each type of
query. Using a word-based approach for OOV and hybrid
queries drastically affects retrieval performance; precision
and recall are null. Using a phone-based approach for IV
queries also degrades retrieval performance relative to the
word-based approach.
As expected, the approach combining word and phonetic
indices presented in Section 3 leads to the same retrieval
performance as the word approach for IV queries and to
the same retrieval performance as the phonetic approach for
OOV queries. This approach always outperforms the others
and it justifies the fact that we need to combine word and
phonetic search.
5. RELATED WORK
In the past decade, the research efforts on spoken data
retrieval have focused on extending classical IR techniques
to spoken documents. Some of these works have been done
in the context of the TREC Spoken Document Retrieval
evaluations and are described by Garofolo et al. [12]. An
LVCSR system is used to transcribe the speech into 1-best
path word transcripts. The transcripts are indexed as clean
text: for each occurrence, its document, its word offset and
additional information are stored in the index. A generic IR
system over the text is used for word spotting and search
as described by Brown et al. [6] and James [14].
index word phonetic word and phonetic
precision recall precision recall precision recall
IV queries 0.8 0.96 0.11 0.77 0.8 0.96
OOV queries 0 0 0.13 0.79 0.13 0.79
hybrid queries 0 0 0.15 0.71 0.89 0.83
Table 6: Comparison of word and phonetic approach on IV and OOV queries
This strategy works well for transcripts like broadcast news collections
that have a low WER (in the range of 15%-30%) and are
redundant by nature (the same piece of information is
spoken several times in different manners). Moreover, the
algorithms have been mostly tested over long queries stated in
plain English and retrieval for such queries is more robust
against speech recognition errors.
An alternative approach consists of using word lattices in
order to improve the effectiveness of SDR. Singhal et al. [24,
25] propose to add some terms to the transcript in order
to alleviate the retrieval failures due to ASR errors. From
an IR perspective, a classical way to bring new terms is
document expansion using a similar corpus. Their approach
consists of using word lattices to determine which
words returned by a document expansion algorithm should
be added to the original transcript. The necessity of using a
document expansion algorithm was justified by the fact that
the word lattices they worked with lack information about
word probabilities.
Chelba and Acero in [8, 9] propose a more compact word
lattice, the position specific posterior lattice (PSPL). This
data structure is similar to WCN and leads to a more
compact index. The offset of the terms in the speech documents
is also stored in the index. However, the evaluation
framework is carried out on lectures that are relatively planned,
in contrast to conversational speech. Their ranking model
is based on the term confidence level but does not take into
consideration the rank of the term among the other
hypotheses. Mamou et al. [17] propose a model for spoken document
retrieval using WCNs in order to improve the recall and the
MAP of the search. However, in the above works, the
problem of queries containing OOV terms is not addressed.
Popular approaches to deal with OOV queries are based
on sub-word transcripts, where the sub-words are typically
phones, syllables or word fragments (sequences of phones)
[11, 20, 23]. The classical approach consists of using
phonetic transcripts. The transcripts are indexed in the same
manner as words, using classical text retrieval techniques;
during query processing, the query is represented as a
sequence of phones. The retrieval is based on searching the
string of phones representing the query in the phonetic
transcript. To account for the high recognition error rates, some
other systems use richer transcripts like phonetic lattices.
They are attractive as they accommodate high error rate
conditions as well as allow for OOV queries to be used [15,
3, 20, 23, 21, 27]. However, phonetic lattices contain many
edges that overlap in time with the same phonetic label, and
are difficult to index. Moreover, although they improve
the recall of the search, the precision is affected, since
phonetic lattices are often inaccurate. Consequently, phonetic
approaches should be used only for OOV search; for
queries that also contain IV terms, this technique degrades
retrieval performance in comparison to the word-based
approach.
Saraclar and Sproat in [22] show improvement in word
spotting accuracy for both IV and OOV queries, using
phonetic and word lattices, where a confidence measure of a
word or a phone can be derived. They propose three
different retrieval strategies: search both the word and the
phonetic indices and unify the two different sets of results;
search the word index for IV queries, search the phonetic
index for OOV queries; search the word index and if no result
is returned, search the phonetic index. However, no strategy
is proposed to deal with phrase queries containing both IV
and OOV terms. Amir et al. in [5, 4] propose to merge a
word approach with a phonetic approach in the context of
video retrieval. However, the phonetic transcript is obtained
from a text to phonetic conversion of the 1-best path of the
word transcript and is not based on a phonetic decoding of
the speech data.
An important issue when looking at the
state of the art in spoken data retrieval is the lack of a
common test set and appropriate query terms. This paper
uses such a task, and the STD evaluation provides a good summary
of the performance of different approaches under the same test
conditions.
6. CONCLUSIONS
This work studies how vocabulary independent spoken
term detection can be performed efficiently over different
data sources. Previously, phonetic-based and word-based
approaches have been used for IR on speech data. The
former suffers from low accuracy and the latter from the limited
vocabulary of the recognition system. In this paper, we have
presented a vocabulary independent model of indexing and
search that combines both approaches. The system can
deal with all kinds of queries, although phrase queries need
to combine, for retrieval, information extracted from two
different indices: a word index and a phonetic index. The
scoring of OOV terms is based on the proximity (in time)
between the different phones. The scoring of IV terms is based
on information provided by the WCNs. We have shown an
improvement in retrieval performance when using the whole
WCN rather than only the 1-best path, and when using the phonetic
index to search for OOV query terms. This approach always
outperforms the approaches that use only the word index or the
phonetic index.
As future work, we will compare our model for OOV
search on phonetic transcripts with a retrieval model based
on the edit distance.
7. ACKNOWLEDGEMENTS
Jonathan Mamou is grateful to David Carmel and Ron
Hoory for helpful and interesting discussions.
8. REFERENCES
[1] NIST Spoken Term Detection 2006 Evaluation
Website, http://www.nist.gov/speech/tests/std/.
[2] NIST Spoken Term Detection (STD) 2006 Evaluation
Plan,
http://www.nist.gov/speech/tests/std/docs/std06-evalplan-v10.pdf.
[3] C. Allauzen, M. Mohri, and M. Saraclar. General
indexation of weighted automata - application to
spoken utterance retrieval. In Proceedings of the
HLT-NAACL 2004 Workshop on Interdiciplinary
Approaches to Speech Indexing and Retrieval, Boston,
MA, USA, 2004.
[4] A. Amir, M. Berg, and H. Permuter. Mutual relevance
feedback for multimodal query formulation in video
retrieval. In MIR "05: Proceedings of the 7th ACM
SIGMM international workshop on Multimedia
information retrieval, pages 17-24, New York, NY,
USA, 2005. ACM Press.
[5] A. Amir, A. Efrat, and S. Srinivasan. Advances in
phonetic word spotting. In CIKM "01: Proceedings of
the tenth international conference on Information and
knowledge management, pages 580-582, New York,
NY, USA, 2001. ACM Press.
[6] M. Brown, J. Foote, G. Jones, K. Jones, and S. Young.
Open-vocabulary speech indexing for voice and video
mail retrieval. In Proceedings ACM Multimedia 96,
pages 307-316, Hong-Kong, November 1996.
[7] D. Carmel, E. Amitay, M. Herscovici, Y. S. Maarek,
Y. Petruschka, and A. Soffer. Juru at TREC
10 - Experiments with Index Pruning. In Proceedings of the
Tenth Text Retrieval Conference (TREC-10). National
Institute of Standards and Technology. NIST, 2001.
[8] C. Chelba and A. Acero. Indexing uncertainty for
spoken document search. In Interspeech 2005, pages
61-64, Lisbon, Portugal, 2005.
[9] C. Chelba and A. Acero. Position specific posterior
lattices for indexing speech. In Proceedings of the 43rd
Annual Conference of the Association for
Computational Linguistics (ACL), Ann Arbor, MI,
2005.
[10] S. Chen. Conditional and joint models for
grapheme-to-phoneme conversion. In Eurospeech 2003,
Geneva, Switzerland, 2003.
[11] M. Clements, S. Robertson, and M. Miller. Phonetic
searching applied to on-line distance learning modules.
In Digital Signal Processing Workshop, 2002 and the
2nd Signal Processing Education Workshop.
Proceedings of 2002 IEEE 10th, pages 186-191, 2002.
[12] J. Garofolo, G. Auzanne, and E. Voorhees. The TREC
spoken document retrieval track: A success story. In
Proceedings of the Ninth Text Retrieval Conference
(TREC-9). National Institute of Standards and
Technology. NIST, 2000.
[13] D. Hakkani-Tur and G. Riccardi. A general algorithm
for word graph matrix decomposition. In Proceedings
of the IEEE Internation Conference on Acoustics,
Speech and Signal Processing (ICASSP), pages
596-599, Hong-Kong, 2003.
[14] D. James. The application of classical information
retrieval techniques to spoken documents. PhD thesis,
University of Cambridge, Downing College, 1995.
[15] D. A. James. A system for unrestricted topic retrieval
from radio news broadcasts. In Proc. ICASSP "96,
pages 279-282, Atlanta, GA, 1996.
[16] B. Logan, P. Moreno, J. V. Thong, and E. Whittaker.
An experimental study of an audio indexing system
for the web. In Proceedings of ICSLP, 1996.
[17] J. Mamou, D. Carmel, and R. Hoory. Spoken
document retrieval from call-center conversations. In
SIGIR "06: Proceedings of the 29th annual
international ACM SIGIR conference on Research and
development in information retrieval, pages 51-58,
New York, NY, USA, 2006. ACM Press.
[18] L. Mangu, E. Brill, and A. Stolcke. Finding consensus
in speech recognition: word error minimization and
other applications of confusion networks. Computer
Speech and Language, 14(4):373-400, 2000.
[19] A. Martin, G. Doddington, T. Kamm, M. Ordowski,
and M. Przybocki. The DET curve in assessment of
detection task performance. In Proc. Eurospeech "97,
pages 1895-1898, Rhodes, Greece, 1997.
[20] K. Ng and V. W. Zue. Subword-based approaches for
spoken document retrieval. Speech Commun.,
32(3):157-186, 2000.
[21] Y. Peng and F. Seide. Fast two-stage
vocabulary-independent search in spontaneous speech.
In Acoustics, Speech, and Signal Processing.
Proceedings. (ICASSP). IEEE International
Conference, volume 1, pages 481-484, 2005.
[22] M. Saraclar and R. Sproat. Lattice-based search for
spoken utterance retrieval. In HLT-NAACL 2004:
Main Proceedings, pages 129-136, Boston,
Massachusetts, USA, 2004.
[23] F. Seide, P. Yu, C. Ma, and E. Chang.
Vocabulary-independent search in spontaneous speech.
In ICASSP-2004, IEEE International Conference on
Acoustics, Speech, and Signal Processing, 2004.
[24] A. Singhal, J. Choi, D. Hindle, D. Lewis, and
F. Pereira. AT&T at TREC-7. In Proceedings of the
Seventh Text Retrieval Conference (TREC-7).
National Institute of Standards and Technology.
NIST, 1999.
[25] A. Singhal and F. Pereira. Document expansion for
speech retrieval. In SIGIR "99: Proceedings of the
22nd annual international ACM SIGIR conference on
research and development in information retrieval,
pages 34-41, New York, NY, USA, 1999. ACM Press.
[26] H. Soltau, B. Kingsbury, L. Mangu, D. Povey,
G. Saon, and G. Zweig. The IBM 2004 conversational
telephony system for rich transcription. In Proceedings
of the IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP), March 2005.
[27] K. Thambiratnam and S. Sridharan. Dynamic match
phone-lattice searches for very fast and accurate
unrestricted vocabulary keyword spotting. In
Acoustics, Speech, and Signal Processing. Proceedings.
(ICASSP). IEEE International Conference, 2005.
[28] P. C. Woodland, S. E. Johnson, P. Jourlin, and K. S.
Jones. Effects of out of vocabulary words in spoken
document retrieval (poster session). In SIGIR "00:
Proceedings of the 23rd annual international ACM
SIGIR conference on Research and development in
information retrieval, pages 372-374, New York, NY,
USA, 2000. ACM Press. | vocabulary independent system;out-of-vocabulary;vocabulary;indexing timestamp;phonetic index;oov search;speech retrieval;speak term detection;speech datum retrieval;index merging;automatic speech recognition;speech recognizer;word index;phonetic transcript;spoken term detection |
train_H-53 | Context Sensitive Stemming for Web Search | Traditionally, stemming has been applied to Information Retrieval tasks by transforming words in documents to their root form before indexing, and applying a similar transformation to query terms. Although it increases recall, this naive strategy does not work well for Web Search since it lowers precision and requires a significant amount of additional computation. In this paper, we propose a context sensitive stemming method that addresses these two issues. Two unique properties make our approach feasible for Web Search. First, based on statistical language modeling, we perform context sensitive analysis on the query side. We accurately predict which of its morphological variants is useful to expand a query term with before submitting the query to the search engine. This dramatically reduces the number of bad expansions, which in turn reduces the cost of additional computation and improves the precision at the same time. Second, our approach performs a context sensitive document matching for those expanded variants. This conservative strategy serves as a safeguard against spurious stemming, and it turns out to be very important for improving precision. Using word pluralization handling as an example of our stemming approach, our experiments on a major Web search engine show that by stemming only 29% of the query traffic, we can improve relevance as measured by average Discounted Cumulative Gain (DCG5) by 6.1% on these queries and 1.8% over all query traffic. | 1. INTRODUCTION
Web search has now become a major tool in our daily lives
for information seeking. One of the important issues in Web
search is that user queries are often not best formulated to
get optimal results. For example, running shoe is a query
that occurs frequently in query logs. However, the query
running shoes is much more likely to give better search
results than the original query because documents matching
the intent of this query usually contain the words running
shoes.
Correctly formulating a query requires the user to
accurately predict which word form is used in the documents
that best satisfy his or her information needs. This is
difficult even for experienced users, and especially difficult for
non-native speakers. One traditional solution is to use
stemming [16, 18], the process of transforming inflected or
derived words to their root form so that a search term will
match and retrieve documents containing all forms of the
term. Thus, the word run will match running, ran,
runs, and shoe will match shoes and shoeing.
Stemming can be done either on the terms in a document
during indexing (and applying the same transformation to the
query terms during query processing) or by expanding the
query with the variants during query processing. Stemming
during indexing allows very little flexibility during query
processing, while stemming by query expansion allows
handling each query differently, and hence is preferred.
Although traditional stemming increases recall by
matching word variants [13], it can reduce precision by retrieving
too many documents that have been incorrectly matched.
When examining the results of applying stemming to a large
number of queries, one usually finds that nearly equal
numbers of queries are helped and hurt by the technique [6]. In
addition, it reduces system performance because the search
engine has to match all the word variants. As we will show
in the experiments, this is true even if we simplify stemming
to pluralization handling, which is the process of converting
a word from its plural to singular form, or vice versa. Thus,
one needs to be very cautious when using stemming in Web
search engines.
One problem of traditional stemming is its blind
transformation of all query terms, that is, it always performs
the same transformation for the same query word without
considering the context of the word. For example, the word
book has four forms book, books, booking, booked, and
store has four forms store, stores, storing, stored. For
the query book store, expanding both words to all of their
variants significantly increases computation cost and hurts
precision, since not all of the variants are useful for this
query. Transforming book store to match book stores
is fine, but matching book storing or booking store is
not. A weighting method that gives variant words smaller
weights alleviates the problems to a certain extent if the
weights accurately reflect the importance of the variant in
this particular query. However, uniform weighting is not
going to work, and query dependent weighting is still a
challenging unsolved problem [20].
A second problem of traditional stemming is its blind
matching of all occurrences in documents. For the query
book store, a transformation that allows the variant stores
to be matched will cause every occurrence of stores in the
document to be treated equivalent to the query term store.
Thus, a document containing the fragment reading a book
in coffee stores will be matched, causing many wrong
documents to be selected. Although we hope the ranking
function can correctly handle these, with many more candidates
to rank, the risk of making mistakes increases.
To alleviate these two problems, we propose a context
sensitive stemming approach for Web search. Our solution
consists of two context sensitive analyses, one on the query
side and the other on the document side. On the query side,
we propose a statistical language modeling based approach
to predict which word variants are better forms than the
original word for search purpose and expanding the query
with only those forms. On the document side, we propose a
conservative context sensitive matching for the transformed
word variants, only matching document occurrences in the
context of other terms in the query. Our model is simple yet
effective and efficient, making it feasible to be used in real
commercial Web search engines.
We use pluralization handling as a running example for
our stemming approach. The motivation for using
pluralization handling as an example is to show that even such
simple stemming, if handled correctly, can give significant
benefits to search relevance. As far as we know, no
previous research has systematically investigated the usage of
pluralization in Web search. We should point out that the
method we propose is not limited to pluralization handling;
it is a general stemming technique, and can also be applied
to general query expansion. Experiments on general
stemming yield additional significant improvements over
pluralization handling for long queries, although details will not
be reported in this paper.
In the rest of the paper, we first present the related work
and distinguish our method from previous work in Section 2.
We describe the details of the context sensitive stemming
approach in Section 3. We then perform extensive
experiments on a major Web search engine to support our claims
in Section 4, followed by discussions in Section 5. Finally,
we conclude the paper in Section 6.
2. RELATED WORK
Stemming is a long studied technology. Many stemmers
have been developed, such as the Lovins stemmer [16] and
the Porter stemmer [18]. The Porter stemmer is widely used
due to its simplicity and effectiveness in many applications.
However, the Porter stemmer makes many mistakes
because its simple rules cannot fully describe English
morphology. Corpus analysis is used to improve Porter stemmer [26]
by creating equivalence classes for words that are
morphologically similar and occur in similar context as measured by
expected mutual information [23]. We use a similar corpus
based approach for stemming by computing the similarity
between two words based on their distributional context
features which can be more than just adjacent words [15], and
then only keep the morphologically similar words as
candidates.
Using stemming in information retrieval is also a well
known technique [8, 10]. However, the effectiveness of
stemming for English query systems was previously reported to
be rather limited. Lennon et al. [17] compared the Lovins
and Porter algorithms and found little improvement in
retrieval performance. Later, Harman [9] compared three
general stemming techniques in text retrieval experiments,
including pluralization handling (called S stemmer in the
paper). They also proposed selective stemming based on query
length and term importance, but no positive results were
reported. On the other hand, Krovetz [14] performed
comparisons over small numbers of documents (from 400 to 12k)
and showed dramatic precision improvement (up to 45%).
However, due to the limited number of tested queries (less
than 100) and the small size of the collection, the results
are hard to generalize to Web search. These mixed results,
mostly failures, led early IR researchers to deem stemming
irrelevant in general for English [4], although recent research
has shown stemming has greater benefits for retrieval in
other languages [2]. We suspect the previous failures were
mainly due to the two problems we mentioned in the
introduction. Blind stemming, or a simple query length based
selective stemming as used in [9] is not enough. Stemming
has to be decided on case by case basis, not only at the query
level but also at the document level. As we will show, if
handled correctly, significant improvement can be achieved.
A more general problem related to stemming is query
reformulation [3, 12] and query expansion which expands
words not only with word variants [7, 22, 24, 25]. To
decide which expanded words to use, people often use
pseudorelevance feedback techniques that send the original query to
a search engine and retrieve the top documents, extract
relevant words from these top documents as additional query
words, and resubmit the expanded query again [21]. This
normally requires sending a query multiple times to search
engine and it is not cost effective for processing the huge
amount of queries involved in Web search. In addition,
query expansion, including query reformulation [3, 12], has
a high risk of changing the user intent (called query drift).
Since the expanded words may have different meanings, adding
them to the query could potentially change the intent of
the original query. Thus query expansion based on
pseudorelevance and query reformulation can provide suggestions
to users for interactive refinement but can hardly be directly
used for Web search. On the other hand, stemming is much
more conservative since most of the time, stemming
preserves the original search intent. While most work on query
expansion focuses on recall enhancement, our work focuses
on increasing both recall and precision. The increase on
recall is obvious. With quality stemming, good documents
which were not selected before stemming will be pushed up
and those low quality documents will be degraded.
On selective query expansion, Cronen-Townsend et al. [6]
proposed a method for selective query expansion based on
comparing the Kullback-Leibler divergence of the results
from the unexpanded query and the results from the
expanded query. This is similar to the relevance feedback in
the sense that it requires multiple passes retrieval. If a word
can be expanded into several words, it requires running this
process multiple times to decide which expanded word is
useful. It is expensive to deploy this in production Web
search engines. Our method predicts the quality of
expansion based on offline information without sending the query
to a search engine.
In summary, we propose a novel approach to attack an old,
yet still important and challenging problem for Web search
- stemming. Our approach is unique in that it performs
predictive stemming on a per query basis without relevance
feedback from the Web, using the context of the variants in
documents to preserve precision. It's simple, yet very
efficient and effective, making real time stemming feasible for
Web search. Our results should convince researchers that
stemming is indeed very important to large scale information
retrieval.
3. CONTEXT SENSITIVE STEMMING
3.1 Overview
Our system has four components as illustrated in
Figure 1: candidate generation, query segmentation and head
word detection, context sensitive query stemming and
context sensitive document matching. Candidate generation
(component 1) is performed offline and generated candidates
are stored in a dictionary. For an input query, we first
segment the query into concepts and detect the head word for each
concept (component 2). We then use statistical language
modeling to decide whether a particular variant is useful
(component 3), and finally for the expanded variants, we
perform context sensitive document matching (component
4). Below we discuss each of the components in more detail.
[Figure 1: System Components. The diagram shows the four components applied to the example query "hotel price comparisons": Component 1, candidate generation (e.g., comparisons -> comparison, hotel -> hotels); Component 2, query segmentation and head word detection (output: "hotel", "comparisons"); Component 3, selective word expansion (decision: comparisons -> comparison); and Component 4, context sensitive document matching.]
3.2 Expansion candidate generation
One of the ways to generate candidates is using the Porter
stemmer [18]. The Porter stemmer simply uses
morphological rules to convert a word to its base form. It has no
knowledge of the semantic meaning of the words and sometimes
makes serious mistakes, such as executive to execution,
news to new, and paste to past. A more
conservative way is based on using corpus analysis to improve the
Porter stemmer results [26]. The corpus analysis we do is
based on word distributional similarity [15]. The rationale
of using distributional word similarity is that true variants
tend to be used in similar contexts. In the distributional
word similarity calculation, each word is represented with a
vector of features derived from the context of the word. We
use the bigrams to the left and right of the word as its
context features, by mining a huge Web corpus. The similarity
between two words is the cosine similarity between the two
corresponding feature vectors. The top 20 words most similar to
develop are shown in Table 1.
rank candidate score rank candidate score
0 develop 1 10 berts 0.119
1 developing 0.339 11 wads 0.116
2 developed 0.176 12 developer 0.107
3 incubator 0.160 13 promoting 0.100
4 develops 0.150 14 developmental 0.091
5 development 0.148 15 reengineering 0.090
6 tutoring 0.138 16 build 0.083
7 analyzing 0.128 17 construct 0.081
8 developement 0.128 18 educational 0.081
9 automation 0.126 19 institute 0.077
Table 1: Top 20 most similar candidates to word
develop. Column score is the similarity score.
To determine the stemming candidates, we apply a few
Porter stemmer [18] morphological rules to the similarity
list. After applying these rules, for the word develop,
the stemming candidates are developing, developed,
develops, development, developement, developer,
developmental. For the pluralization handling purpose, only the
candidate develops is retained.
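To make the candidate generation step concrete, the sketch below shows one way it could be implemented. It is not the authors' code: the left/right bigram context features, the unweighted cosine similarity, and the shared-prefix test standing in for the Porter-style morphological filter are all simplifying assumptions.

```python
import math
from collections import Counter, defaultdict

def context_vectors(corpus_sentences):
    """Represent each word by the bigrams immediately to its left and right."""
    vectors = defaultdict(Counter)
    for sent in corpus_sentences:
        words = sent.split()
        for i, w in enumerate(words):
            left = tuple(words[max(0, i - 2):i])   # left bigram context
            right = tuple(words[i + 1:i + 3])      # right bigram context
            if left:
                vectors[w][("L", left)] += 1
            if right:
                vectors[w][("R", right)] += 1
    return vectors

def cosine(c1, c2):
    """Cosine similarity between two sparse feature vectors (Counters)."""
    dot = sum(c1[f] * c2.get(f, 0) for f in c1)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def stemming_candidates(word, vectors, top_k=20):
    """Rank other words by distributional similarity to `word`, then keep
    only the morphologically related ones.  The shared-prefix test is a
    crude stand-in for the Porter-style rules mentioned in the text."""
    sims = sorted(((cosine(vectors[word], vec), w)
                   for w, vec in vectors.items() if w != word), reverse=True)
    related = [w for _, w in sims[:top_k]]
    prefix = word[:4] if len(word) > 4 else word
    return [w for w in related if w.startswith(prefix)]
```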
One thing we note from observing the distributionally
similar words is that they are closely related semantically.
These words might serve as candidates for general query
expansion, a topic we will investigate in the future.
3.3 Segmentation and headword identification
For long queries, it is quite important to detect the
concepts in the query and the most important words for those
concepts. We first break a query into segments, each
segment representing a concept which normally is a noun phrase.
For each of the noun phrases, we then detect the most
important word which we call the head word. Segmentation
is also used in document sensitive matching (section 3.5) to
enforce proximity.
To break a query into segments, we have to define a
criterion to measure the strength of the relation between words.
One effective method is to use mutual information as an
indicator on whether or not to split two words [19]. We use
a log of 25M queries and collect the bigram and unigram
frequencies from it. For every incoming query, we compute
the mutual information of two adjacent words; if it passes
a predefined threshold, we do not split the query between
those two words and move on to the next word. We continue
this process until the mutual information between two words
falls below the threshold, at which point we create a concept boundary.
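A minimal sketch of this greedy segmentation procedure is given below. The pointwise mutual information formula, the threshold value, and the count data structures are assumptions for illustration; the paper does not spell out these details.

```python
import math

def segment_query(query, unigram_counts, bigram_counts, total, threshold=1.0):
    """Greedy left-to-right segmentation: keep two adjacent words in the same
    segment when their pointwise mutual information passes the threshold,
    otherwise start a new segment at that position."""
    words = query.split()
    segments, current = [], [words[0]]
    for prev, cur in zip(words, words[1:]):
        p_xy = bigram_counts.get((prev, cur), 0) / total
        p_x = unigram_counts.get(prev, 0) / total
        p_y = unigram_counts.get(cur, 0) / total
        if p_xy > 0 and p_x > 0 and p_y > 0:
            mi = math.log(p_xy / (p_x * p_y), 2)
        else:
            mi = float("-inf")
        if mi >= threshold:
            current.append(cur)        # strong association: same concept
        else:
            segments.append(current)   # weak association: concept boundary
            current = [cur]
    segments.append(current)
    return segments

# e.g. "hotel price comparison" -> [["hotel"], ["price", "comparison"]]
# given suitable unigram/bigram counts from the query log.
```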
Table 2 shows some examples of query segmentation.
[running shoe]
[best] [new york] [medical schools]
[pictures] [of] [white house]
[cookies] [in] [san francisco]
[hotel] [price comparison]
Table 2: Query segmentation: a segment is
bracketed.
The ideal way of finding the head word of a concept is to
do syntactic parsing to determine the dependency structure
of the query. Query parsing is more difficult than sentence
parsing since many queries are not grammatical and are very
short. Applying a parser trained on sentences from
documents to queries will have poor performance. In our
solution, we just use simple heuristics rules, and it works very
well in practice for English. For an English noun phrase,
the head word is typically the last nonstop word, unless the
phrase is of a particular pattern, like XYZ of/in/at/from
UVW. In such cases, the head word is typically the last
nonstop word of XYZ.
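The head word heuristic described above can be written down directly; the stop word list in the sketch is a small illustrative set, not the one used in the paper.

```python
STOPWORDS = {"a", "an", "the", "of", "in", "at", "from", "for", "to", "and"}

def head_word(segment):
    """Return the head word of a noun-phrase segment: normally the last
    non-stop word; for the pattern 'XYZ of/in/at/from UVW', the last
    non-stop word of the XYZ part."""
    for marker in ("of", "in", "at", "from"):
        if marker in segment:
            segment = segment[:segment.index(marker)]  # keep only the XYZ part
            break
    non_stop = [w for w in segment if w not in STOPWORDS]
    return non_stop[-1] if non_stop else None

# head_word(["hotel"]) -> "hotel"
# head_word(["price", "comparison"]) -> "comparison"
# head_word(["pictures", "of", "white", "house"]) -> "pictures"
```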
3.4 Context sensitive word expansion
After detecting which words are the most important words
to expand, we have to decide whether the expansions will
be useful.
Our statistics show that about half of the queries can be
transformed by pluralization via naive stemming. Among
this half, about 25% of the queries improve relevance when
transformed, the majority (about 50%) do not change their
top 5 results, and the remaining 25% perform worse. Thus,
it is extremely important to identify which queries should
not be stemmed for the purpose of maximizing relevance
improvement and minimizing stemming cost. In addition,
for a query with multiple words that can be transformed,
or a word with multiple variants, not all of the expansions
are useful. Taking query hotel price comparison as an
example, we decide that hotel and price comparison are two
concepts. Head words hotel and comparison can be
expanded to hotels and comparisons. Are both
transformations useful?
To test whether an expansion is useful, we have to know
whether the expanded query is likely to get more relevant
documents from the Web, which can be quantified by the
probability of the query occurring as a string on the Web.
The more likely a query to occur on the Web, the more
relevant documents this query is able to return. Now the
whole problem becomes how to calculate the probability of
query to occur on the Web.
Calculating the probability of string occurring in a
corpus is a well known language modeling problem. The goal
of language modeling is to predict the probability of
naturally occurring word sequences, s = w1w2...wN ; or more
simply, to put high probability on word sequences that
actually occur (and low probability on word sequences that
never occur). The simplest and most successful approach to
language modeling is still based on the n-gram model. By
the chain rule of probability one can write the probability
of any word sequence as
$$\Pr(w_1 w_2 \ldots w_N) = \prod_{i=1}^{N} \Pr(w_i \mid w_1 \ldots w_{i-1}) \qquad (1)$$
An n-gram model approximates this probability by
assuming that the only words relevant to predicting Pr(wi|w1...wi−1)
are the previous n − 1 words; i.e.
$$\Pr(w_i \mid w_1 \ldots w_{i-1}) = \Pr(w_i \mid w_{i-n+1} \ldots w_{i-1})$$
A straightforward maximum likelihood estimate of n-gram
probabilities from a corpus is given by the observed
frequency of each of the patterns
$$\Pr(w_i \mid w_{i-n+1} \ldots w_{i-1}) = \frac{\#(w_{i-n+1} \ldots w_i)}{\#(w_{i-n+1} \ldots w_{i-1})} \qquad (2)$$
where #(.) denotes the number of occurrences of a specified
gram in the training corpus. Although one could attempt to
use simple n-gram models to capture long range
dependencies in language, attempting to do so directly immediately
creates sparse data problems: Using grams of length up to
n entails estimating the probability of W^n
events, where W
is the size of the word vocabulary. This quickly overwhelms
modern computational and data resources for even modest
choices of n (beyond 3 to 6). Also, because of the heavy
tailed nature of language (i.e., Zipf's law), one is likely to
encounter novel n-grams that were never witnessed during
training in any test corpus, and therefore some mechanism
for assigning non-zero probability to novel n-grams is a
central and unavoidable issue in statistical language modeling.
One standard approach to smoothing probability estimates
to cope with sparse data problems (and to cope with
potentially missing n-grams) is to use some sort of back-off
estimator.
$$\Pr(w_i \mid w_{i-n+1} \ldots w_{i-1}) =
\begin{cases}
\hat{\Pr}(w_i \mid w_{i-n+1} \ldots w_{i-1}), & \text{if } \#(w_{i-n+1} \ldots w_i) > 0 \\
\beta(w_{i-n+1} \ldots w_{i-1}) \times \Pr(w_i \mid w_{i-n+2} \ldots w_{i-1}), & \text{otherwise}
\end{cases} \qquad (3)$$
where
$$\hat{\Pr}(w_i \mid w_{i-n+1} \ldots w_{i-1}) = \mathrm{discount} \cdot \frac{\#(w_{i-n+1} \ldots w_i)}{\#(w_{i-n+1} \ldots w_{i-1})} \qquad (4)$$
is the discounted probability and $\beta(w_{i-n+1} \ldots w_{i-1})$ is a
normalization constant
$$\beta(w_{i-n+1} \ldots w_{i-1}) = \frac{1 - \sum_{x \in (w_{i-n+1} \ldots w_{i-1} x)} \hat{\Pr}(x \mid w_{i-n+1} \ldots w_{i-1})}{1 - \sum_{x \in (w_{i-n+1} \ldots w_{i-1} x)} \hat{\Pr}(x \mid w_{i-n+2} \ldots w_{i-1})} \qquad (5)$$
The discounted probability (4) can be computed with
different smoothing techniques, including absolute smoothing,
Good-Turing smoothing, linear smoothing, and Witten-Bell
smoothing [5]. We used absolute smoothing in our
experiments.
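The sketch below illustrates a back-off bigram model with absolute discounting in the spirit of equations (3)-(5). The discount value, the add-one unigram floor, and the in-memory counters are illustrative assumptions rather than the configuration used in the paper.

```python
from collections import Counter

class BackoffBigramLM:
    """Bigram language model with absolute discounting and back-off to a
    unigram model, in the spirit of equations (3)-(5)."""
    def __init__(self, tokens, discount=0.5):
        self.d = discount
        self.uni = Counter(tokens)
        self.bi = Counter(zip(tokens, tokens[1:]))
        self.total = len(tokens)
        self.vocab = len(self.uni)

    def p_unigram(self, w):
        # add-one smoothed unigram probability (never zero)
        return (self.uni[w] + 1) / (self.total + self.vocab)

    def prob(self, w, prev):
        c_hw, c_h = self.bi[(prev, w)], self.uni[prev]
        if c_hw > 0:
            return (c_hw - self.d) / c_h       # discounted estimate, cf. eq. (4)
        if c_h == 0:
            return self.p_unigram(w)           # unknown history: unigram only
        # back-off weight, cf. eq. (5): mass freed by discounting seen bigrams,
        # renormalized over continuations never seen after `prev`
        seen = [x for (h, x) in self.bi if h == prev]
        reserved = self.d * len(seen) / c_h
        unseen_mass = 1.0 - sum(self.p_unigram(x) for x in seen)
        beta = reserved / unseen_mass if unseen_mass > 0 else 0.0
        return beta * self.p_unigram(w)
```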
Since the likelihood of a string, Pr(w1w2...wN ), is a very
small number and hard to interpret, we use entropy as
defined below to score the string.
$$\mathrm{Entropy} = -\frac{1}{N} \log_2 \Pr(w_1 w_2 \ldots w_N) \qquad (6)$$
Now getting back to the example of the query hotel price
comparison, there are four variants of this query, and the
entropies of these four candidates are shown in Table 3. We
can see that all alternatives are less likely than the input
query. It is therefore not useful to make an expansion for this
query. On the other hand, if the input query is hotel price
comparisons which is the second alternative in the table,
then there is a better alternative than the input query, and
it should therefore be expanded. To tolerate the variations
in probability estimation, we relax the selection criterion to
those query alternatives if their scores are within a certain
distance (10% in our experiments) to the best score.
Query variations Entropy
hotel price comparison 6.177
hotel price comparisons 6.597
hotels price comparison 6.937
hotels price comparisons 7.360
Table 3: Variations of query hotel price
comparison ranked by entropy score, with the original
query in bold face.
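Putting the pieces together, the following sketch scores pluralization variants of the head words with the entropy of equation (6) and keeps those within 10% of the best score. The +s/-s pluralizer, the sentence-start marker, and the `lm.prob(word, prev)` interface (matching the bigram back-off sketch above) are assumptions of this illustration.

```python
import itertools
import math

def entropy(lm, words):
    """Per-word entropy of a word sequence under the language model,
    as in equation (6)."""
    logp, prev = 0.0, "<s>"
    for w in words:
        logp += math.log(max(lm.prob(w, prev), 1e-12), 2)
        prev = w
    return -logp / len(words)

def pluralization_variants(query_words, head_words):
    """Enumerate query variants by toggling each head word between singular
    and plural.  The +s/-s pluralizer is deliberately naive."""
    def forms(w):
        return sorted({w, w[:-1]} if w.endswith("s") else {w, w + "s"})
    options = [forms(w) if w in head_words else [w] for w in query_words]
    return [list(v) for v in itertools.product(*options)]

def select_expansions(lm, query, head_words, slack=0.10):
    """Score every variant by entropy and keep those within `slack` of the
    best (lowest) score, mirroring the 10% criterion in the text."""
    words = query.split()
    scored = [(entropy(lm, v), v) for v in pluralization_variants(words, head_words)]
    best = min(score for score, _ in scored)
    return [v for score, v in scored if score <= best * (1 + slack)]

# e.g. for "hotel price comparisons" with head words {"hotel", "comparisons"},
# only the variants whose entropy is close to the best one are kept.
```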
3.5 Context sensitive document matching
Even after we know which word variants are likely to be
useful, we have to be conservative in document matching
for the expanded variants. For the query hotel price
comparisons, we decided that word comparisons is expanded
to include comparison. However, not every occurrence of
comparison in the document is of interest. A page which
is about comparing customer service can contain all of the
words hotel, price, and comparison. This page is not
a good page for the query.
If we accept matches of every occurrence of comparison,
it will hurt retrieval precision and this is one of the main
reasons why most stemming approaches do not work well
for information retrieval. To address this problem, we have
a proximity constraint that considers the context around
the expanded variant in the document. A variant match
is considered valid only if the variant occurs in the same
context as the original word does. The context is the left or
the right non-stop segments (a context segment cannot be a
single stop word) of the original word. Taking
the same query as an example, the context of comparisons
is price. The expanded word comparison is only valid if
it is in the same context as comparisons, which is after the
word price. Thus, we should only match those occurrences
of comparison in the document if they occur after the word
price. Considering the fact that queries and documents
may not represent the intent in exactly the same way, we
relax this proximity constraint to allow variant occurrences
within a window of some fixed size. If the expanded word
comparison occurs within the context of price within
a window, it is considered valid. The smaller the window
size is, the more restrictive the matching. We use a window
size of 4, which typically captures contexts that include the
containing and adjacent noun phrases.
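A sketch of this window-based proximity check is shown below; tokenization, lower-casing, and the handling of multiple context terms are left out, and the window size of 4 follows the text.

```python
def valid_variant_match(doc_tokens, pos, context_term, window=4):
    """Accept a variant occurrence at position `pos` only if the context
    term of the original query word appears within `window` tokens."""
    lo, hi = max(0, pos - window), pos + window + 1
    return context_term in doc_tokens[lo:hi]

def variant_positions(doc_tokens, variant, context_term, window=4):
    """All positions of `variant` in the document that pass the
    context sensitive proximity check."""
    return [i for i, tok in enumerate(doc_tokens)
            if tok == variant
            and valid_variant_match(doc_tokens, i, context_term, window)]

# For the query "hotel price comparisons" expanded with "comparison",
# only occurrences of "comparison" within 4 tokens of "price" are matched.
```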
4. EXPERIMENTAL EVALUATION
4.1 Evaluation metrics
We will measure both relevance improvement and the
stemming cost required to achieve the relevance.
4.1.1 Relevance measurement
We use a variant of the average Discounted Cumulative
Gain (DCG), a recently popularized scheme to measure search
engine relevance [1, 11]. Given a query and a ranked list of K
documents (K is set to 5 in our experiments), the DCG(K)
score for this query is calculated as follows:
$$DCG(K) = \sum_{k=1}^{K} \frac{g_k}{\log_2(1 + k)} \qquad (7)$$
where gk is the weight for the document at rank k. Higher
degree of relevance corresponds to a higher weight. A page is
graded into one of the five scales: Perfect, Excellent, Good,
Fair, Bad, with corresponding weights. We use dcg to
represent the average DCG(5) over a set of test queries.
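The DCG(5) computation of equation (7) is straightforward; the numeric gains assigned to the five relevance grades below are placeholders, since the paper only states that higher grades receive higher weights.

```python
import math

# Placeholder grade-to-gain mapping (the actual weights are not given).
GAIN = {"Perfect": 10.0, "Excellent": 7.0, "Good": 3.0, "Fair": 0.5, "Bad": 0.0}

def dcg_at_k(grades, k=5):
    """DCG(K) as in equation (7): gains discounted by log2(1 + rank)."""
    return sum(GAIN[g] / math.log2(1 + rank)
               for rank, g in enumerate(grades[:k], start=1))

def average_dcg(grades_per_query, k=5):
    """Average DCG(K) over a set of test queries."""
    return sum(dcg_at_k(g, k) for g in grades_per_query) / len(grades_per_query)
```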
4.1.2 Stemming cost
Another metric is to measure the additional cost incurred
by stemming. Given the same level of relevance
improvement, we prefer a stemming method that has less additional
cost. We measure this by the percentage of queries that are
actually stemmed, over all the queries that could possibly
be stemmed.
4.2 Data preparation
We randomly sample 870 queries from a three month
query log, with 290 from each month. Among all these 870
queries, we remove all misspelled queries since misspelled
queries are not of interest to stemming. We also remove all
one word queries since stemming one word queries without
context has a high risk of changing query intent, especially
for short words. In the end, we have 529 correctly spelled
queries with at least 2 words.
4.3 Naive stemming for Web search
Before explaining the experiments and results in detail,
we"d like to describe the traditional way of using stemming
for Web search, referred as the naive model. This is to treat
every word variant equivalent for all possible words in the
query. The query book store will be transformed into
(book OR books)(store OR stores) when limiting stemming
to pluralization handling only, where OR is an operator that
denotes the equivalence of the left and right arguments.
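The naive model's query rewrite can be sketched as follows; the +s/-s pluralizer is a deliberate simplification, and the textual "(x OR y)" syntax only mirrors the example in the text, not a particular engine's query language.

```python
def naive_pluralization_rewrite(query):
    """Rewrite every word as an OR of its singular and plural forms,
    e.g. 'book store' -> '(book OR books) (store OR stores)'."""
    def forms(w):
        return sorted({w, w[:-1]} if w.endswith("s") else {w, w + "s"})
    return " ".join("(" + " OR ".join(forms(w)) + ")" for w in query.split())

# naive_pluralization_rewrite("book store") == "(book OR books) (store OR stores)"
```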
4.4 Experimental setup
The baseline model is the model without stemming. We
first run the naive model to see how well it performs over
the baseline. Then we improve the naive stemming model
by document sensitive matching, referred to as the document
sensitive matching model. This model makes the same stemming
as the naive model on the query side, but performs
conservative matching on the document side using the strategy
described in section 3.5. The naive model and document
sensitive matching model stem the most queries. Out of the
529 queries, there are 408 queries that they stem,
corresponding to 46.7% query traffic (out of a total of 870). We
then further improve the document sensitive matching model
from the query side with selective word stemming based on
statistical language modeling (section 3.4), referred to as the
selective stemming model. Based on language modeling
prediction, this model stems only a subset of the 408 queries
stemmed by the document sensitive matching model. We
experiment with unigram language model and bigram
language model. Since we only care about how much we can improve
the naive model, we will only use these 408 queries (all the
queries that are affected by the naive stemming model) in
the experiments.
To get a sense of how these models perform, we also have
an oracle model that gives the upper-bound performance a
stemmer can achieve on this data. The oracle model only
expands a word if the stemming will give better results.
To analyze the pluralization handling influence on
different query categories, we divide queries into short queries
and long queries. Among the 408 queries stemmed by the
naive model, there are 272 short queries with 2 or 3 words,
and 136 long queries with at least 4 words.
4.5 Results
We summarize the overall results in Table 4, and present
the results on short queries and long queries separately in
Table 5. Each row in Table 4 is a stemming strategy
described in section 4.4. The first column is the name of the
strategy. The second column is the number of queries
affected by this strategy; this column measures the stemming
cost, and the numbers should be low for the same level of
dcg. The third column is the average dcg score over all
tested queries in this category (including the ones that were
not stemmed by the strategy). The fourth column is the
relative improvement over the baseline, and the last column
is the p-value of Wilcoxon significance test.
There are several observations about the results. We can
see that naive stemming only obtains a statistically
insignificant improvement of 1.5%. Looking at Table 5, it gives an
improvement of 2.7% on short queries. However, it also
hurts long queries by -2.4%. Overall, the improvement is
canceled out. The reason that it improves short queries is
that most short queries only have one word that can be
stemmed. Thus, blindly pluralizing short queries is
relatively safe. However for long queries, most queries can have
multiple words that can be pluralized. Expanding all of
them without selection will significantly hurt precision.
Document context sensitive stemming gives a significant
lift to the performance, from 2.7% to 4.2% for short queries
and from -2.4% to -1.6% for long queries, with an overall
lift from 1.5% to 2.8%. The improvement comes from the
conservative context sensitive document matching. An
expanded word is valid only if it occurs within the context of
original query in the document. This reduces many spurious
matches. However, we still notice that for long queries,
context sensitive stemming is not able to improve performance
because it still selects too many documents and gives the
ranking function a hard problem. While the chosen window
size of 4 works the best amongst all the choices, it still
allows spurious matches. It is possible that the window size
needs to be chosen on a per query basis to ensure tighter
proximity constraints for different types of noun phrases.
Selective word pluralization further helps resolving the
problem faced by document context sensitive stemming. It
does not stem every word that places all the burden on the
ranking algorithm, but tries to eliminate unnecessary
stemming in the first place. By predicting which word variants
are going to be useful, we can dramatically reduce the
number of stemmed words, thus improving both the recall and
the precision. With the unigram language model, we can
reduce the stemming cost by 26.7% (from 408/408 to 300/408)
and lift the overall dcg improvement from 2.8% to 3.4%. In
particular, it gives significant improvements on long queries.
The dcg gain is turned from negative to positive, from −1.6%
to 1.1%. This confirms our hypothesis that reducing
unnecessary word expansion leads to precision improvement. For
short queries too, we observe both dcg improvement and
stemming cost reduction with the unigram language model.
The advantages of predictive word expansion with a
language model are further boosted with a better bigram
language model. The overall dcg gain is lifted from 3.4%
to 3.9%, and stemming cost is dramatically reduced from
408/408 to 250/408, corresponding to only 29% of query
traffic (250 out of 870) and an overall 1.8% dcg
improvement over all query traffic. For short queries, bigram
language model improves the dcg gain from 4.4% to 4.7%,
and reduces stemming cost from 272/272 to 150/272. For
long queries, bigram language model improves dcg gain from
1.1% to 2.5%, and reduces stemming cost from 136/136 to
100/136. We observe that the bigram language model gives
a larger lift for long queries. This is because the uncertainty
in long queries is larger and a more powerful language model
is needed. We hypothesize that a trigram language model
would give a further lift for long queries and leave this for
future investigation.
Considering the tight upper-bound 2
on the improvement
to be gained from pluralization handling (via the oracle
model), the current performance on short queries is very
satisfying. For short queries, the dcg gain upper-bound is 6.3%
for perfect pluralization handling, our current gain is 4.7%
with a bigram language model. For long queries, the dcg
gain upper-bound is 4.6% for perfect pluralization handling,
our current gain is 2.5% with a bigram language model. We
may gain additional benefit with a more powerful language
model for long queries. However, the difficulties of long
queries come from many other aspects including the
proximity and the segmentation problem. These problems have
to be addressed separately. Looking at the upper-bound
of overhead reduction for oracle stemming, 75% (308/408)
of the naive stemmings are wasteful. We currently capture
about half of them. Further reduction of the overhead
requires sacrificing the dcg gain.
Now we can compare the stemming strategies from a
different aspect. Instead of looking at the influence over all
queries as we described above, Table 6 summarizes the dcg
improvements over the affected queries only. We can see
that the number of affected queries decreases as the
stemming strategy becomes more accurate (dcg improvement).
For the bigram language model, over the 250/408 stemmed
queries, the dcg improvement is 6.1%. An interesting
observation is that the average dcg decreases with a better model,
which indicates that a better stemming strategy stems more
difficult queries (low dcg queries).
5. DISCUSSIONS
5.1 Language models from query vs. from Web
As we mentioned in Section 1, we are trying to predict
the probability of a string occurring on the Web. The
language model should describe the occurrence of the string on
the Web. However, the query log is also a good resource.
(Footnote 2: this upper-bound is for pluralization handling
only, not for general stemming. General stemming gives an
8% upper-bound, which is quite substantial in terms of our
metrics.)
Affected Queries dcg dcg Improvement p-value
baseline 0/408 7.102 N/A N/A
naive model 408/408 7.206 1.5% 0.22
document context sensitive model 408/408 7.302 2.8% 0.014
selective model: unigram LM 300/408 7.321 3.4% 0.001
selective model: bigram LM 250/408 7.381 3.9% 0.001
oracle model 100/408 7.519 5.9% 0.001
Table 4: Results comparison of different stemming strategies over all queries affected by naive stemming
Short Query Results
Affected Queries dcg Improvement p-value
baseline 0/272 N/A N/A
naive model 272/272 2.7% 0.48
document context sensitive model 272/272 4.2% 0.002
selective model: unigram LM 185/272 4.4% 0.001
selective model: bigram LM 150/272 4.7% 0.001
oracle model 71/272 6.3% 0.001
Long Query Results
Affected Queries dcg Improvement p-value
baseline 0/136 N/A N/A
naive model 136/136 -2.4% 0.25
document context sensitive model 136/136 -1.6% 0.27
selective model: unigram LM 115/136 1.1% 0.001
selective model: bigram LM 100/136 2.5% 0.001
oracle model 29/136 4.6% 0.001
Table 5: Results comparison of different stemming strategies overall short queries and long queries
Users reformulate a query using many different variants to
get good results.
To test the hypothesis that we can learn reliable
transformation probabilities from the query log, we trained a
language model from the same log of 25M queries as used
to learn segmentation, and used it for prediction. We
observed a slight performance decrease compared to the model
trained on Web frequencies. In particular, the performance
for unigram LM was not affected, but the dcg gain for bigram
LM changed from 4.7% to 4.5% for short queries. Thus, the
query log can serve as a good approximation of the Web
frequencies.
5.2 How linguistics helps
Some linguistic knowledge is useful in stemming. For the
pluralization handling case, pluralization and de-pluralization
are not symmetric. A plural word used in a query indicates
a special intent. For example, the query new york hotels
is looking for a list of hotels in new york, not the specific
new york hotel which might be a hotel located in
California. A simple equivalence of hotel to hotels might boost
a particular page about new york hotel to top rank. To
capture this intent, we have to make sure the document is a
general page about hotels in new york. We do this by
requiring that the plural word hotels appears in the document.
On the other hand, converting a singular word to plural is
safer since a general purpose page normally contains
specific information. We observed a slight overall dcg decrease,
although not statistically significant, for document context
sensitive stemming if we do not consider this asymmetric
property.
5.3 Error analysis
One type of mistake we noticed, rare but
seriously hurting relevance, is a change of search intent after
stemming. Generally speaking, pluralization or
depluralization keeps the original intent. However, the intent could
change in a few cases. For one example of such a query,
job at apple, we pluralize job to jobs. This
stemming makes the original query ambiguous. The query job
OR jobs at apple has two intents. One is the employment
opportunities at apple, and another is a person working at
Apple, Steve Jobs, who is the CEO and co-founder of the
company. Thus, the results after query stemming returns
Steve Jobs as one of the results in top 5. One solution is
performing results set based analysis to check if the intent is
changed. This is similar to relevance feedback and requires
second phase ranking.
A second type of mistake is the entity/concept
recognition problem, which comes in two kinds. One is that the
stemmed word variant now matches part of an entity or
concept. For example, query cookies in san francisco is
pluralized to cookies OR cookie in san francisco. The
results will match cookie jar in san francisco. Although
cookie still means the same thing as cookies, cookie
jar is a different concept. Another kind is when the unstemmed
word matches an entity or concept because of the stemming
of the other words. For example, quote ICE is
pluralized to quote OR quotes ICE. The original intent for this
query is searching for stock quote for ticker ICE. However,
we noticed that among the top results, one of the results
is Food quotes: Ice cream. This is matched because of
Affected Queries old dcg new dcg dcg Improvement
naive model 408/408 7.102 7.206 1.5%
document context sensitive model 408/408 7.102 7.302 2.8%
selective model: unigram LM 300/408 5.904 6.187 4.8%
selective model: bigram LM 250/408 5.551 5.891 6.1%
Table 6: Results comparison over the stemmed queries only: column old/new dcg is the dcg score over the
affected queries before/after applying stemming
the pluralized word quotes. The unchanged word ICE
matches part of the noun phrase ice cream here. To solve
this kind of problem, we have to analyze the documents and
recognize cookie jar and ice cream as concepts instead
of two independent words.
A third type of mistake occurs in long queries. For the
query bar code reader software, two words are pluralized.
code to codes and reader to readers. In fact, bar
code reader in the original query is a strong concept and
the internal words should not be changed. This is the
segmentation and entity and noun phrase detection problem in
queries, which we are actively attacking. For long queries,
we should correctly identify the concepts in the query, and
boost the proximity for the words within a concept.
6. CONCLUSIONS AND FUTURE WORK
We have presented a simple yet elegant way of stemming
for Web search. It improves naive stemming in two aspects:
selective word expansion on the query side and
conservative word occurrence matching on the document side. Using
pluralization handling as an example, experiments on data from a
major Web search engine show that it significantly improves
Web relevance and reduces the stemming cost. It also
significantly improves Web click through rate (details not
reported in the paper).
For the future work, we are investigating the problems
we identified in the error analysis section. These include:
entity and noun phrase matching mistakes, and improved
segmentation.
7. REFERENCES
[1] E. Agichtein, E. Brill, and S. T. Dumais. Improving
Web Search Ranking by Incorporating User Behavior
Information. In SIGIR, 2006.
[2] E. Airio. Word Normalization and Decompounding in
Mono- and Bilingual IR. Information Retrieval,
9:249-271, 2006.
[3] P. Anick. Using Terminological Feedback for Web
Search Refinement: a Log-based Study. In SIGIR,
2003.
[4] R. Baeza-Yates and B. Ribeiro-Neto. Modern
Information Retrieval. ACM Press/Addison Wesley,
1999.
[5] S. Chen and J. Goodman. An Empirical Study of
Smoothing Techniques for Language Modeling.
Technical Report TR-10-98, Harvard University, 1998.
[6] S. Cronen-Townsend, Y. Zhou, and B. Croft. A
Framework for Selective Query Expansion. In CIKM,
2004.
[7] H. Fang and C. Zhai. Semantic Term Matching in
Axiomatic Approaches to Information Retrieval. In
SIGIR, 2006.
[8] W. B. Frakes. Term Conflation for Information
Retrieval. In C. J. Rijsbergen, editor, Research and
Development in Information Retrieval, pages 383-389.
Cambridge University Press, 1984.
[9] D. Harman. How Effective is Suffixing? JASIS,
42(1):7-15, 1991.
[10] D. Hull. Stemming Algorithms - A Case Study for
Detailed Evaluation. JASIS, 47(1):70-84, 1996.
[11] K. Jarvelin and J. Kekalainen. Cumulated Gain-Based
Evaluation of IR Techniques. ACM TOIS,
20:422-446, 2002.
[12] R. Jones, B. Rey, O. Madani, and W. Greiner.
Generating Query Substitutions. In WWW, 2006.
[13] W. Kraaij and R. Pohlmann. Viewing Stemming as
Recall Enhancement. In SIGIR, 1996.
[14] R. Krovetz. Viewing Morphology as an Inference
Process. In SIGIR, 1993.
[15] D. Lin. Automatic Retrieval and Clustering of Similar
Words. In COLING-ACL, 1998.
[16] J. B. Lovins. Development of a Stemming Algorithm.
Mechanical Translation and Computational
Linguistics, II:22-31, 1968.
[17] M. Lennon, D. Peirce, B. Tarry, and P. Willett.
An Evaluation of Some Conflation Algorithms for
Information Retrieval. Journal of Information Science,
3:177-188, 1981.
[18] M. Porter. An Algorithm for Suffix Stripping.
Program, 14(3):130-137, 1980.
[19] K. M. Risvik, T. Mikolajewski, and P. Boros. Query
Segmentation for Web Search. In WWW, 2003.
[20] S. E. Robertson. On Term Selection for Query
Expansion. Journal of Documentation, 46(4):359-364,
1990.
[21] G. Salton and C. Buckley. Improving Retrieval
Performance by Relevance Feedback. JASIS, 41(4):288
- 297, 1999.
[22] R. Sun, C.-H. Ong, and T.-S. Chua. Mining
Dependency Relations for Query Expansion in
Passage Retrieval. In SIGIR, 2006.
[23] C. Van Rijsbergen. Information Retrieval.
Butterworths, second version, 1979.
[24] B. V´elez, R. Weiss, M. A. Sheldon, and D. K. Gifford.
Fast and Effective Query Refinement. In SIGIR, 1997.
[25] J. Xu and B. Croft. Query Expansion using Local and
Global Document Analysis. In SIGIR, 1996.
[26] J. Xu and B. Croft. Corpus-based Stemming using
Cooccurrence of Word Variants. ACM TOIS, 16
(1):61-81, 1998. | stem;language model;bigram language model;head word detection;context sensitive document matching;lovin stemmer;porter stemmer;web search;candidate generation;query segmentation;unigram language model;context sensitive query stemming;stemming |
train_H-54 | Knowledge-intensive Conceptual Retrieval and Passage Extraction of Biomedical Literature | This paper presents a study of incorporating domain-specific knowledge (i.e., information about concepts and relationships between concepts in a certain domain) in an information retrieval (IR) system to improve its effectiveness in retrieving biomedical literature. The effects of different types of domain-specific knowledge in performance contribution are examined. Based on the TREC platform, we show that appropriate use of domain-specific knowledge in a proposed conceptual retrieval model yields about 23% improvement over the best reported result in passage retrieval in the Genomics Track of TREC 2006. | 1. INTRODUCTION
Biologists search for literature on a daily basis. For most
biologists, PubMed, an online service of U.S. National Library of
Medicine (NLM), is the most commonly used tool for searching
the biomedical literature. PubMed allows for keyword search by
using Boolean operators. For example, if one desires documents on
the use of the drug propanolol in the disease hypertension, a
typical PubMed query might be propanolol AND hypertension,
which will return all the documents having the two keywords.
Keyword search in PubMed is effective if the query is well-crafted
by the users using their expertise. However, information needs of
biologists, in some cases, are expressed as complex questions
[8][9], which PubMed is not designed to handle. While NLM does
maintain an experimental tool for free-text queries [6], it is still
based on PubMed keyword search.
The Genomics track of the 2006 Text REtrieval Conference
(TREC) provides a common platform to assess the methods and
techniques proposed by various groups for biomedical information
retrieval. The queries were collected from real biologists and they
are expressed as complex questions, such as How do mutations in
the Huntingtin gene affect Huntington"s disease?. The document
collection contains 162,259 Highwire full-text documents in
HTML format. Systems from participating groups are expected to
find relevant passages within the full-text documents. A passage is
defined as any span of text that does not include the HTML
paragraph tag (i.e., <P> or </P>).
We approached the problem by utilizing domain-specific
knowledge in a conceptual retrieval model. Domain-specific
knowledge, in this paper, refers to information about concepts and
relationships between concepts in a certain domain. We assume
that appropriate use of domain-specific knowledge might improve
the effectiveness of retrieval. For example, given a query What is
the role of gene PRNP in the Mad Cow Disease?, expanding the
gene symbol PRNP with its synonyms Prp, PrPSc, and
prion protein, more relevant documents might be retrieved.
PubMed and many other biomedical systems [8][9][10][13] also
make use of domain-specific knowledge to improve retrieval
effectiveness.
Intuitively, retrieval on the level of concepts should outperform
bag-of-words approaches, since the semantic relationships
among words in a concept are utilized. In some recent studies
[13][15], positive results have been reported for this hypothesis. In
this paper, concepts are entry terms of the ontology Medical
Subject Headings (MeSH), a controlled vocabulary maintained by
NLM for indexing biomedical literature, or gene symbols in the
Entrez gene database also from NLM. A concept could be a word,
such as the gene symbol PRNP, or a phrase, such as Mad cow
diseases. In the conceptual retrieval model presented in this
paper, the similarity between a query and a document is measured
on both concept and word levels.
This paper makes two contributions:
1. We propose a conceptual approach to utilize domain-specific
knowledge in an IR system to improve its effectiveness in
retrieving biomedical literature. Based on this approach, our
system achieved significant improvement (23%) over the best
reported result in passage retrieval in the Genomics track of
TREC 2006.
2. We examine the effects of utilizing concepts and of different
types of domain-specific knowledge in performance
contribution.
This paper is organized as follows: problem statement is given in
the next section. The techniques are introduced in section 3. In
section 4, we present the experimental results. Related works are
given in section 5 and finally, we conclude the paper in section 6.
2. PROBLEM STATEMENT
We describe the queries, document collection and the system
output in this section.
The query set used in the Genomics track of TREC 2006 consists
of 28 questions collected from real biologists. As described in [8],
these questions all have the following general format:
Biological object (1..m) ←⎯⎯ Relationship ⎯⎯→ Biological process (1..n)    (1)
where a biological object might be a gene, protein, or gene
mutation and a biological process can be a physiological process
or disease. A question might involve multiple biological objects
(m) and multiple biological processes (n). These questions were
derived from four templates (Table 2).
Table 2 Query templates and examples in the Genomics track
of TREC 2006
Template: What is the role of gene in disease?
    Example: What is the role of DRD4 in alcoholism?
Template: What effect does gene have on biological process?
    Example: What effect does the insulin receptor gene have on tumorigenesis?
Template: How do genes interact in organ function?
    Example: How do HMG and HMGB1 interact in hepatitis?
Template: How does a mutation in gene influence biological process?
    Example: How does a mutation in Ret influence thyroid function?
Features of the queries: 1) They are different from the typical
Web queries and the PubMed queries, both of which usually
consist of 1 to 3 keywords; 2) They are generated from structural
templates which can be used by a system to identify the query
components, the biological object or process.
The document collection contains 162,259 Highwire full-text
documents in HTML format.
The output of the system is a list of passages ranked according to
their similarities with the query. A passage is defined as any span
of text that does not include the HTML paragraph tag (i.e., <P> or
</P>). A passage could be a part of a sentence, a sentence, a set of
consecutive sentences or a paragraph (i.e., the whole span of text
that are inside of <P> and </P> HTML tags).
This is a passage-level information retrieval problem that
attempts to put biologists in contexts where relevant information is
provided.
3. TECHNIQUES AND METHODS
We approached the problem by first retrieving the top-k most
relevant paragraphs, then extracting passages from these
paragraphs, and finally ranking the passages. In this process, we
employed several techniques and methods, which will be
introduced in this section. First, we give two definitions:
Definition 3.1 A concept is 1) an entry term in the MeSH
ontology, or 2) a gene symbol in the Entrez gene database. This
definition of concept can be generalized to include other
biomedical dictionary terms.
Definition 3.2 A semantic type is a category defined in the
Semantic Network of the Unified Medical Language System
(UMLS) [14]. The current release of the UMLS Semantic Network
contains 135 semantic types such as Disease or Syndrome. Each
entry term in the MeSH ontology is assigned one or more semantic
types. Each gene symbol in the Entrez gene database maps to the
semantic type Gene or Genome. In addition, these semantic
types are linked by 54 relationships. For example, Antibiotic
prevents Disease or Syndrome. These relationships among
semantic types represent general biomedical knowledge. We
utilized these semantic types and their relationships to identify
related concepts.
The rest of this section is organized as follows: in section 3.1, we
explain how the concepts are identified within a query. In section
3.2, we specify five different types of domain-specific knowledge
and introduce how they are compiled. In section 3.3, we present
our conceptual IR model. Finally, our strategy for passage
extraction is described in section 3.4.
3.1 Identifying concepts within a query
A concept, defined in Definition 3.1, is a gene symbol or a MeSH
term. We make use of the query templates to identify gene
symbols. For example, the query How do HMG and HMGB1
interact in hepatitis? is derived from the template How do genes
interact in organ function?. In this case, HMG and HMGB1
will be identified as gene symbols. In cases where the query
templates are not provided, programs for recognition of gene
symbols within texts are needed.
We use the query translation functionality of PubMed to extract
MeSH terms in a query. This is done by submitting the whole
query to PubMed, which will then return a file in which the MeSH
terms in the query are labeled. In Table 3.1, three MeSH terms
within the query What is the role of gene PRNP in the Mad cow
disease? are found in the PubMed translation: "encephalopathy,
bovine spongiform" for Mad cow disease, genes for gene,
and role for role.
Table 3.1 The PubMed translation of the query "What is the
role of gene PRNP in the Mad cow disease?".
Term PubMed translation
Mad cow
disease
"bovine spongiform encephalopathy"[Text Word]
OR "encephalopathy, bovine spongiform"[MeSH
Terms] OR Mad cow disease[Text Word]
gene
("genes"[TIAB] NOT Medline[SB]) OR
"genes"[MeSH Terms] OR gene[Text Word]
role "role"[MeSH Terms] OR role[Text Word]
3.2 Compiling domain-specific knowledge
In this paper, domain-specific knowledge refers to information
about concepts and their relationships in a certain domain. We
used five types of domain-specific knowledge in the domain of
genomics:
Type 1. Synonyms (terms listed in the thesauruses that refer to
the same meaning)
Type 2. Hypernyms (more generic terms, one level only)
Type 3. Hyponyms (more specific terms, one level only)
Type 4. Lexical variants (different forms of the same concept,
such as abbreviations. They are commonly used in the
literature, but might not be listed in the thesauruses)
Type 5. Implicitly related concepts (terms that are semantically related to a query concept and that co-occur with it in the biomedical texts more frequently than would be expected if they were independent)
Knowledge of type 1-3 is retrieved from the following two
thesauruses: 1) MeSH, a controlled vocabulary maintained by
NLM for indexing biomedical literature. The 2007 version of
MeSH contains information about 190,000 concepts. These
concepts are organized in a tree hierarchy; 2) Entrez Gene, one of
the most widely used searchable databases of genes. The current
version of Entrez Gene contains information about 1.7 million
genes. It does not have a hierarchy. Only synonyms are retrieved
from Entrez Gene. The compilation of type 4 and type 5 knowledge is described in sections 3.2.1 and 3.2.2, respectively.
3.2.1 Lexical variants
Lexical variants of gene symbols
New gene symbols and their lexical variants are regularly
introduced into the biomedical literature [7]. However, many reference databases, such as UMLS and Entrez Gene, may not keep track of all such variants. For example, for the
gene symbol "NF-kappa B", at least 5 different lexical variants can
be found in the biomedical literature: "NF-kappaB", "NFkappaB",
"NFkappa B", "NF-kB", and "NFkB", three of which are not in the
current UMLS and two not in the Entrez Gene. [3][21] have shown
that expanding gene symbols with their lexical variants improved
the retrieval effectiveness of their biomedical IR systems. In our
system, we employed the following two strategies to retrieve
lexical variants of gene symbols.
Strategy I: This strategy is to automatically generate lexical
variants according to a set of manually crafted heuristics [3][21].
For example, given a gene symbol PLA2, a variant PLAII is
generated according to the heuristic that Roman numerals and
Arabic numerals are convertible when naming gene symbols.
Another variant, PLA 2, is also generated since a hyphen or a
space could be inserted at the transition between alphabetic and
numerical characters in a gene symbol.
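To illustrate Strategy I, here is a minimal Python sketch of two such heuristics; the helper and the small Roman-numeral table are our own illustrative simplification, not the full heuristic set of [3][21].

import re

ROMAN = {1: "I", 2: "II", 3: "III", 4: "IV", 5: "V"}  # small illustrative table

def generate_variants(symbol):
    """Generate a few lexical variants of a gene symbol using two simple heuristics."""
    out = set()
    # Heuristic 1: a trailing Arabic numeral and its Roman form are interchangeable, e.g. PLA2 -> PLAII.
    m = re.match(r"^([A-Za-z]+)(\d+)$", symbol)
    if m and int(m.group(2)) in ROMAN:
        out.add(m.group(1) + ROMAN[int(m.group(2))])
    # Heuristic 2: a hyphen or a space may be inserted at a letter-digit transition, e.g. PLA2 -> PLA 2, PLA-2.
    for sep in (" ", "-"):
        out.add(re.sub(r"([A-Za-z])(\d)", r"\1" + sep + r"\2", symbol))
    out.discard(symbol)
    return out

print(generate_variants("PLA2"))  # {'PLAII', 'PLA 2', 'PLA-2'}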
Strategy II: This strategy is to retrieve lexical variants from an
abbreviation database. ADAM [22] is an abbreviation database
which covers frequently used abbreviations and their definitions
(or long-forms) within MEDLINE, the authoritative repository of
citations from the biomedical literature maintained by the NLM.
Given a query How does nucleoside diphosphate kinase (NM23)
contribute to tumor progression?, we first identify the
abbreviation NM23 and its long-form nucleoside diphosphate
kinase using the abbreviation identification program from [4].
Searching the long-form nucleoside diphosphate kinase in
ADAM, other abbreviations, such as NDPK or NDK, are
retrieved. These abbreviations are considered as the lexical
variants of NM23.
Lexical variants of MeSH concepts
ADAM is used to obtain the lexical variants of MeSH concepts as
well. All the abbreviations of a MeSH concept in ADAM are
considered as lexical variants to each other. In addition, those
long-forms that share the same abbreviation with the MeSH
concept and are different by an edit distance of 1 or 2 are also
considered as its lexical variants. As an example, "human
papilloma viruses" and "human papillomaviruses" have the same
abbreviation HPV in ADAM and their edit distance is 1. Thus
they are considered as lexical variants to each other. The edit
distance between two strings is measured by the minimum number
of insertions, deletions, and substitutions of a single character
required to transform one string into the other [12].
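A minimal sketch of this variant test, assuming a standard dynamic-programming implementation of the Levenshtein distance (the function names are illustrative):

def edit_distance(a, b):
    """Minimum number of single-character insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[len(b)]

def same_mesh_variant(long_form_1, long_form_2, share_abbreviation):
    """Long-forms sharing an abbreviation in ADAM and within edit distance 2 are treated as variants."""
    return share_abbreviation and edit_distance(long_form_1, long_form_2) <= 2

print(edit_distance("human papilloma viruses", "human papillomaviruses"))             # 1
print(same_mesh_variant("human papilloma viruses", "human papillomaviruses", True))   # True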
3.2.2 Implicitly related concepts
Motivation: In some cases, there are few documents in the
literature that directly answer a given query. In this situation, those
documents that implicitly answer their questions or provide
supporting information would be very helpful. For example, there
are few documents in PubMed that directly answer the query
"What is the role of the genes HNF4 and COUP-tf I in the
suppression in the function of the liver?". However, there exist
some documents about the role of "HNF4" and "COUP-tf I" in
regulating "hepatitis B virus" transcription. It is very likely that the
biologists would be interested in these documents because
"hepatitis B virus" is known as a virus that could cause serious
damage to the function of liver. In the given example, "hepatitis B
virus" is not a synonym, hypernym, hyponym, nor a lexical variant
of any of the query concepts, but it is semantically related to the
query concepts according to the UMLS Semantic Network. We
call this type of concepts implicitly related concepts of the
query. This notion is similar to the B-term used in [19] for
relating two disjoint literatures for biomedical hypothesis
generation. The difference is that we utilize the semantic
relationships among query concepts to exclusively focus on
concepts of certain semantic types.
A query q in format (1) of section 2 can be represented by
q = (A, C)
where A is the set of biological objects and C is the set of
biological processes. Those concepts that are semantically related
to both A and C according to the UMLS Semantic Network are
considered as the implicitly related concepts of the query. In the
above example, A = {HNF4, COUP-tf I}, C = {function of
liver}, and "hepatitis B virus" is one of the implicitly related
concepts.
We make use of the MEDLINE database to extract the implicitly
related concepts. The 2006 version of MEDLINE database
contains citations (i.e., titles, abstracts, etc.) of over 15 million
biomedical articles. Each document in MEDLINE is manually
indexed by a list of MeSH terms to describe the topics covered by
that document. Implicitly related concepts are extracted and
ranked in the following steps:
Step 1. Let list_A be the set of MeSH terms that are 1) used for
indexing those MEDLINE citations having A, and 2) semantically
related to A according to the UMLS Semantic Network. Similarly,
list_C is created for C. Concepts in B = list_A ∩ list_C are
considered as implicitly related concepts of the query.
Step 2. For each concept b∈B, compute the association between
b and A using the mutual information measure [5]:
I(b, A) = log [ P(b, A) / ( P(b) P(A) ) ]
where P(x) = n/N, n is the number of MEDLINE citations having x
and N is the size of MEDLINE. A large value for I(b, A) means
that b and A co-occur much more often than being independent.
I(b, C) is computed similarly.
Step 3. Let r(b) = (I(b, A), I(b, C)), for b∈ B. Given b1, b2 ∈ B,
we say r(b1) ≤ r(b2) if I(b1, A) ≤ I(b2, A) and I(b1, C) ≤ I(b2, C).
Then the association between b and the query q is measured by:
score(b, q) = |{ x : x ∈ B and r(x) ≤ r(b) }| / |{ x : x ∈ B and r(b) ≤ r(x) }|    (2)
The numerator in Formula 2 is the number of the concepts in B
that are associated with both A and C equally with or less than b.
The denominator is the number of the concepts in B that are
associated with both A and C equally with or more than b. Figure
3.2.2 shows the top 4 implicitly related concepts for the sample
query.
Figure 3.2.2 Top 4 implicitly related concepts for the query
"How do interactions between HNF4 and COUP-TF1 suppress
liver function?".
In Figure 3.2.2, the top 4 implicitly related concepts are all highly
associated with liver: Hepatocytes are liver cells;
Hepatoblastoma is a malignant liver neoplasm occurring in
young children; the vast majority of Gluconeogenesis takes
place in the liver; and Hepatitis B virus is a virus that could
cause serious damage to the function of liver.
The top-k ranked concepts in B are used for query expansion: if I(b, A) ≥ I(b, C), then b is considered an implicitly related concept of A, and a document having b but not A will receive a partial weight of A. The expansion is done in the same way for C when I(b, A) < I(b, C).
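To make Steps 2 and 3 concrete, the sketch below ranks a toy candidate set B; the counts and association values are hypothetical numbers standing in for statistics obtained from MEDLINE.

import math

N_MEDLINE = 15_000_000  # rough size of MEDLINE, as noted above

def mutual_information(n_joint, n_x, n_y, n_total=N_MEDLINE):
    """Step 2: I(x, y) = log P(x, y) / (P(x) P(y)) with P(z) = n_z / N."""
    return math.log((n_joint / n_total) / ((n_x / n_total) * (n_y / n_total)))

def rank_implicitly_related(B, i_a, i_c):
    """Step 3: score(b, q) = |{x : r(x) <= r(b)}| / |{x : r(b) <= r(x)}| (Formula 2)."""
    def leq(x, y):  # component-wise r(x) <= r(y)
        return i_a[x] <= i_a[y] and i_c[x] <= i_c[y]
    score = {b: sum(leq(x, b) for x in B) / sum(leq(b, x) for x in B) for b in B}
    return sorted(B, key=score.get, reverse=True)

print(round(mutual_information(1200, 50_000, 80_000), 3))  # hypothetical co-occurrence counts
i_a = {"Hepatocytes": 3.1, "Gluconeogenesis": 2.2, "Hepatitis B virus": 2.9}  # hypothetical I(b, A)
i_c = {"Hepatocytes": 4.0, "Gluconeogenesis": 3.5, "Hepatitis B virus": 3.8}  # hypothetical I(b, C)
print(rank_implicitly_related(list(i_a), i_a, i_c))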
3.3 Conceptual IR model
We now discuss our conceptual IR model. We first give the basic
conceptual IR model in section 3.3.1. Then we explain how the
domain-specific knowledge is incorporated in the model using
query expansion in section 3.3.2. A pseudo-feedback strategy is
introduced in section 3.3.3. In section 3.3.4, we give a strategy to
improve the ranking by avoiding incorrect matches of abbreviations.
3.3.1 Basic model
Given a query q and a document d, our model measures two
similarities, concept similarity and word similarity:
sim(q, d) = sim_concept(q, d) × sim_word(q, d)
Concept similarity
Two vectors are derived from a query q:
q = (v1, v2)
v1 = (c11, c12, ..., c1m)
v2 = (c21, c22, ..., c2n)
where v1 is a vector of concepts describing the biological object(s)
and v2 is a vector of concepts describing the biological process(es).
Given a vector of concepts v, let s(v) be the set of concepts in v.
The weight of vi is then measured by:
w(vi) = max{ log(N / nv) : s(v) ⊆ s(vi) and nv > 0 }
where v is a vector that contains a subset of the concepts in vi and nv is the number of documents having all the concepts in v.
The concept similarity between q and d is then computed by
sim_concept(q, d) = Σ_{i=1}^{2} αi × w(vi)
where αi is a parameter that indicates the completeness with which document d covers vi. αi is measured by:
αi = ( Σ_{c ∈ vi, c ∈ d} idf_c ) / ( Σ_{c ∈ vi} idf_c )    (3)
where idfc is the inverse document frequency of concept c.
An example: suppose we have a query How does Nurr-77 delete
T cells before they migrate to the spleen or lymph nodes and how
does this impact autoimmunity?. After identifying the concepts in
the query, we have:
v1 = ('Nurr-77')
v2 = ('T cells', 'spleen', 'autoimmunity', 'lymph nodes')
Suppose that some document frequencies of different combinations of concepts are as follows:
df('Nurr-77') = 25
df('T cells', 'spleen', 'autoimmunity', 'lymph nodes') = 0
df('T cells', 'spleen', 'autoimmunity') = 326
df('spleen', 'autoimmunity', 'lymph nodes') = 82
df('T cells', 'autoimmunity', 'lymph nodes') = 147
df('T cells', 'spleen', 'lymph nodes') = 2332
The weight of vi is then computed as follows (note that no document contains all the concepts in v2):
w(v1) = log(N / 25)
w(v2) = log(N / 82)
Now suppose a document d contains the concepts 'Nurr-77', 'T cells', 'spleen', and 'lymph nodes', but not 'autoimmunity'. The values of the parameters αi are then computed as follows:
α1 = 1
α2 = ( idf('T cells') + idf('spleen') + idf('lymph nodes') ) / ( idf('T cells') + idf('spleen') + idf('lymph nodes') + idf('autoimmunity') )
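A small Python sketch of w(v_i) and α_i on the worked example above; the document frequencies and idf values are the hypothetical numbers of the example, and N is the size of the TREC 2006 Genomics collection used later in section 4.

import math

N = 162_259  # number of full-text documents in the collection (section 4.1)

df = {  # hypothetical document frequencies of concept combinations, as in the example
    frozenset({"Nurr-77"}): 25,
    frozenset({"T cells", "spleen", "autoimmunity", "lymph nodes"}): 0,
    frozenset({"T cells", "spleen", "autoimmunity"}): 326,
    frozenset({"spleen", "autoimmunity", "lymph nodes"}): 82,
    frozenset({"T cells", "autoimmunity", "lymph nodes"}): 147,
    frozenset({"T cells", "spleen", "lymph nodes"}): 2332,
}

def weight(v):
    """w(v) = max{ log(N / n_v') : s(v') subset of s(v) and n_v' > 0 }, over the known combinations."""
    return max(math.log(N / n) for s, n in df.items() if s <= frozenset(v) and n > 0)

def alpha(v, doc_concepts, idf):
    """Completeness of v that the document covers (Formula 3)."""
    return sum(idf[c] for c in v if c in doc_concepts) / sum(idf[c] for c in v)

v1 = ["Nurr-77"]
v2 = ["T cells", "spleen", "autoimmunity", "lymph nodes"]
print(weight(v1), weight(v2))  # log(N/25), log(N/82)

idf = {"T cells": 2.1, "spleen": 3.4, "autoimmunity": 3.0, "lymph nodes": 2.8}  # hypothetical idf values
print(alpha(v2, {"Nurr-77", "T cells", "spleen", "lymph nodes"}, idf))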
Word similarity
The similarity between q and d on the word level is computed
using Okapi [17]:
sim_word(q, d) = Σ_{w ∈ q} log( (N − n + 0.5) / (n + 0.5) ) × ( (k1 + 1) × tf ) / ( K + tf )    (4)
where N is the size of the document collection; n is the number of documents containing w; K = k1 × ((1 − b) + b × dl/avdl); and k1 = 1.2,
b=0.75 are constants. dl is the document length of d and avdl is the
average document length; tf is the term frequency of w within d.
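A sketch of Formula 4 with the constants given above; the corpus statistics (document frequencies, document lengths) are assumed to be precomputed.

import math

def sim_word(query_terms, doc_tf, doc_len, avg_doc_len, df, N, k1=1.2, b=0.75):
    """Okapi-style word similarity (Formula 4)."""
    K = k1 * ((1 - b) + b * doc_len / avg_doc_len)
    score = 0.0
    for w in query_terms:
        tf, n = doc_tf.get(w, 0), df.get(w, 0)
        if tf > 0 and n > 0:
            score += math.log((N - n + 0.5) / (n + 0.5)) * ((k1 + 1) * tf) / (K + tf)
    return score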
The model
Given two documents d1 and d2 and the same query q, we say sim(q, d1) > sim(q, d2), i.e., d1 is ranked higher than d2, if either
1) sim_concept(q, d1) > sim_concept(q, d2), or
2) sim_concept(q, d1) = sim_concept(q, d2) and sim_word(q, d1) > sim_word(q, d2).
This conceptual IR model emphasizes the similarity on the concept level. A similar model, applied to a non-biomedical domain, has been given in [15].
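Because Python compares tuples lexicographically, this two-level ranking can be sketched as a single sort on the pair (concept similarity, word similarity):

def rank(documents, query, sim_concept, sim_word):
    """Rank by concept similarity first; word similarity only breaks ties."""
    return sorted(documents,
                  key=lambda d: (sim_concept(query, d), sim_word(query, d)),
                  reverse=True)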
3.3.2 Incorporating domain-specific knowledge
Given a concept c, a vector u is derived by incorporating its
domain-specific knowledge:
u = (c, u1, u2, u3)
where u1 is a vector of its synonyms, hyponyms, and lexical variants; u2 is a vector of its hypernyms; and u3 is a vector of its implicitly related concepts. An occurrence of any term in u1 is counted as an occurrence of c. idf_c in Formula 3 is updated as:
idf_c = log( N / |D_{c,u1}| )
where D_{c,u1} is the set of documents having c or any term in u1. The weight that a document d receives from u is given by:
max{ w_t : t ∈ u and t ∈ d }
where w_t = β × idf_c. The weighting factor β is an empirical tuning parameter determined as:
1. β = 1 if t is the original concept, its synonym, its hyponym, or
its lexical variant;
2. β = 0.95 if t is a hypernym;
3. β = 0.90× (k-i+1)/k if t is an implicitly related concept. k is
the number of selected top ranked implicitly related concepts
(see section 3.2.2); i is the position of t in the ranking of
implicitly related concepts.
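A sketch of the weighting factor β and of the weight a document receives from an expansion vector u; the expansion entries in the usage example are hypothetical.

def beta(term_type, i=None, k=None):
    """Empirical weighting factor for an expansion term of the given type."""
    if term_type in ("original", "synonym", "hyponym", "lexical_variant"):
        return 1.0
    if term_type == "hypernym":
        return 0.95
    if term_type == "implicitly_related":       # i: rank position, k: number of selected concepts
        return 0.90 * (k - i + 1) / k
    raise ValueError(term_type)

def weight_from_expansion(doc_terms, expansion, idf_c):
    """max{ w_t : t in u and t in d } with w_t = beta * idf_c."""
    weights = [beta(t_type, i, k) * idf_c
               for term, t_type, i, k in expansion if term in doc_terms]
    return max(weights, default=0.0)

expansion = [("BSE", "lexical_variant", None, None),        # hypothetical expansion of one query concept
             ("prion diseases", "hypernym", None, None),
             ("Prions", "implicitly_related", 1, 4)]
print(weight_from_expansion({"prion diseases", "Prions"}, expansion, idf_c=3.2))  # 0.95 * 3.2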
3.3.3 Pseudo-feedback
Pseudo-feedback is a technique commonly used to improve
retrieval performance by adding new terms into the original query.
We used a modified pseudo-feedback strategy described in [2].
Step 1. Let C be the set of concepts in the top 15 ranked
documents. For each concept c in C, compute the similarity
between c and the query q, the computation of sim(q,c) can be
found in [2].
Step 2. The top-k ranked concepts by sim(q,c) are selected.
Step 3. Associate each selected concept c' with the concept cq in
q that 1) has the same semantic type as c', and 2) is most related to
c' among all the concepts in q. The association between c' and cq
is computed by:
I(c', cq) = log [ P(c', cq) / ( P(c') P(cq) ) ]
where P(x) = n/N, n is the number of documents having x, and N is the size of the document collection. A document having c' but not cq receives a weight given by (0.5 × (k − i + 1)/k) × idf_{cq}, where i is the position of c' in the ranking of step 2.
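A minimal sketch of the association measure and of the partial weight used in this step; the counts are assumed to come from the document collection.

import math

def association(n_joint, n_c_prime, n_cq, N):
    """I(c', c_q) = log P(c', c_q) / (P(c') P(c_q)), with P(x) = n_x / N."""
    return math.log((n_joint / N) / ((n_c_prime / N) * (n_cq / N)))

def feedback_weight(i, k, idf_cq):
    """Partial weight for a document containing c' but not c_q (i: rank of c' in Step 2)."""
    return 0.5 * (k - i + 1) / k * idf_cq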
3.3.4 Avoid incorrect match of abbreviations
Some gene symbols are very short and thus ambiguous. For
example, the gene symbol APC could be the abbreviation for
many non-gene long-forms, such as air pollution control,
aerobic plate count, or argon plasma coagulation. This step avoids incorrect matches of abbreviations in the top-ranked documents.
Given an abbreviation X with the long-form L in the query, we scan the top-k ranked (k = 1000) documents. When a document containing X is found, we compare L with all the long-forms of X in that document. If none of these long-forms is equal or close to L (i.e., within an edit distance of 1 or 2 of L), then the concept-similarity contribution of X is subtracted from that document's score.
3.4 Passage extraction
The goal of passage extraction is to highlight the most relevant
fragments of text in paragraphs. A passage is defined as any span
of text that does not include the HTML paragraph tag (i.e., <P> or
</P>). A passage could be a part of a sentence, a sentence, a set of
consecutive sentences or a paragraph (i.e., the whole span of text
that are inside of <P> and </P> HTML tags). It is also possible to
have more than one relevant passage in a single paragraph. Our
strategy for passage extraction assumes that the optimal passage(s)
in a paragraph should have all the query concepts that the whole
paragraph has. Also they should have higher density of query
concepts than other fragments of text in the paragraph.
Suppose we have a query q and a paragraph p represented by a sequence of sentences p = s_1 s_2 ... s_n. Let C be the set of concepts in q that occur in p, and let S = ∅.
Step 1. For each sequence of consecutive sentences s_i s_{i+1} ... s_j, 1 ≤ i ≤ j ≤ n, let S = S ∪ {s_i s_{i+1} ... s_j} if s_i s_{i+1} ... s_j satisfies:
1) every query concept in C occurs in s_i s_{i+1} ... s_j, and
2) there does not exist k such that i < k < j and every query concept in C occurs in s_i s_{i+1} ... s_k or in s_{k+1} s_{k+2} ... s_j.
Condition 1 requires s_i s_{i+1} ... s_j to contain all the query concepts in p, and condition 2 requires s_i s_{i+1} ... s_j to be minimal.
Step 2. Let L = min{ j − i + 1 : s_i s_{i+1} ... s_j ∈ S }. For every s_i s_{i+1} ... s_j in S, let S = S − {s_i s_{i+1} ... s_j} if (j − i + 1) > L. This step removes those sequences of sentences in S that have a lower density of query concepts.
Step 3. For every two sequences of consecutive sentences s_{i1} s_{i1+1} ... s_{j1} ∈ S and s_{i2} s_{i2+1} ... s_{j2} ∈ S, if
i1 ≤ i2, j1 ≤ j2, and i2 ≤ j1 + 1    (5)
then do
S = S ∪ {s_{i1} s_{i1+1} ... s_{j2}}
S = S − {s_{i1} s_{i1+1} ... s_{j1}}
S = S − {s_{i2} s_{i2+1} ... s_{j2}}
Repeat this step until condition (5) does not apply to any two sequences of consecutive sentences in S. This step merges those sequences of sentences in S that are adjacent or overlapping.
Finally, the remaining sequences of sentences in S are returned as the optimal passages in the paragraph p with respect to the query.
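The three steps can be sketched as follows. Sentences are represented by the sets of query concepts they contain, and indices are 0-based here, unlike the 1-based notation above.

def extract_passages(sentences, query_concepts):
    """Return (start, end) sentence index pairs (inclusive) of the optimal passages."""
    C = set(query_concepts) & set().union(*sentences)
    if not C:
        return []                      # no query concept occurs in this paragraph
    n = len(sentences)

    def covers(i, j):                  # does s_i ... s_j contain every concept in C?
        return C <= set().union(*sentences[i:j + 1])

    # Step 1: windows containing all of C that cannot be split into a covering prefix or suffix.
    S = [(i, j) for i in range(n) for j in range(i, n)
         if covers(i, j) and not any(covers(i, k) or covers(k + 1, j) for k in range(i + 1, j))]

    # Step 2: keep only the shortest such windows (highest concept density).
    L = min(j - i + 1 for i, j in S)
    S = [(i, j) for (i, j) in S if j - i + 1 == L]

    # Step 3: merge windows that overlap or are adjacent.
    merged = []
    for i, j in sorted(S):
        if merged and i <= merged[-1][1] + 1:
            merged[-1][1] = max(merged[-1][1], j)
        else:
            merged.append([i, j])
    return [tuple(m) for m in merged]

print(extract_passages([{"a"}, set(), {"b"}, {"a", "b"}], {"a", "b"}))  # [(3, 3)]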
4. EXPERIMENTAL RESULTS
The evaluation of our techniques and the experimental results are
given in this section. We first describe the datasets and evaluation
metrics used in our experiments and then present the results.
4.1 Data sets and evaluation metrics
Our experiments were performed on the platform of the Genomics
track of TREC 2006. The document collection contains 162,259
full-text documents from 49 Highwire biomedical journals. The set
of queries consists of 28 queries collected from real biologists.
The performance is measured on three different levels (passage,
aspect, and document) to provide better insight on how the
question is answered from different perspectives. Passage MAP:
As described in [8], this is a character-based precision calculated
as follows: At each relevant retrieved passage, precision will be
computed as the fraction of characters overlapping with the gold
standard passages divided by the total number of characters
included in all nominated passages from this system for the topic
up until that point. Similar to regular MAP, relevant passages that
were not retrieved will be added into the calculation as well, with
precision set to 0 for relevant passages not retrieved. Then the
mean of these average precisions over all topics will be calculated
to compute the mean average passage precision. Aspect MAP: A
question could be addressed from different aspects. For example,
the question what is the role of gene PRNP in the Mad cow
disease? could be answered from aspects like Diagnosis,
Neurologic manifestations, or Prions/Genetics. This measure
indicates how comprehensive the question is answered. Document
MAP: This is the standard IR measure. The precision is measured
at every point where a relevant document is obtained and then
averaged over all relevant documents to obtain the average
precision for a given query. For a set of queries, the mean of the
average precision for all queries is the MAP of that IR system.
The output of the system is a list of passages ranked according to
their similarities with the query. The performances on the three
levels are then calculated based on the ranking of the passages.
4.2 Results
The Wilcoxon signed-rank test was employed to determine the
statistical significance of the results. In the tables of the following
sections, statistically significant improvements (at the 5% level)
are marked with an asterisk.
4.2.1 Conceptual IR model vs. term-based model
The initial baseline was established using word similarity only
computed by the Okapi (Formula 4). Another run based on our
basic conceptual IR model was performed without using query
expansion, pseudo-feedback, or abbreviation correction. The
experimental result is shown in Table 4.2.1. Our basic conceptual
IR model significantly outperforms the Okapi on all three levels,
which suggests that, although it requires additional efforts to
identify concepts, retrieval on the concept level can achieve substantial improvements over a purely term-based retrieval model.
4.2.2 Contribution of different types of knowledge
A series of experiments were performed to examine how each type
of domain-specific knowledge contributes to the retrieval
performance. A new baseline was established using the basic
conceptual IR model without incorporating any type of
domain-specific knowledge. Then five runs were conducted by adding
each individual type of domain-specific knowledge. We also
conducted a run by adding all types of domain-specific knowledge.
Results of these experiments are shown in Table 4.2.2.
We found that any available type of domain-specific
knowledge improved the performance in passage retrieval. The
biggest improvement comes from the lexical variants, which is
consistent with the result reported in [3]. This result also indicates
that biologists are likely to use different variants of the same
concept according to their own writing preferences and these
variants might not be collected in the existing biomedical
thesauruses. It also suggests that the biomedical IR systems can
benefit from the domain-specific knowledge extracted from the
literature by text mining systems.
Synonyms provided the second biggest improvement.
Hypernyms, hyponyms, and implicitly related concepts provided
similar degrees of improvement. The overall performance is a cumulative result of adding the different types of domain-specific knowledge, and it is better than any individual addition. It is clearly shown that the performance is significantly improved (107% on the passage level, 63.1% on the aspect level, and 49.6% on the document level) when the domain-specific knowledge is appropriately incorporated. Although it is not explicitly shown in Table 4.2.2, different types of domain-specific knowledge affect different subsets of queries. More specifically, each of these types (with the exception of the lexical variants, which affect a large number of queries) affects only a few queries. But for those affected queries, the improvement is significant. As a consequence, the cumulative improvement is very significant.
4.2.3 Pseudo-feedback and abbreviation correction
Using the Baseline+All in Table 4.2.2 as a new baseline, the
contribution of abbreviation correction and pseudo-feedback is
given in Table 4.2.3. There is little improvement by avoiding
incorrect matching of abbreviations. The pseudo-feedback
contributed about 4.6% improvement in passage retrieval.
4.2.4 Performance compared with best-reported results
We compared our result with the results reported in the Genomics
track of TREC 2006 [8] on the conditions that 1) systems are
automatic systems and 2) passages are extracted from paragraphs.
The performance of our system relative to the best reported results
is shown in Table 4.2.4 (in TREC 2006, some systems returned the
whole paragraphs as passages. As a consequence, excellent
retrieval results were obtained on document and aspect levels at
the expense of performance on the passage level. We do not
include the results of such systems here).
Table 4.2.4 Performance compared with best-reported results.
Passage MAP Aspect MAP Document MAP
Best reported results 0.1486 0.3492 0.5320
Our results 0.1823 0.3811 0.5391
Improvement 22.68% 9.14% 1.33%
The best reported results in the first row of Table 4.2.4 on three
levels (passage, aspect, and document) are from different systems.
Our result is from a single run on passage retrieval in which it is
better than the best reported result by 22.68% in passage retrieval
and at the same time, 9.14% better in aspect retrieval, and 1.33%
better in document retrieval (Since the average precision of each
individual query was not reported, we cannot apply the Wilcoxon
signed-rank test to calculate the significance of difference between
our performance and the best reported result.).
Table 4.2.1 Basic conceptual IR model vs. term-based model
Run Passage Aspect Document
MAP Imprvd qs # (%) MAP Imprvd qs # (%) MAP Imprvd qs # (%)
Okapi 0.064 N/A 0.175 N/A 0.285 N/A
Basic conceptual IR model 0.084* (+31.3%) 17 (65.4%) 0.233* (+33.1%) 12 (46.2%) 0.359* (+26.0%) 15 (57.7%)
Table 4.2.2 Contribution of different types of domain-specific knowledge
Run Passage Aspect Document
MAP Imprvd qs # (%) MAP Imprvd qs # (%) MAP Imprvd qs # (%)
Baseline
= Basic conceptual IR model
0.084 N/A 0.233 N/A 0.359 N/A
Baseline+Synonyms 0.105 (+25%) 11 (42.3%) 0.246 (+5.6%) 9 (34.6%) 0.420 (+17%) 13 (50%)
Baseline+Hypernyms 0.088 (+4.8%) 11 (42.3%) 0.225 (-3.4%) 9 (34.6%) 0.390 (+8.6%) 16 (61.5%)
Baseline+Hyponyms 0.087 (+3.6%) 10 (38.5%) 0.217 (-6.9%) 7 (26.9%) 0.389 (+8.4%) 10 (38.5%)
Baseline+Variants 0.150* (+78.6%) 16 (61.5%) 0.348* (+49.4%) 13 (50%) 0.495* (+37.9%) 10 (38.5%)
Baseline+Related 0.086 (+2.4%) 9 (34.6%) 0.220 (-5.6%) 9 (34.6%) 0.387 (+7.8%) 13 (50%)
Baseline+All 0.174* (107%) 25 (96.2%) 0.380* (+63.1%) 19 (73.1%) 0.537* (+49.6%) 14 (53.8%)
Table 4.2.3 Contribution of abbreviation correction and pseudo-feedback
Run Passage Aspect Document
MAP Imprvd qs # (%) MAP Imprvd qs # (%) MAP Imprvd qs # (%)
Baseline+All 0.174 N/A 0.380 N/A 0.537 N/A
Baseline+All+Abbr 0.175 (+0.6%) 5 (19.2%) 0.375 (-1.3%) 4 (15.4%) 0.535 (-0.4%) 4 (15.4%)
Baseline+All+Abbr+PF 0.182 (+4.6%) 10 (38.5%) 0.381 (+0.3%) 6 (23.1%) 0.539 (+0.4%) 9 (34.6%)
A separate experiment has been done using a second testbed, the
ad-hoc Task of TREC Genomics 2005, to evaluate our
knowledge-intensive conceptual IR model for document retrieval
of biomedical literature. The overall performance in terms of MAP
is 35.50%, which is about 22.92% above the best reported result
[9]. Notice that the performance was only measured on the
document level for the ad-hoc Task of TREC Genomics 2005.
5. RELATED WORKS
Many studies used manually-crafted thesauruses or knowledge
databases created by text mining systems to improve retrieval
effectiveness based on either word-statistical retrieval systems or
conceptual retrieval systems.
[11][1] assessed query expansion using the UMLS
Metathesaurus. Based on a word-statistical retrieval system, [11]
used definitions and different types of thesaurus relationships for
query expansion and a deteriorated performance was reported. [1]
expanded queries with phrases and UMLS concepts determined by
the MetaMap, a program which maps biomedical text to UMLS
concepts, and no significant improvement was shown. We used
MeSH, Entrez gene, and other non-thesaurus knowledge resources
such as an abbreviation database for query expansion. A critical
difference between our work and those in [11][1] is that our
retrieval model is based on concepts, not on individual words.
The Genomics track in TREC provides a common platform to
evaluate methods and techniques proposed by various groups for
biomedical information retrieval. As summarized in [8][9][10],
many groups utilized domain-specific knowledge to improve
retrieval effectiveness. Among these groups, [3] assessed both
thesaurus-based knowledge, such as gene information, and non
thesaurus-based knowledge, such as lexical variants of gene
symbols, for query expansion. They have shown that query
expansion with acronyms and lexical variants of gene symbols
produced the biggest improvement, whereas, the query expansion
with gene information from gene databases deteriorated the
performance. [21] used a similar approach for generating lexical
variants of gene symbols and reported significant improvements.
Our system utilized more types of domain-specific knowledge,
including hyponyms, hypernyms and implicitly related concepts.
In addition, under the conceptual retrieval framework, we
examined more comprehensively the effects of different types of
domain-specific knowledge in performance contribution.
[20][15] utilized WordNet, a database of English words and
their lexical relationships developed by Princeton University, for
query expansion in the non-biomedical domain. In their studies,
queries were expanded using the lexical semantic relations such as
synonyms, hypernyms, or hyponyms. Little benefit was shown in [20], largely due to the ambiguity of the query terms, which have different meanings in different contexts. When synonyms with multiple meanings are added to the query, a substantial number of irrelevant documents is retrieved. In the biomedical domain, this kind of ambiguity of query terms is relatively less frequent because, although abbreviations are highly ambiguous, general biomedical concepts usually have only one meaning in a thesaurus such as UMLS, whereas a term in WordNet usually has multiple meanings (represented as synsets in WordNet). Besides, we have implemented a post-ranking step to reduce the number of incorrect matches of abbreviations, which will hopefully decrease the negative impact caused by the abbreviation ambiguity. The retrieval model in [15]
emphasized the similarity between a query and a document on the
phrase level assuming that phrases are more important than
individual words when retrieving documents. Although the
assumption is similar, our conceptual model is based on the
biomedical concepts, not phrases.
[13] presented a good study of the role of knowledge in the
document retrieval of clinical medicine. They have shown that
appropriate use of semantic knowledge in a conceptual retrieval
framework can yield substantial improvements. Although the
retrieval model is similar, we made a study in the domain of
genomics, in which the problem structure and task knowledge are not as well defined as in the domain of clinical medicine [18].
Also, our similarity function is very different from that in [13].
In summary, our approach differs from previous works in four
important ways: First, we present a case study of conceptual
retrieval in the domain of genomics, where many knowledge
resources can be used to improve the performance of biomedical
IR systems. Second, we have studied more types of
domain-specific knowledge than previous researchers and carried out more
comprehensive experiments to look into the effects of different
types of domain-specific knowledge in performance contribution.
Third, although some of the techniques seem similar to previously
published ones, they are actually quite different in details. For
example, in our pseudo-feedback process, we require that the unit
of feedback is a concept and the concept has to be of the same
semantic type as a query concept. This is to ensure that our
conceptual model of retrieval can be applied. As another example,
the way in which implicitly related concepts are extracted in this
paper is significantly different from that given in [19]. Finally, our
conceptual IR model is actually based on complex concepts
because some biomedical meanings, such as biological processes,
are represented by multiple simple concepts.
6. CONCLUSION
This paper proposed a conceptual approach that utilizes domain-specific knowledge in an IR system to improve its effectiveness in
retrieving biomedical literature. We specified five different types
of domain-specific knowledge (i.e., synonyms, hyponyms,
hypernyms, lexical variants, and implicitly related concepts) and
examined their effects in performance contribution. We also
evaluated two other techniques, pseudo-feedback and abbreviation
correction. Experimental results have shown that appropriate use
of domain-specific knowledge in a conceptual IR model yields
significant improvements (23%) in passage retrieval over the best
known results. In our future work, we will explore the use of other
existing knowledge resources, such as UMLS and the Wikipedia,
and evaluate techniques such as disambiguation of gene symbols
for improving retrieval effectiveness. The application of our
conceptual IR model in other domains such as clinical medicine
will be investigated.
7. ACKNOWLEDGMENTS
insightful discussion.
8. REFERENCES
[1] Aronson A.R., Rindflesch T.C. Query expansion using the
UMLS Metathesaurus. Proc AMIA Annu Fall Symp. 1997.
485-9.
[2] Baeza-Yates R., Ribeiro-Neto B. Modern Information
Retrieval. Addison-Wesley, 1999, 129-131.
[3] Buttcher S., Clarke C.L.A., Cormack G.V. Domain-specific
synonym expansion and validation for biomedical
information retrieval (MultiText experiments for TREC
2004). TREC'04.
[4] Chang J.T., Schutze H., Altman R.B. Creating an online
dictionary of abbreviations from MEDLINE. Journal of the
American Medical Informatics Association. 2002 9(6).
[5] Church K.W., Hanks P. Word association norms, mutual information and lexicography. Computational Linguistics. 1990;16(1):22-29.
[6] Fontelo P., Liu F., Ackerman M. askMEDLINE: a free-text,
natural language query tool for MEDLINE/PubMed. BMC
Med Inform Decis Mak. 2005 Mar 10;5(1):5.
[7] Fukuda K., Tamura A., Tsunoda T., Takagi T. Toward
information extraction: identifying protein names from
biological papers. Pac Symp Biocomput. 1998;:707-18.
[8] Hersh W.R., et al. TREC 2006 Genomics Track Overview. TREC'06.
[9] Hersh W.R., et al. TREC 2005 Genomics Track Overview. TREC'05.
[10] Hersh W.R., et al. TREC 2004 Genomics Track Overview. TREC'04.
[11] Hersh W.R., Price S., Donohoe L. Assessing thesaurus-based
query expansion using the UMLS Metathesaurus. Proc AMIA
Symp. 344-8. 2000.
[12] Levenshtein, V. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics - Doklady 10, 10 (1966), 707-710.
[13] Lin J., Demner-Fushman D. The Role of Knowledge in
Conceptual Retrieval: A Study in the Domain of Clinical
Medicine. SIGIR'06. 99-106.
[14] Lindberg D., Humphreys B., and McCray A. The Unified
Medical Language System. Methods of Information in
Medicine. 32(4):281-291, 1993.
[15] Liu S., Liu F., Yu C., and Meng W.Y. An Effective
Approach to Document Retrieval via Utilizing WordNet and
Recognizing Phrases. SIGIR'04. 266-272.
[16] Proux D., Rechenmann F., Julliard L., Pillet V.V., Jacq B.
Detecting Gene Symbols and Names in Biological Texts: A
First Step toward Pertinent Information Extraction. Genome
Inform Ser Workshop Genome Inform. 1998;9:72-80.
[17] Robertson S.E., Walker S. Okapi/Keenbow at TREC-8. NIST
Special Publication 500-246: TREC 8.
[18] Sackett D.L., et al. Evidence-Based Medicine: How to
Practice and Teach EBM. Churchill Livingstone. Second
edition, 2000.
[19] Swanson,D.R., Smalheiser,N.R. An interactive system for
finding complementary literatures: a stimulus to scientific
discovery. Artificial Intelligence, 1997; 91,183-203.
[20] Voorhees E. Query expansion using lexical-semantic
relations. SIGIR 1994. 61-9
[21] Zhong M., Huang X.J. Concept-based biomedical text
retrieval. SIGIR'06. 723-724.
[22] Zhou W., Torvik V.I., Smalheiser N.R. ADAM: Another
Database of Abbreviations in MEDLINE. Bioinformatics.
2006; 22(22): 2813-2818. | keyword search;document collection;document map;domain-specific knowledge;retrieval model;biomedical document;document retrieval;query concept;conceptual ir model;passage-level information retrieval;passage map;passage extraction;aspect map |
train_H-60 | A Frequency-based and a Poisson-based Definition of the Probability of Being Informative | This paper reports on theoretical investigations about the assumptions underlying the inverse document frequency (idf ). We show that an intuitive idf -based probability function for the probability of a term being informative assumes disjoint document events. By assuming documents to be independent rather than disjoint, we arrive at a Poisson-based probability of being informative. The framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models. | 1. INTRODUCTION AND BACKGROUND
The inverse document frequency (idf ) is one of the most
successful parameters for a relevance-based ranking of
retrieved objects. With N being the total number of
documents, and n(t) being the number of documents in which
term t occurs, the idf is defined as follows:
idf(t) := − log( n(t) / N ),   0 <= idf(t) < ∞
Ranking based on the sum of the idf-values of the query terms that occur in the retrieved documents works well; this has been shown in numerous applications. Also, it is well
known that the combination of a document-specific term
weight and idf works better than idf alone. This approach
is known as tf-idf , where tf(t, d) (0 <= tf(t, d) <= 1) is
the so-called term frequency of term t in document d. The
idf reflects the discriminating power (informativeness) of a
term, whereas the tf reflects the occurrence of a term.
The idf alone works better than the tf alone does. An
explanation might be the problem of tf with terms that occur
in many documents; let us refer to those terms as noisy
terms. We use the notion of noisy terms rather than
frequent terms since frequent terms leaves open whether we
refer to the document frequency of a term in a collection or
to the so-called term frequency (also referred to as
withindocument frequency) of a term in a document. We
associate noise with the document frequency of a term in a
collection, and we associate occurrence with the
withindocument frequency of a term. The tf of a noisy term might
be high in a document, but noisy terms are not good
candidates for representing a document. Therefore, the removal
of noisy terms (known as stopword removal) is essential
when applying tf . In a tf-idf approach, the removal of
stopwords is conceptually obsolete, if stopwords are just words
with a low idf .
From a probabilistic point of view, tf is a value with a
frequency-based probabilistic interpretation whereas idf has
an informative rather than a probabilistic interpretation.
The missing probabilistic interpretation of idf is a problem
in probabilistic retrieval models where we combine uncertain
knowledge of different dimensions (e.g.: informativeness of
terms, structure of documents, quality of documents, age
of documents, etc.) such that a good estimate of the
probability of relevance is achieved. An intuitive solution is a
normalisation of idf such that we obtain values in the
interval [0; 1]. For example, consider a normalisation based on
the maximal idf -value. Let T be the set of terms occurring
in a collection.
Pfreq(t is informative) := idf(t) / maxidf
maxidf := max({idf(t) | t ∈ T}), maxidf <= − log(1/N)
minidf := min({idf(t) | t ∈ T}), minidf >= 0
minidf / maxidf ≤ Pfreq(t is informative) ≤ 1.0
This frequency-based probability function covers the interval
[0; 1] if the minimal idf is equal to zero, which is the case
if we have at least one term that occurs in all documents.
Can we interpret Pfreq , the normalised idf , as the probability
that the term is informative?
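A small numerical sketch of this normalisation; the document frequencies are illustrative.

import math

def idf(n_t, N):
    return -math.log(n_t / N)

N = 1000
doc_freq = {"the": 1000, "retrieval": 120, "poisson": 3}   # hypothetical document frequencies
maxidf = max(idf(n, N) for n in doc_freq.values())
for t, n in doc_freq.items():
    print(t, round(idf(n, N) / maxidf, 3))
# a term occurring in every document gets P_freq = 0; the rarest term gets P_freq = 1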
When investigating the probabilistic interpretation of the
normalised idf , we made several observations related to
disjointness and independence of document events. These
observations are reported in section 3. We show in section 3.1
that the frequency-based noise probability n(t)/N used in the classic idf definition can be explained by three assumptions: binary term occurrence, constant document containment and disjointness of document containment events. In section 3.2 we show that by assuming independence of documents, we obtain 1 − e^{-1} ≈ 1 − 0.37 as the upper bound of the noise probability of a term. The value e^{-1} is related to the
logarithm and we investigate in section 3.3 the link to
information theory. In section 4, we link the results of the previous
sections to probability theory. We show the steps from
possible worlds to binomial distribution and Poisson distribution.
In section 5, we emphasise that the theoretical framework
of this paper is applicable for both idf and tf . Finally, in
section 6, we base the definition of the probability of
being informative on the results of the previous sections and
compare frequency-based and Poisson-based definitions.
2. BACKGROUND
The relationship between frequencies, probabilities and
information theory (entropy) has been the focus of many
researchers. In this background section, we focus on work
that investigates the application of the Poisson distribution
in IR since a main part of the work presented in this paper
addresses the underlying assumptions of Poisson.
[4] proposes a 2-Poisson model that takes into account
the different nature of relevant and non-relevant documents,
rare terms (content words) and frequent terms (noisy terms,
function words, stopwords). [9] shows experimentally that
most of the terms (words) in a collection are distributed
according to a low dimension n-Poisson model. [10] uses a
2-Poisson model for including term frequency-based
probabilities in the probabilistic retrieval model. The non-linear
scaling of the Poisson function showed significant
improvement compared to a linear frequency-based probability. The
Poisson model was here applied to the term frequency of a
term in a document. We will generalise the discussion by
pointing out that document frequency and term frequency
are dual parameters in the collection space and the
document space, respectively. Our discussion of the Poisson
distribution focuses on the document frequency in a collection
rather than on the term frequency in a document.
[7] and [6] address the deviation of idf and Poisson, and
apply Poisson mixtures to achieve better Poisson-based
estimates. The results proved again experimentally that a
onedimensional Poisson does not work for rare terms, therefore
Poisson mixtures and additional parameters are proposed.
[3], section 3.3, illustrates and summarises
comprehensively the relationships between frequencies, probabilities
and Poisson. Different definitions of idf are put into
context and a notion of noise is defined, where noise is viewed
as the complement of idf . We use in our paper a different
notion of noise: we consider a frequency-based noise that
corresponds to the document frequency, and we consider a
term noise that is based on the independence of document
events.
[11], [12], [8] and [1] link frequencies and probability
estimation to information theory. [12] establishes a framework
in which information retrieval models are formalised based
on probabilistic inference. A key component is the use of a
space of disjoint events, where the framework mainly uses
terms as disjoint events. The probability of being
informative defined in our paper can be viewed as the probability
of the disjoint terms in the term space of [12].
[8] address entropy and bibliometric distributions.
Entropy is maximal if all events are equiprobable, and the frequency-based Lotka law (N/i^λ is the number of scientists that have written i publications, where N and λ are distribution parameters), Zipf and the Pareto distribution are related. The Pareto distribution is the continuous case of the Lotka distribution, and Lotka and Zipf can be shown to be equivalent. The Pareto distribution is used by [2] for term frequency normalisation.
The Pareto distribution compares to the Poisson
distribution in the sense that Pareto is fat-tailed, i. e. Pareto
assigns larger probabilities to large numbers of events than
Poisson distributions do. This makes Pareto interesting
since Poisson is felt to be too radical on frequent events.
We restrict in this paper to the discussion of Poisson,
however, our results show that indeed a smoother distribution
than Poisson promises to be a good candidate for improving
the estimation of probabilities in information retrieval.
[1] establishes a theoretical link between tf-idf and
information theory and the theoretical research on the meaning
of tf-idf clarifies the statistical model on which the different
measures are commonly based. This motivation matches
the motivation of our paper: We investigate theoretically
the assumptions of classical idf and Poisson for a better
understanding of parameter estimation and combination.
3. FROM DISJOINT TO INDEPENDENT
We define and discuss in this section three probabilities:
The frequency-based noise probability (definition 1), the
total noise probability for disjoint documents (definition 2).
and the noise probability for independent documents
(definition 3).
3.1 Binary occurrence, constant containment
and disjointness of documents
We show in this section, that the frequency-based noise
probability n(t)
N
in the idf definition can be explained as
a total probability with binary term occurrence, constant
document containment and disjointness of document
containments.
We refer to a probability function as binary if for all events
the probability is either 1.0 or 0.0. The occurrence
probability P(t|d) is binary, if P(t|d) is equal to 1.0 if t ∈ d, and
P(t|d) is equal to 0.0, otherwise.
P(t|d) is binary : ⇐⇒ P(t|d) = 1.0 ∨ P(t|d) = 0.0
We refer to a probability function as constant if for all
events the probability is equal. The document containment
probability reflect the chance that a document occurs in a
collection. This containment probability is constant if we
have no information about the document containment or
we ignore that documents differ in containment.
Containment could be derived, for example, from the size, quality,
age, links, etc. of a document. For a constant containment
in a collection with N documents, 1/N is often assumed as
the containment probability. We generalise this definition
and introduce the constant λ where 0 ≤ λ ≤ N. The
containment of a document d depends on the collection c; this is reflected by the notation P(d|c) used for the containment
of a document.
P(d|c) is constant : ⇐⇒ ∀d : P(d|c) = λ/N
For disjoint documents that cover the whole event space, we set λ = 1 and obtain Σ_d P(d|c) = 1.0. Next, we define
the frequency-based noise probability and the total noise
probability for disjoint documents. We introduce the event
notation t is noisy and t occurs for making the difference
between the noise probability P(t is noisy|c) in a collection
and the occurrence probability P(t occurs|d) in a document
more explicit, thereby keeping in mind that the noise
probability corresponds to the occurrence probability of a term
in a collection.
Definition 1. The frequency-based term noise probability:
Pfreq(t is noisy|c) := n(t) / N
Definition 2. The total term noise probability for disjoint documents:
Pdis(t is noisy|c) := Σ_d P(t occurs|d) · P(d|c)
Now, we can formulate a theorem that makes assumptions
explicit that explain the classical idf .
Theorem 1. IDF assumptions: If the occurrence
probability P(t|d) of term t over documents d is binary, and
the containment probability P(d|c) of documents d is
constant, and document containments are disjoint events, then
the noise probability for disjoint documents is equal to the
frequency-based noise probability.
Pdis (t is noisy|c) = Pfreq (t is noisy|c)
Proof. The assumptions are:
∀d : (P(t occurs|d) = 1 ∨ P(t occurs|d) = 0)  ∧  P(d|c) = λ/N  ∧  Σ_d P(d|c) = 1.0
We obtain:
Pdis(t is noisy|c) = Σ_{d | t ∈ d} 1/N = n(t)/N = Pfreq(t is noisy|c)
The above result is not a surprise but it is a
mathematical formulation of assumptions that can be used to explain
the classical idf . The assumptions make explicit that the
different types of term occurrence in documents (frequency
of a term, importance of a term, position of a term,
document part where the term occurs, etc.) and the different
types of document containment (size, quality, age, etc.) are
ignored, and document containments are considered as
disjoint events.
From the assumptions, we can conclude that idf
(frequencybased noise, respectively) is a relatively simple but strict
estimate. Still, idf works well. This could be explained
by a leverage effect that justifies the binary occurrence and
constant containment: The term occurrence for small
documents tends to be larger than for large documents, whereas
the containment for small documents tends to be smaller
than for large documents. From that point of view, idf
means that P(t ∧ d|c) is constant for all d in which t occurs,
and P(t ∧ d|c) is zero otherwise. The occurrence and
containment can be term specific. For example, set P(t∧d|c) =
1/ND(c) if t occurs in d, where ND(c) is the number of
documents in collection c (we used before just N). We choose a
document-dependent occurrence P(t|d) := 1/NT (d), i. e. the
occurrence probability is equal to the inverse of NT (d), which
is the total number of terms in document d. Next, we choose
the containment P(d|c) := NT (d)/NT (c)·NT (c)/ND(c) where
NT (d)/NT (c) is a document length normalisation (number
of terms in document d divided by the number of terms in
collection c), and NT (c)/ND(c) is a constant factor of the
collection (number of terms in collection c divided by the
number of documents in collection c). We obtain P(t∧d|c) =
1/ND(c).
In a tf-idf -retrieval function, the tf -component reflects
the occurrence probability of a term in a document. This is
a further explanation why we can estimate the idf with a
simple P(t|d), since the combined tf-idf contains the
occurrence probability. The containment probability corresponds
to a document normalisation (document length
normalisation, pivoted document length) and is normally attached to
the tf -component or the tf-idf -product.
The disjointness assumption is typical for frequency-based
probabilities. From a probability theory point of view, we
can consider documents as disjoint events, in order to achieve
a sound theoretical model for explaining the classical idf .
But does disjointness reflect the real world where the
containment of a document appears to be independent of the
containment of another document? In the next section, we
replace the disjointness assumption by the independence
assumption.
3.2 The upper bound of the noise probability
for independent documents
For independent documents, we compute the probability
of a disjunction as usual, namely as the complement of the
probability of the conjunction of the negated events:
P(d1 ∨ . . . ∨ dN) = 1 − P(¬d1 ∧ . . . ∧ ¬dN) = 1 − Π_d (1 − P(d))
The noise probability can be considered as the conjunction
of the term occurrence and the document containment.
P(t is noisy|c) := P(t occurs ∧ (d1 ∨ . . . ∨ dN )|c)
For disjoint documents, this view of the noise probability
led to definition 2. For independent documents, we use now
the conjunction of negated events.
Definition 3. The term noise probability for independent documents:
Pin(t is noisy|c) := 1 − Π_d (1 − P(t occurs|d) · P(d|c))
With binary occurrence and a constant containment P(d|c) := λ/N, we obtain the term noise of a term t that occurs in n(t) documents:
Pin(t is noisy|c) = 1 − (1 − λ/N)^{n(t)}
For binary occurrence and disjoint documents, the
containment probability was 1/N. Now, with independent
documents, we can use λ as a collection parameter that controls
the average containment probability. We show through the
next theorem that the upper bound of the noise probability
depends on λ.
Theorem 2. The upper bound of being noisy: If the
occurrence P(t|d) is binary, and the containment P(d|c)
is constant, and document containments are independent
events, then 1 − e^{−λ} is the upper bound of the noise probability.
∀t : Pin(t is noisy|c) < 1 − e^{−λ}
Proof. The upper bound of the independent noise probability follows from the limit lim_{N→∞} (1 + x/N)^N = e^x (see any comprehensive math book, for example, [5], for the convergence equation of the Euler function). With x = −λ, we obtain:
lim_{N→∞} (1 − λ/N)^N = e^{−λ}
For the term noise, we have:
Pin(t is noisy|c) = 1 − (1 − λ/N)^{n(t)}
Pin(t is noisy|c) is strictly monotonous: the noise of a term t_n that occurs in n documents is less than the noise of a term t_{n+1} that occurs in n + 1 documents. Therefore, a term with n = N has the largest noise probability. For a collection with infinitely many documents, the upper bound of the noise probability for terms t_N that occur in all documents becomes:
lim_{N→∞} Pin(t_N is noisy) = lim_{N→∞} 1 − (1 − λ/N)^N = 1 − e^{−λ}
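The convergence used in the proof is easy to check numerically; a small sketch for λ = 1:

import math

lam = 1.0
for N in (10, 100, 1_000, 10_000, 100_000):
    print(N, round(1 - (1 - lam / N) ** N, 6))   # noise of a term occurring in all N documents
print("1 - e^-lambda =", round(1 - math.exp(-lam), 6))
# 1 - (1 - lam/N)^N converges to 1 - e^-1 ≈ 0.632121 as N grows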
By applying an independence rather than a disjointness assumption, we obtain the probability e^{−1} (for λ = 1) that a term is not noisy
even if the term does occur in all documents. In the disjoint
case, the noise probability is one for a term that occurs in
all documents.
If we view P(d|c) := λ/N as the average containment,
then λ is large for a term that occurs mostly in large
documents, and λ is small for a term that occurs mostly in small
documents. Thus, the noise of a term t is large if t occurs in
n(t) large documents and the noise is smaller if t occurs in
small documents. Alternatively, we can assume a constant
containment and a term-dependent occurrence. If we
assume P(d|c) := 1, then P(t|d) := λ/N can be interpreted as
the average probability that t represents a document. The
common assumption is that the average containment or
occurrence probability is proportional to n(t). However, here
is additional potential: The statistical laws (see [3] on Luhn
and Zipf) indicate that the average probability could follow
a normal distribution, i. e. small probabilities for small n(t)
and large n(t), and larger probabilities for medium n(t).
For the monotonous case we investigate here, the noise of
a term with n(t) = 1 is equal to 1 − (1 − λ/N) = λ/N and
the noise of a term with n(t) = N is close to 1− e−λ
. In the
next section, we relate the value e−λ
to information theory.
3.3 The probability of a maximal informative
signal
The probability e−1
is special in the sense that a signal
with that probability is a signal with maximal information as
derived from the entropy definition. Consider the definition
of the entropy contribution H(t) of a signal t.
H(t) := P(t) · (− ln P(t))
We form the first derivative for computing the optimum.
∂H(t)/∂P(t) = − ln P(t) + (−1/P(t)) · P(t) = −(1 + ln P(t))
For obtaining optima, we use:
0 = −(1 + ln P(t))
The entropy contribution H(t) is maximal for P(t) = e^{−1}. This result does not depend on the base of the logarithm, as we see next:
∂H(t)/∂P(t) = − log_b P(t) + (−1/(P(t) · ln b)) · P(t) = −( 1/ln b + log_b P(t) ) = −(1 + ln P(t)) / ln b
We summarise this result in the following theorem:
Theorem 3. The probability of a maximal
informative signal: The probability Pmax = e^{−1} ≈ 0.37 is the probability of a maximal informative signal. The entropy of a maximal informative signal is Hmax = e^{−1}.
Proof. The probability and entropy follow from the
derivation above.
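A quick numerical check of Theorem 3 by a grid search over P(t):

import math

best_p = max((p / 10_000 for p in range(1, 10_000)),
             key=lambda p: -p * math.log(p))        # H(t) = -P(t) ln P(t)
print(round(best_p, 4), round(math.exp(-1), 4))     # both approximately 0.3679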
The complement of the maximal noise probability is e−λ
and we are looking now for a generalisation of the entropy
definition such that e−λ
is the probability of a maximal
informative signal. We can generalise the entropy definition
by computing the integral of λ+ ln P(t), i. e. this derivation
is zero for e−λ
. We obtain a generalised entropy:
−(λ + ln P(t)) d(P(t)) = P(t) · (1 − λ − ln P(t))
The generalised entropy corresponds for λ = 1 to the
classical entropy. By moving from disjoint to independent
documents, we have established a link between the complement
of the noise probability of a term that occurs in all
documents and information theory. Next, we link independent
documents to probability theory.
4. THE LINK TO PROBABILITY THEORY
We review for independent documents three concepts of
probability theory: possible worlds, binomial distribution
and Poisson distribution.
4.1 Possible Worlds
Each conjunction of document events (for each document,
we consider two document events: the document can be
true or false) is associated with a so-called possible world.
For example, consider the eight possible worlds for three
documents (N = 3).
world w conjunction
w7 d1 ∧ d2 ∧ d3
w6 d1 ∧ d2 ∧ ¬d3
w5 d1 ∧ ¬d2 ∧ d3
w4 d1 ∧ ¬d2 ∧ ¬d3
w3 ¬d1 ∧ d2 ∧ d3
w2 ¬d1 ∧ d2 ∧ ¬d3
w1 ¬d1 ∧ ¬d2 ∧ d3
w0 ¬d1 ∧ ¬d2 ∧ ¬d3
With each world w, we associate a probability µ(w), which
is equal to the product of the single probabilities of the
document events.
world w    probability µ(w)
w7    (λ/N)^3 · (1 − λ/N)^0
w6    (λ/N)^2 · (1 − λ/N)^1
w5    (λ/N)^2 · (1 − λ/N)^1
w4    (λ/N)^1 · (1 − λ/N)^2
w3    (λ/N)^2 · (1 − λ/N)^1
w2    (λ/N)^1 · (1 − λ/N)^2
w1    (λ/N)^1 · (1 − λ/N)^2
w0    (λ/N)^0 · (1 − λ/N)^3
The sum over the possible worlds in which k documents are
true and N −k documents are false is equal to the
probability function of the binomial distribution, since the binomial
coefficient yields the number of possible worlds in which k
documents are true.
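A short sketch that enumerates the possible worlds for N = 3 and sums µ(w) by the number of true documents, recovering the binomial probabilities of the next section:

from itertools import product
from math import comb

N, lam = 3, 1.0
p = lam / N
by_k = {k: 0.0 for k in range(N + 1)}
for world in product([True, False], repeat=N):      # the 2^N possible worlds
    mu = 1.0
    for d_true in world:
        mu *= p if d_true else (1 - p)
    by_k[sum(world)] += mu
for k in range(N + 1):
    print(k, round(by_k[k], 6), round(comb(N, k) * p**k * (1 - p)**(N - k), 6))  # identical columns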
4.2 Binomial distribution
The binomial probability function yields the probability
that k of N events are true where each event is true with
the single event probability p.
P(k) := binom(N, k, p) := C(N, k) · p^k · (1 − p)^{N−k}
The single event probability is usually defined as p := λ/N,
i. e. p is inversely proportional to N, the total number of
events. With this definition of p, we obtain for an infinite
number of documents the following limit for the product of
the binomial coefficient and p^k:
lim_{N→∞} C(N, k) · p^k = lim_{N→∞} [ N · (N−1) · . . . · (N−k+1) / k! ] · (λ/N)^k = λ^k / k!
The limit is close to the actual value for k << N. For large
k, the actual value is smaller than the limit.
The limit of (1 − p)^{N−k} follows from the limit lim_{N→∞} (1 + x/N)^N = e^x.
lim_{N→∞} (1 − p)^{N−k} = lim_{N→∞} (1 − λ/N)^{N−k} = lim_{N→∞} e^{−λ} · (1 − λ/N)^{−k} = e^{−λ}
Again, the limit is close to the actual value for k << N. For
large k, the actual value is larger than the limit.
4.3 Poisson distribution
For an infinite number of events, the Poisson probability
function is the limit of the binomial probability function.
lim_{N→∞} binom(N, k, p) = (λ^k / k!) · e^{−λ}
P(k) = poisson(k, λ) := (λ^k / k!) · e^{−λ}
The probability poisson(0, 1) is equal to e^{-1}, which is the
probability of a maximal informative signal. This shows
the relationship of the Poisson distribution and information
theory.
After seeing the convergence of the binomial distribution,
we can choose the Poisson distribution as an approximation
of the independent term noise probability. First, we define
the Poisson noise probability:
Definition 4. The Poisson term noise probability:
Ppoi(t is noisy|c) := e^{−λ} · Σ_{k=1}^{n(t)} λ^k / k!
For independent documents, the Poisson distribution
approximates the probability of the disjunction for large n(t),
since the independent term noise probability is equal to the
sum over the binomial probabilities where at least one of
n(t) document containment events is true.
Pin(t is noisy|c) = Σ_{k=1}^{n(t)} (n(t) choose k) · p^k · (1 − p)^{N−k}
Pin (t is noisy|c) ≈ Ppoi (t is noisy|c)
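The quality of this approximation can be inspected with a few lines of code; the sketch below (illustrative; N = 10,000 and λ = ln N merely mirror the setting used in the figures further below) computes the independence-based noise in its closed form 1 − (1 − p)^{n(t)} and the Poisson-based noise of Definition 4:

```python
import math

N = 10_000            # total number of documents (illustrative)
lam = math.log(N)     # lambda = ln(N), roughly 9.2
p = lam / N           # single document containment probability

def independence_noise(n_t):
    # Probability that at least one of n(t) independent containment events is true.
    return 1.0 - (1.0 - p) ** n_t

def poisson_noise(n_t):
    # Definition 4: e^{-lambda} * sum_{k=1}^{n(t)} lambda^k / k!
    total, term = 0.0, 1.0
    for k in range(1, n_t + 1):
        term *= lam / k
        total += term
    return math.exp(-lam) * total

for n_t in (10, 100, 1_000, 5_000, 10_000):
    print(n_t, round(independence_noise(n_t), 4), round(poisson_noise(n_t), 4))
# Both probabilities approach 1 - e^{-lambda} as n(t) grows towards N;
# for small n(t) the Poisson value is only a rough approximation.
```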
We have defined a frequency-based and a Poisson-based
probability of being noisy, where the latter is the limit of the
independence-based probability of being noisy. Before we
present in the final section the usage of the noise
probability for defining the probability of being informative, we
emphasise in the next section that the results apply to the
collection space as well as to the document space.
5. THE COLLECTION SPACE AND THE
DOCUMENT SPACE
Consider the dual definitions of retrieval parameters in
table 1. We associate a collection space D × T with a
collection c where D is the set of documents and T is the set
of terms in the collection. Let ND := |D| and NT := |T|
be the number of documents and terms, respectively. We
consider a document as a subset of T and a term as a subset
of D. Let nT (d) := |{t|d ∈ t}| be the number of terms that
occur in the document d, and let nD(t) := |{d|t ∈ d}| be the
number of documents that contain the term t.
In a dual way, we associate a document space L × T with
a document d where L is the set of locations (also referred
to as positions, however, we use the letters L and l and not
P and p for avoiding confusion with probabilities) and T is
the set of terms in the document. The document dimension
in a collection space corresponds to the location (position)
dimension in a document space.
The definition makes explicit that the classical notion of
term frequency of a term in a document (also referred to as
the within-document term frequency) actually corresponds
to the location frequency of a term in a document. For the
parameter | collection space | document space
dimensions | documents and terms | locations and terms
document/location frequency | nD(t, c): number of documents in which term t occurs in collection c | nL(t, d): number of locations (positions) at which term t occurs in document d
 | ND(c): number of documents in collection c | NL(d): number of locations (positions) in document d
term frequency | nT(d, c): number of terms that document d contains in collection c | nT(l, d): number of terms that location l contains in document d
 | NT(c): number of terms in collection c | NT(d): number of terms in document d
noise/occurrence | P(t|c) (term noise) | P(t|d) (term occurrence)
containment | P(d|c) (document) | P(l|d) (location)
informativeness | −ln P(t|c) | −ln P(t|d)
conciseness | −ln P(d|c) | −ln P(l|d)
P(informative) | ln(P(t|c)) / ln(P(tmin|c)) | ln(P(t|d)) / ln(P(tmin|d))
P(concise) | ln(P(d|c)) / ln(P(dmin|c)) | ln(P(l|d)) / ln(P(lmin|d))
Table 1: Retrieval parameters
actual term frequency value, it is common to use the
maximal occurrence (number of locations; let lf be the location
frequency).
tf(t, d) := lf(t, d) := Pfreq(t occurs|d) / Pfreq(tmax occurs|d) = nL(t, d) / nL(tmax, d)
A further duality is between informativeness and
conciseness (shortness of documents or locations): informativeness
is based on occurrence (noise), conciseness is based on
containment.
We have highlighted in this section the duality between
the collection space and the document space. We
concentrate in this paper on the probability of a term to be noisy
and informative. Those probabilities are defined in the
collection space. However, the results regarding the term noise
and informativeness apply to their dual counterparts: term
occurrence and informativeness in a document. Also, the
results can be applied to containment of documents and
locations.
6. THE PROBABILITY OF BEING
INFORMATIVE
We showed in the previous sections that the disjointness
assumption leads to frequency-based probabilities and that
the independence assumption leads to Poisson probabilities.
In this section, we formulate a frequency-based definition
and a Poisson-based definition of the probability of being
informative and then we compare the two definitions.
Definition 5. The frequency-based probability of
being informative:
Pfreq(t is informative|c) := (−ln(n(t)/N)) / (−ln(1/N)) = −log_N(n(t)/N) = 1 − log_N n(t) = 1 − ln n(t) / ln N
We define the Poisson-based probability of being
informative analogously to the frequency-based probability of being
informative (see definition 5).
Definition 6. The Poisson-based probability of
being informative:
Ppoi(t is informative|c) := (−ln(e^{−λ} · Σ_{k=1}^{n(t)} λ^k/k!)) / (−ln(e^{−λ} · λ)) = (λ − ln Σ_{k=1}^{n(t)} λ^k/k!) / (λ − ln λ)
For the sum expression, the following limit holds:
lim_{n(t)→∞} Σ_{k=1}^{n(t)} λ^k/k! = e^λ − 1
For λ >> 1, we can alter the noise and informativeness Poisson by starting the sum from 0, since e^λ >> 1. Then, the minimal Poisson informativeness is poisson(0, λ) = e^{−λ}. We obtain a simplified Poisson probability of being informative:
Ppoi(t is informative|c) ≈ (λ − ln Σ_{k=0}^{n(t)} λ^k/k!) / λ = 1 − (ln Σ_{k=0}^{n(t)} λ^k/k!) / λ
The computation of the Poisson sum requires an
optimisation for large n(t). The implementation for this paper
exploits the nature of the Poisson density: The Poisson
density yields only values significantly greater than zero in an
interval around λ.
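A possible implementation of this windowed computation is sketched below (illustrative code; the window width of a few √λ standard deviations, the fallback around the peak, and the log-space evaluation are our own choices, not taken from the paper):

```python
import math

def log_poisson_partial_sum(n_t, lam, width=12.0):
    # log of sum_{k=0}^{n(t)} lam^k / k!, keeping only the non-negligible terms.
    # The summands peak at k ~ min(n(t), lam); terms more than a few
    # standard deviations (sqrt(lam)) away from that peak are negligible.
    peak = min(n_t, int(lam))
    lo = max(0, peak - int(width * math.sqrt(lam)) - 1)
    hi = max(peak, min(n_t, int(lam + width * math.sqrt(lam))))
    logs = [k * math.log(lam) - math.lgamma(k + 1) for k in range(lo, hi + 1)]
    m = max(logs)
    return m + math.log(sum(math.exp(x - m) for x in logs))

def p_poi_informative(n_t, lam):
    # Simplified Poisson-based probability of being informative.
    return 1.0 - log_poisson_partial_sum(n_t, lam) / lam

def p_freq_informative(n_t, N):
    # Frequency-based probability of being informative (Definition 5).
    return 1.0 - math.log(n_t) / math.log(N)

N, lam = 10_000, 1_000.0
for n_t in (1, 10, 500, 900, 1_000, 1_100, 2_000, 10_000):
    print(n_t, round(p_freq_informative(n_t, N), 3),
          round(p_poi_informative(n_t, lam), 3))
```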
Consider the illustration of the noise and
informativeness definitions in figure 1. The probability functions
displayed are summarised in figure 2 where the simplified
Poisson is used in the noise and informativeness graphs. The
frequency-based noise corresponds to the linear solid curve
in the noise figure. With an independence assumption, we
obtain the curve in the lower triangle of the noise figure. By
changing the parameter p := λ/N of the independence
probability, we can lift or lower the independence curve. The
noise figure shows the lifting for the value λ := ln N ≈
9.2. The setting λ = ln N is special in the sense that the
frequency-based and the Poisson-based informativeness have
the same denominator, namely ln N, and the Poisson sum
converges to λ. Whether we can draw more conclusions from
this setting is an open question.
We can conclude that the lifting is desirable if we know for a collection that terms that occur in relatively few documents
[Figure 1 plots, against n(t) (the number of documents with term t, up to 10,000), the probability of being noisy and the probability of being informative for the curves: frequency, independence with p = 1/N, independence with p = ln(N)/N, poisson with λ = 1000, poisson with λ = 2000, and the two-dimensional poisson with λ1 = 1000 and λ2 = 2000.]
Figure 1: Noise and Informativeness
Probability function | Noise | Informativeness
Frequency Pfreq, definition | n(t)/N | ln(n(t)/N) / ln(1/N)
Frequency Pfreq, interval | 1/N ≤ Pfreq ≤ 1.0 | 0.0 ≤ Pfreq ≤ 1.0
Independence Pin, definition | 1 − (1 − p)^{n(t)} | ln(1 − (1 − p)^{n(t)}) / ln(p)
Independence Pin, interval | p ≤ Pin < 1 − e^{−λ} | ln(p) ≤ Pin ≤ 1.0
Poisson Ppoi, definition | e^{−λ} · Σ_{k=1}^{n(t)} λ^k/k! | (λ − ln Σ_{k=1}^{n(t)} λ^k/k!) / (λ − ln λ)
Poisson Ppoi, interval | e^{−λ} · λ ≤ Ppoi < 1 − e^{−λ} | (λ − ln(e^λ − 1)) / (λ − ln λ) ≤ Ppoi ≤ 1.0
Poisson Ppoi simplified, definition | e^{−λ} · Σ_{k=0}^{n(t)} λ^k/k! | (λ − ln Σ_{k=0}^{n(t)} λ^k/k!) / λ
Poisson Ppoi simplified, interval | e^{−λ} ≤ Ppoi < 1.0 | 0.0 < Ppoi ≤ 1.0
Figure 2: Probability functions
are no guarantee for finding relevant documents,
i. e. we assume that rare terms are still relatively noisy. On
the opposite, we could lower the curve when assuming that
frequent terms are not too noisy, i. e. they are considered as
being still significantly discriminative.
The Poisson probabilities approximate the independence
probabilities for large n(t); the approximation is better for
larger λ. For n(t) < λ, the noise is zero whereas for n(t) > λ
the noise is one. This radical behaviour can be smoothened
by using a multi-dimensional Poisson distribution. Figure 1
shows a Poisson noise based on a two-dimensional Poisson:
poisson(k, λ1, λ2) := π · e^{−λ1} · λ1^k/k! + (1 − π) · e^{−λ2} · λ2^k/k!
The two dimensional Poisson shows a plateau between λ1 =
1000 and λ2 = 2000, we used here π = 0.5. The idea
behind this setting is that terms that occur in less than 1000
documents are considered to be not noisy (i.e. they are
informative), that terms between 1000 and 2000 are half noisy,
and that terms with more than 2000 are definitely noisy.
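A direct transcription of this two-dimensional Poisson noise is sketched below (illustrative code; computing each Poisson term in log space is our own way of avoiding overflow at λ values of this magnitude):

```python
import math

def poisson_pmf(k, lam):
    # Single Poisson term, evaluated in log space so lam = 1000 or 2000 is safe.
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def two_dim_poisson_noise(n_t, lam1=1000.0, lam2=2000.0, pi=0.5):
    # Sum of the two-dimensional Poisson density over k = 1 .. n(t).
    return sum(pi * poisson_pmf(k, lam1) + (1 - pi) * poisson_pmf(k, lam2)
               for k in range(1, n_t + 1))

for n_t in (500, 1_200, 1_500, 1_800, 2_500):
    print(n_t, round(two_dim_poisson_noise(n_t), 3))
# Terms occurring in fewer than ~1000 documents get noise near 0, terms between
# the two lambdas sit on the plateau near pi = 0.5, and terms occurring in more
# than ~2000 documents get noise near 1.
```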
For the informativeness, we observe that the radical
behaviour of Poisson is preserved. The plateau here is
approximately at 1/6, and it is important to realise that this
plateau is not obtained with the multi-dimensional Poisson
noise using π = 0.5. The logarithm of the noise is normalised by the logarithm of a very small number, namely 0.5 · e^{−1000} + 0.5 · e^{−2000}. That is why the informativeness will be only close to one for very little noise, whereas for a bit of noise, informativeness will drop to zero. This effect can be controlled by using small values for π such that the noise in the interval [λ1; λ2] is still very little. The setting π = e^{−2000/6} leads to noise values of approximately e^{−2000/6} in the interval [λ1; λ2]; the logarithms then lead to 1/6 for the informativeness.
The independence-based and frequency-based informativeness functions do not differ as much as the noise functions do. However, for the independence-based probability of being informative, we can control the average informativeness by the definition p := λ/N, whereas the control on the frequency-based definition is limited, as we address next.
For the frequency-based idf, the gradient is monotonically decreasing and we obtain for different collections the same distances of idf values, i.e. the parameter N does not affect
the distance. For an illustration, consider the distance
between the value idf(tn+1) of a term tn+1 that occurs in n+1
documents, and the value idf(tn) of a term tn that occurs in
n documents.
idf(tn+1) − idf(tn) = ln(n / (n + 1))
The first three values of the distance function are:
idf(t2) − idf(t1) = ln(1/2) ≈ −0.69
idf(t3) − idf(t2) = ln(2/3) ≈ −0.41
idf(t4) − idf(t3) = ln(3/4) ≈ −0.29
For the Poisson-based informativeness, the gradient decreases
first slowly for small n(t), then rapidly near n(t) ≈ λ and
then it grows again slowly for large n(t).
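The independence of the collection size N claimed above for the frequency-based distances is easy to verify numerically (a small illustrative sketch; the values of N are arbitrary):

```python
import math

def idf(n, N):
    return math.log(N / n)

for N in (10_000, 1_000_000):
    # idf(t_{n+1}) - idf(t_n) = ln(n / (n + 1)), independent of N.
    print(N, [round(idf(n + 1, N) - idf(n, N), 2) for n in (1, 2, 3)])
# Both collection sizes give the same distances: -0.69, -0.41, -0.29.
```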
In conclusion, we have seen that the Poisson-based
definition provides more control and parameter possibilities than
the frequency-based definition does. Whereas more control and more parameters promise to be positive for the personalisation of retrieval systems, they bear at the same time the danger of just too many parameters. The framework presented
in this paper raises the awareness about the probabilistic
and information-theoretic meanings of the parameters. The
parallel definitions of the frequency-based probability and
the Poisson-based probability of being informative made
the underlying assumptions explicit. The frequency-based
probability can be explained by binary occurrence, constant
containment and disjointness of documents. Independence
of documents leads to Poisson, where we have to be aware
that Poisson approximates the probability of a disjunction
for a large number of events, but not for a small number.
This theoretical result explains why experimental
investigations on Poisson (see [7]) show that a Poisson estimation
does work better for frequent (bad, noisy) terms than for
rare (good, informative) terms.
In addition to the collection-wide parameter setting, the
framework presented here allows for document-dependent
settings, as explained for the independence probability. This
is in particular interesting for heterogeneous and structured
collections, since documents are different in nature (size,
quality, root document, sub document), and therefore,
binary occurrence and constant containment are less
appropriate than in relatively homogeneous collections.
7. SUMMARY
The definition of the probability of being informative
transforms the informative interpretation of the idf into a
probabilistic interpretation, and we can use the idf -based
probability in probabilistic retrieval approaches. We showed that
the classical definition of the noise (document frequency) in
the inverse document frequency can be explained by three
assumptions: the term within-document occurrence
probability is binary, the document containment probability is
constant, and the document containment events are disjoint.
By explicitly and mathematically formulating the
assumptions, we showed that the classical definition of idf does not
take into account parameters such as the different nature
(size, quality, structure, etc.) of documents in a collection,
or the different nature of terms (coverage, importance,
position, etc.) in a document. We discussed that the absence
of those parameters is compensated by a leverage effect of
the within-document term occurrence probability and the
document containment probability.
By applying an independence rather than a disjointness
assumption for the document containment, we could
establish a link between the noise probability (term occurrence
in a collection), information theory and Poisson. From the
frequency-based and the Poisson-based probabilities of
being noisy, we derived the frequency-based and Poisson-based
probabilities of being informative. The frequency-based
probability is relatively smooth whereas the Poisson probability
is radical in distinguishing between noisy or not noisy, and
informative or not informative, respectively. We showed how
to smoothen the radical behaviour of Poisson with a
multidimensional Poisson.
The explicit and mathematical formulation of idf - and
Poisson-assumptions is the main result of this paper. Also,
the paper emphasises the duality of idf and tf , collection
space and document space, respectively. Thus, the result
applies to term occurrence and document containment in a
collection, and it applies to term occurrence and position
containment in a document. This theoretical framework is
useful for understanding and deciding the parameter
estimation and combination in probabilistic retrieval models. The
links between independence-based noise as document frequency,
probabilistic interpretation of idf , information theory and
Poisson described in this paper may lead to variable
probabilistic idf and tf definitions and combinations as required
in advanced and personalised information retrieval systems.
Acknowledgment: I would like to thank Mounia Lalmas,
Gabriella Kazai and Theodora Tsikrika for their comments
on the, as they said, heavy pieces. My thanks also go to the
meta-reviewer who advised me to improve the presentation
to make it less formidable and more accessible for those
without a theoretic bent. This work was funded by a
research fellowship from Queen Mary University of London.
8. REFERENCES
[1] A. Aizawa. An information-theoretic perspective of
tf-idf measures. Information Processing and
Management, 39:45-65, January 2003.
[2] G. Amati and C. J. Rijsbergen. Term frequency
normalization via Pareto distributions. In 24th
BCS-IRSG European Colloquium on IR Research,
Glasgow, Scotland, 2002.
[3] R. K. Belew. Finding out about. Cambridge University
Press, 2000.
[4] A. Bookstein and D. Swanson. Probabilistic models
for automatic indexing. Journal of the American
Society for Information Science, 25:312-318, 1974.
[5] I. N. Bronstein. Taschenbuch der Mathematik. Harri
Deutsch, Thun, Frankfurt am Main, 1987.
[6] K. Church and W. Gale. Poisson mixtures. Natural
Language Engineering, 1(2):163-190, 1995.
[7] K. W. Church and W. A. Gale. Inverse document
frequency: A measure of deviations from poisson. In
Third Workshop on Very Large Corpora, ACL
Anthology, 1995.
[8] T. Lafouge and C. Michel. Links between information
construction and information gain: Entropy and
bibliometric distribution. Journal of Information
Science, 27(1):39-49, 2001.
[9] E. Margulis. N-poisson document modelling. In
Proceedings of the 15th Annual International ACM
SIGIR Conference on Research and Development in
Information Retrieval, pages 177-189, 1992.
[10] S. E. Robertson and S. Walker. Some simple effective
approximations to the 2-poisson model for
probabilistic weighted retrieval. In Proceedings of the
17th Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval,
pages 232-241, London, et al., 1994. Springer-Verlag.
[11] S. Wong and Y. Yao. An information-theoretic measure
of term specificity. Journal of the American Society
for Information Science, 43(1):54-61, 1992.
[12] S. Wong and Y. Yao. On modeling information
retrieval with probabilistic inference. ACM
Transactions on Information Systems, 13(1):38-68,
1995.
| idf;informativeness;document disjointness;poisson distribution;probability theory;probabilistic information retrieval;information theory;independence assumption;inverse document frequency;information retrieval;poisson-based probability;collection space;frequency-based probability;noise probability;probability function;disjointness of document |
train_H-61 | Impedance Coupling in Content-targeted Advertising | The current boom of the Web is associated with the revenues originated from on-line advertising. While search-based advertising is dominant, the association of ads with a Web page (during user navigation) is becoming increasingly important. In this work, we study the problem of associating ads with a Web page, referred to as content-targeted advertising, from a computer science perspective. We assume that we have access to the text of the Web page, the keywords declared by an advertiser, and a text associated with the advertiser"s business. Using no other information and operating in fully automatic fashion, we propose ten strategies for solving the problem and evaluate their effectiveness. Our methods indicate that a matching strategy that takes into account the semantics of the problem (referred to as AAK for ads and keywords) can yield gains in average precision figures of 60% compared to a trivial vector-based strategy. Further, a more sophisticated impedance coupling strategy, which expands the text of the Web page to reduce vocabulary impedance with regard to an advertisement, can yield extra gains in average precision of 50%. These are first results. They suggest that great accuracy in content-targeted advertising can be attained with appropriate algorithms. | 1. INTRODUCTION
The emergence of the Internet has opened up new
marketing opportunities. In fact, a company has now the possibility
of showing its advertisements (ads) to millions of people at a
low cost. During the 90"s, many companies invested heavily
on advertising in the Internet with apparently no concerns
about their investment return [16]. This situation radically
changed in the following decade when the failure of many
Web companies led to a dropping in supply of cheap venture
capital and a considerable reduction in on-line advertising
investments [15,16].
It was clear then that more effective strategies for on-line
advertising were required. For that, it was necessary to take
into account short-term and long-term interests of the users
related to their information needs [9,14]. As a consequence,
many companies intensified the adoption of intrusive
techniques for gathering information of users mostly without
their consent [8]. This raised privacy issues which
stimulated the research for less invasive measures [16].
More recently, Internet information gatekeepers as, for
example, search engines, recommender systems, and
comparison shopping services, have employed what is called paid
placement strategies [3]. In such methods, an advertiser
company is given prominent positioning in advertisement
lists in return for a placement fee. Amongst these methods,
the most popular one is a non-intrusive technique called
keyword targeted marketing [16]. In this technique, keywords
extracted from the user"s search query are matched against
keywords associated with ads provided by advertisers. A
ranking of the ads, which also takes into consideration the
amount that each advertiser is willing to pay, is computed.
The top ranked ads are displayed in the search result page
together with the answers for the user query.
The success of keyword targeted marketing has motivated
information gatekeepers to offer their advertisement services
in different contexts. For example, as shown in Figure 1,
relevant ads could be shown to users directly in the pages of
information portals. The motivation is to take advantage of
the users immediate information interests at browsing time.
The problem of matching ads to a Web page that is browsed,
which we also refer to as content-targeted advertising [1],
is different from that of keyword marketing. In this case,
instead of dealing with users" keywords, we have to use the
contents of a Web page to decide which ads to display.
Figure 1: Example of content-based advertising in
the page of a newspaper. The middle slice of the
page shows the beginning of an article about the
launch of a DVD movie. At the bottom slice, we can
see advertisements picked for this page by Google"s
content-based advertising system, AdSense.
It is important to notice that paid placement
advertising strategies imply some risks to information gatekeepers.
For instance, there is the possibility of a negative impact
on their credibility which, at long term, can demise their
market share [3]. This makes investments in the quality of
ad recommendation systems even more important to
minimize the possibility of exhibiting ads unrelated to the user"s
interests. By investing in their ad systems, information
gatekeepers are investing in the maintenance of their credibility
and in the reinforcement of a positive user attitude towards
the advertisers and their ads [14]. Further, that can
translate into higher clickthrough rates that lead to an increase in
revenues for information gatekeepers and advertisers, with
gains to all parts [3].
In this work, we focus on the problem of content-targeted
advertising. We propose new strategies for associating ads
with a Web page. Five of these strategies are referred to as
matching strategies. They are based on the idea of matching
the text of the Web page directly to the text of the ads and
its associated keywords. Five other strategies, which we here
introduce, are referred to as impedance coupling strategies.
They are based on the idea of expanding the Web page with
new terms to facilitate the task of matching ads and Web
pages. This is motivated by the observation that there is
frequently a mismatch between the vocabulary of a Web page
and the vocabulary of an advertisement. We say that there
is a vocabulary impedance problem and that our technique
provides a positive effect of impedance coupling by reducing
the vocabulary impedance. Further, all our strategies rely
on information that is already available to information
gatekeepers that operate keyword targeted advertising systems.
Thus, no other data from the advertiser is required.
Using a sample of a real case database with over 93,000
ads and 100 Web pages selected for testing, we evaluate our
ad recommendation strategies. First, we evaluate the five
matching strategies. They match ads to a Web page
using a standard vector model and provide what we may call
trivial solutions. Our results indicate that a strategy that
matches the ad plus its keywords to a Web page, requiring
the keywords to appear in the Web page, provides
improvements in average precision figures of roughly 60% relative
to a strategy that simply matches the ads to the Web page.
Such strategy, which we call AAK (for ads and keywords),
is then taken as our baseline.
Following we evaluate the five impedance coupling
strategies. They are based on the idea of expanding the ad and
the Web page with new terms to reduce the vocabulary
impedance between their texts. Our results indicate that it
is possible to generate extra improvements in average
precision figures of roughly 50% relative to the AAK strategy.
The paper is organized as follows. In section 2, we
introduce five matching strategies to solve content-targeted
advertising. In section 3, we present our impedance
coupling strategies. In section 4, we describe our experimental
methodology and datasets and discuss our results. In
section 5 we discuss related work. In section 6 we present our
conclusions.
2. MATCHING STRATEGIES
Keyword advertising relies on matching search queries to ads and their associated keywords. Context-based advertising, which we address here, relies on matching ads and their associated keywords to the text of a Web page.
Given a certain Web page p, which we call triggering page,
our task is to select advertisements related to the contents
of p. Without loss of generality, we consider that an
advertisement ai is composed of a title, a textual description,
and a hyperlink. To illustrate, for the first ad by Google shown in Figure 1, the title is "Star Wars Trilogy Full", the description is "Get this popular DVD free. Free w/ free shopping. Sign up now", and the hyperlink points to the site www.freegiftworld.com. Advertisements can be grouped
by advertisers in groups called campaigns, such that a
campaign can have one or more advertisements.
Given our triggering page p and a set A of ads, a simple
way of ranking ai ∈ A with regard to p is by matching the
contents of p to the contents of ai. For this, we use the vector
space model [2], as discussed next.
In the vector space model, queries and documents are
represented as weighted vectors in an n-dimensional space. Let
wiq be the weight associated with term ti in the query q
and wij be the weight associated with term ti in the
document dj. Then, q = (w1q, w2q, ..., wiq, ..., wnq) and dj =
(w1j, w2j, ..., wij, ..., wnj) are the weighted vectors used to
represent the query q and the document dj. These weights
can be computed using classic tf-idf schemes. In such schemes,
weights are taken as the product between factors that
quantify the importance of a term in a document (given by the
term frequency, or tf, factor) and its rarity in the whole
collection (given by the inverse document frequency, or idf, factor),
see [2] for details. The ranking of the query q with regard
to the document dj is computed by the cosine similarity
formula, that is, the cosine of the angle between the two
corresponding vectors:
sim(q, dj) = (q · dj) / (|q| × |dj|) = Σ_{i=1}^{n} wiq · wij / (sqrt(Σ_{i=1}^{n} wiq^2) · sqrt(Σ_{i=1}^{n} wij^2))   (1)
By considering p as the query and ai as the document, we
can rank the ads with regard to the Web page p. This is our
first matching strategy. It is represented by the function AD
given by:
AD(p, ai) = sim(p, ai)
where AD stands for direct match of the ad, composed by
title and description and sim(p, ai) is computed according
to Eq. (1).
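A minimal sketch of this direct matching in Python (illustrative only; tokenisation is assumed to have been done elsewhere, and the tf-idf weighting shown is just one classic variant, not necessarily the one used in the experiments):

```python
import math
from collections import Counter

def tf_idf_vector(terms, idf):
    # terms: list of tokens of a page or an ad; idf: dict term -> idf weight.
    tf = Counter(terms)
    return {t: tf[t] * idf.get(t, 0.0) for t in tf}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def AD(page_terms, ad_terms, idf):
    # Direct match of the ad (title + description) against the triggering page.
    return cosine(tf_idf_vector(page_terms, idf), tf_idf_vector(ad_terms, idf))
```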
In our second method, we use another source of evidence
provided by the advertisers: the keywords. With each
advertisement ai an advertiser associates a keyword ki, which
may be composed of one or more terms. We denote the
association between an advertisement ai and a keyword ki
as the pair (ai, ki) ∈ K, where K is the set of associations
made by the advertisers. In the case of keyword targeted
advertising, such keywords are used to match the ads to the
user queries. In here, we use them to match ads to the Web
page p. This provides our second method for ad matching
given by:
KW(p, ai) = sim(p, ki)
where (ai, ki) ∈ K and KW stands for match the ad
keywords.
We notice that most of the keywords selected by
advertisers are also present in the ads associated with those
keywords. For instance, in our advertisement test collection,
this is true for 90% of the ads. Thus, instead of using the
keywords as matching devices, we can use them to emphasize
the main concepts in an ad, in an attempt to improve our
AD strategy. This leads to our third method of ad matching
given by:
AD KW(p, ai) = sim(p, ai ∪ ki)
where (ai, ki) ∈ K and AD KW stands for match the ad and
its keywords.
Finally, it is important to notice that the keyword ki associated with ai might not appear at all in the triggering page p, even when ai is highly ranked. However, if we assume that ki summarizes the main topic of ai according to an advertiser viewpoint, it can be interesting to assure its presence in p. This reasoning suggests that requiring the occurrence of the keyword ki in the triggering page p as a condition to associate ai with p might lead to improved results. This leads to two extra matching strategies as follows:
ANDKW(p, ai) = sim(p, ai) if ki occurs in p, and 0 otherwise
AD ANDKW(p, ai) = AAK(p, ai) = sim(p, ai ∪ ki) if ki occurs in p, and 0 otherwise
where (ai, ki) ∈ K, ANDKW stands for match the ad keywords
and force their appearance, and AD ANDKW (or AAK for ads
and keywords) stands for match the ad, its keywords, and
force their appearance.
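Continuing the same sketch (and reusing tf_idf_vector, cosine and AD from the fragment above), the keyword-based strategies can be written as follows; here a keyword is treated as a list of terms and "ki occurs in p" is read as set containment, which is one possible interpretation of the condition:

```python
def KW(page_terms, kw_terms, idf):
    return cosine(tf_idf_vector(page_terms, idf), tf_idf_vector(kw_terms, idf))

def AD_KW(page_terms, ad_terms, kw_terms, idf):
    return cosine(tf_idf_vector(page_terms, idf),
                  tf_idf_vector(ad_terms + kw_terms, idf))

def ANDKW(page_terms, ad_terms, kw_terms, idf):
    # Force the keyword terms to appear in the triggering page.
    if not set(kw_terms) <= set(page_terms):
        return 0.0
    return AD(page_terms, ad_terms, idf)

def AAK(page_terms, ad_terms, kw_terms, idf):
    # AD_ANDKW: match the ad plus its keywords, forcing keyword appearance.
    if not set(kw_terms) <= set(page_terms):
        return 0.0
    return AD_KW(page_terms, ad_terms, kw_terms, idf)
```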
As we will see in our results, the best among these simple
methods is AAK. Thus, it will be used as baseline for our
impedance coupling strategies which we now discuss.
3. IMPEDANCE COUPLING STRATEGIES
Two key issues become clear as one plays with the content-targeted advertising problem. First, the triggering page
normally belongs to a broader contextual scope than that of the
advertisements. Second, the association between a good
advertisement and the triggering page might depend on a topic
that is not mentioned explicitly in the triggering page.
The first issue is due to the fact that Web pages can be
about any subject and that advertisements are concise in
nature. That is, ads tend to be more topic restricted than
Web pages. The second issue is related to the fact that, as
we later discuss, most advertisers place a small number of
advertisements. As a result, we have few terms describing
their interest areas. Consequently, these terms tend to be
of a more general nature. For instance, a car shop probably
would prefer to use car instead of super sport to describe
its core business topic. As a consequence, many specific
terms that appear in the triggering page find no match in
the advertisements. To make matters worse, a page might
refer to an entity or subject of the world through a label
that is distinct from the label selected by an advertiser to
refer to the same entity.
A consequence of these two issues is that vocabularies of
pages and ads have low intersection, even when an ad is
related to a page. We cite this problem from now on as
the vocabulary impedance problem. In our experiments, we
realized that this problem limits the final quality of direct
matching strategies. Therefore, we studied alternatives to
reduce the referred vocabulary impedance.
For this, we propose to expand the triggering pages with
new terms. Figure 2 illustrates our intuition. We already
know that the addition of keywords (selected by the
advertiser) to the ads leads to improved results. We say that a
keyword reduces the vocabulary impedance by providing an
alternative matching path. Our idea is to add new terms
(words) to the Web page p to also reduce the vocabulary
impedance by providing a second alternative matching path.
We refer to our expansion technique as impedance coupling.
For this, we proceed as follows.
expansion
terms keyword
vocabulary impedance
triggering
page p ad
Figure 2: Addition of new terms to a Web page to
reduce the vocabulary impedance.
An advertiser trying to describe a certain topic in a concise
way probably will choose general terms to characterize that
topic. To facilitate the matching between this ad and our
triggering page p, we need to associate new general terms
with p. For this, we assume that Web documents similar
to the triggering page p share common topics. Therefore,
by inspecting the vocabulary of these similar documents we
might find good terms for better characterizing the main
topics in the page p. We now describe this idea using a
Bayesian network model [10,11,13] depicted in Figure 3.
[Figure 3 depicts root nodes D0, D1, ..., Dj, ..., Dk (the documents), a node R (the new representation of the triggering page) connected to them, and leaf nodes T1, T2, T3, ..., Ti, ..., Tm (the vocabulary terms).]
Figure 3: Bayesian network model for our
impedance coupling technique.
In our model, which is based on the belief network in [11],
the nodes represent pieces of information in the domain.
With each node is associated a binary random variable,
which takes the value 1 to mean that the corresponding
entity (a page or terms) is observed and, thus, relevant in our
computations. In this case, we say that the information was
observed. Node R represents the page r, a new
representation for the triggering page p. Let N be the set of the k
most similar documents to the triggering page, including the
triggering page p itself, in a large enough Web collection C.
Root nodes D0 through Dk represent the documents in N,
that is, the triggering page D0 and its k nearest neighbors,
D1 through Dk, among all pages in C. There is an edge
from node Dj to node R if document dj is in N. Nodes
T1 through Tm represent the terms in the vocabulary of C.
There is an edge from node Dj to a node Ti if term ti occurs
in document dj. In our model, the observation of the pages
in N leads to the observation of a new representation of the
triggering page p and to a set of terms describing the main
topics associated with p and its neighbors.
Given these definitions, we can now use the network to
determine the probability that a term ti is a good term for
representing a topic of the triggering page p. In other words,
we are interested in the probability of observing the final
evidence regarding a term ti, given that the new
representation of the page p has been observed, P(Ti = 1|R = 1).
This translates into the following equation (see footnote 1):
P(Ti|R) = (1/P(R)) · Σ_d P(Ti|d) · P(R|d) · P(d)   (2)
where d represents the set of states of the document nodes.
Since we are interested just in the states in which only a
single document dj is observed and P(d) can be regarded as
a constant, we can rewrite Eq. (2) as:
P(Ti|R) = (ν/P(R)) · Σ_{j=0}^{k} P(Ti|dj) · P(R|dj)   (3)
where dj represents the state of the document nodes in
which only document dj is observed and ν is a constant
[Footnote 1: To simplify our notation, we represent the probability P(X = 1) as P(X); the complement P(X = 0) is written out explicitly below.]
associated with P(dj). Eq. (3) is the general equation to
compute the probability that a term ti is related to the
triggering page. We now define the probabilities P(Ti|dj) and
P(R|dj) as follows:
P(Ti|dj) = η · wij   (4)
P(R|dj) = (1 − α) if j = 0, and α · sim(r, dj) if 1 ≤ j ≤ k   (5)
where η is a normalizing constant, wij is the weight
associated with term ti in the document dj, and sim(p, dj) is
given by Eq. (1), i.e., is the cosine similarity between p and
dj. The weight wij is computed using a classic tf-idf scheme
and is zero if term ti does not occur in document dj. Notice that P(Ti = 0|dj) = 1 − P(Ti|dj) and P(R = 0|dj) = 1 − P(R|dj).
By defining the constant α, it is possible to determine how
important should be the influence of the triggering page p
to its new representation r. By substituting Eq. (4) and
Eq. (5) into Eq. (3), we obtain:
P(Ti|R) = ρ · ((1 − α) · wi0 + α · Σ_{j=1}^{k} wij · sim(r, dj))   (6)
where ρ = η · ν is a normalizing constant.
We use Eq. (6) to determine the set of terms that will compose r, as illustrated in Figure 2. Let ttop be the top ranked term according to Eq. (6). The set r is composed of the terms ti such that P(Ti|R) / P(Ttop|R) ≥ β, where β is a given threshold. In our experiments, we have used β = 0.05. Notice that the set r might contain terms that already occur in p. That is, while we will refer to the set r as expansion terms, it should be clear that p ∩ r is not necessarily empty.
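A sketch of this expansion-term selection is given below (it reuses the cosine helper from the earlier fragment; the neighbour pages and their tf-idf vectors are assumed to have been retrieved from the auxiliary Web collection beforehand, the default α = 0.5 is arbitrary, and the normalising constant ρ is omitted since it cancels in the ratio against the top term):

```python
def expansion_terms(page_vec, neighbour_vecs, alpha=0.5, beta=0.05):
    # page_vec: dict term -> tf-idf weight of the triggering page (the w_i0 of Eq. (6)).
    # neighbour_vecs: tf-idf vectors of the k most similar documents D1 .. Dk.
    sims = [cosine(page_vec, d) for d in neighbour_vecs]  # stand-in for sim(r, d_j)

    terms = set(page_vec)
    for d in neighbour_vecs:
        terms.update(d)

    scores = {}
    for t in terms:
        s = (1 - alpha) * page_vec.get(t, 0.0)
        s += alpha * sum(d.get(t, 0.0) * sims[j]
                         for j, d in enumerate(neighbour_vecs))
        scores[t] = s

    top = max(scores.values(), default=0.0)
    return {t for t, s in scores.items() if top > 0 and s / top >= beta}
```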
By using α = 0, we simply consider the terms originally
in page p. By increasing α, we relax the context of the page
p, adding terms from neighbor pages, turning page p into its
new representation r. This is important because, sometimes,
a topic apparently not important in the triggering page offers
a good opportunity for advertising. For example, consider
a triggering page that describes a congress in London about
digital photography. Although London is probably not an
important topic in this page, advertisements about hotels
in London would be appropriate. Thus, adding hotels to
page p is important. This suggests using α > 0, that is,
preserving the contents of p and using the terms in r to
expand p.
In this paper, we examine both approaches. Thus, in our
sixth method we match r, the set of new expansion terms,
directly to the ads, as follows:
AAK T(p, ai) = AAK(r, ai)
where AAK T stands for match the ad and keywords to the
set r of expansion terms.
In our seventh method, we match an expanded page p to
the ads as follows:
AAK EXP(p, ai) = AAK(p ∪ r, ai)
where AAK EXP stands for match the ad and keywords to
the expanded triggering page.
To improve our ad placement methods, another external source that we can use is the content of the page h pointed to by the advertisement's hyperlink, that is, its landing page.
After all, this page comprises the real target of the ad and
perhaps could present a more detailed description of the
product or service being advertised. Given that the
advertisement ai points to the landing page hi, we denote this
association as the pair (ai, hi) ∈ H, where H is the set of
associations between the ads and the pages they point to.
Our eighth method consists of matching the triggering page
p to the landing pages pointed to by the advertisements, as
follows:
H(p, ai) = sim(p, hi)
where (ai, hi) ∈ H and H stands for match the hyperlink
pointed to by the ad.
We can also combine this information with the more
promising methods previously described, AAK and AAK EXP as
follows. Given that (ai, hi) ∈ H and (ai, ki) ∈ K, we have our
last two methods:
AAK H(p, ai) = sim(p, ai ∪ hi ∪ ki) if ki occurs in p, and 0 otherwise
AAK EXP H(p, ai) = sim(p ∪ r, ai ∪ hi ∪ ki) if ki occurs in (p ∪ r), and 0 otherwise
where AAK H stands for match ads and keywords also considering the page pointed to by the ad and AAK EXP H stands for match ads and keywords with expanded triggering page, also considering the page pointed to by the ad.
Notice that other combinations were not considered in this
study due to space restrictions. These other combinations
led to poor results in our experimentation and for this reason
were discarded.
4. EXPERIMENTS
4.1 Methodology
To evaluate our ad placement strategies, we performed
a series of experiments using a sample of a real case ad
collection with 93,972 advertisements, 1,744 advertisers, and
68,238 keywords (see footnote 2). The advertisements are grouped in 2,029
campaigns with an average of 1.16 campaigns per advertiser.
For the strategies AAK T and AAK EXP, we had to
generate a set of expansion terms. For that, we used a database
of Web pages crawled by the TodoBR search engine [12]
(http://www.todobr.com.br/). This database is composed
of 5,939,061 pages of the Brazilian Web, under the domain
.br. For the strategies H, AAK H, and AAK EXP H, we also
crawled the pages pointed to by the advertisers. No other
filtering method was applied to these pages besides the
removal of HTML tags.
Since we are initially interested in the placement of
advertisements in the pages of information portals, our test
collection was composed of 100 pages extracted from a
Brazilian newspaper. These are our triggering pages. They were
crawled in such a way that only the contents of their
articles were preserved. As we have no preferences for particular
[Footnote 2: Data in Portuguese provided by an on-line advertisement company that operates in Brazil.]
topics, the crawled pages cover topics as diverse as politics,
economy, sports, and culture.
For each of our 100 triggering pages, we selected the top
three ranked ads provided by each of our 10 ad placement
strategies. Thus, for each triggering page we select no more
than 30 ads. These top ads were then inserted in a pool
for that triggering page. Each pool contained an average of
15.81 advertisements. All advertisements in each pool were
submitted to a manual evaluation by a group of 15 users.
The average number of relevant advertisements per page
pool was 5.15. Notice that we adopted the same pooling
method used to evaluate the TREC Web-based collection [6].
To quantify the precision of our results, we used 11-point
average figures [2]. Since we are not able to evaluate the
entire ad collection, recall values are relative to the set of
evaluated advertisements.
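For reference, 11-point interpolated average precision over an evaluated pool can be computed as in the generic sketch below (not the authors' evaluation code; it only assumes a ranked list of relevance judgements and the number of relevant ads in the pool):

```python
def eleven_point_average_precision(ranked_relevance, total_relevant):
    # ranked_relevance: list of booleans, True if the i-th ranked ad is relevant.
    recalls, precisions = [], []
    hits = 0
    for i, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            recalls.append(hits / total_relevant)
            precisions.append(hits / i)
    points = []
    for level in (j / 10 for j in range(11)):
        # Interpolated precision: best precision at any recall >= this level.
        ps = [p for r, p in zip(recalls, precisions) if r >= level]
        points.append(max(ps) if ps else 0.0)
    return sum(points) / 11
```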
4.2 Tuning Idf factors
We start by analyzing the impact of different idf factors
in our advertisement collection. Idf factors are important
because they quantify how discriminative is a term in the
collection. In our ad collection, idf factors can be computed
by taking ads, advertisers or campaigns as documents. To
exemplify, consider the computation of ad idf for a term ti that occurs in 9 ads of a collection of 100 ads. Then, the inverse document frequency of ti is given by:
idfi = log(100/9)
Hence, we can compute ad, advertiser or campaign idf
factors. As we observe in Figure 4, for the AD strategy, the best
ranking is obtained by the use of campaign idf, that is, by
calculating our idf factor so that it discriminates campaigns.
Similar results were obtained for all the other methods.
Figure 4: Precision-recall curves obtained for the
AD strategy using ad, advertiser, and campaign idf
factors.
This reflects the fact that terms might be better discriminators for a business topic than for a specific ad. This effect can be accomplished by calculating the idf factor relative to advertisers or campaigns instead of ads. In fact, campaign idf factors yielded the best results. Thus, they will be used in all the experiments reported from now on.
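Computing idf at the campaign level amounts to treating each campaign as a single pseudo-document, as in the sketch below (the field names campaign and terms are hypothetical; any grouping key, such as the advertiser, would work the same way):

```python
import math
from collections import defaultdict

def campaign_idf(ads):
    # ads: list of dicts with (hypothetical) fields "campaign" and "terms".
    campaign_terms = defaultdict(set)
    for ad in ads:
        campaign_terms[ad["campaign"]].update(ad["terms"])

    n_campaigns = len(campaign_terms)
    df = defaultdict(int)  # in how many campaigns each term occurs
    for terms in campaign_terms.values():
        for t in terms:
            df[t] += 1
    return {t: math.log(n_campaigns / df[t]) for t in df}
```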
4.3 Results
Matching Strategies
Figure 5 displays the results for the matching strategies
presented in Section 2. As shown, directly matching the
contents of the ad to the triggering page (AD strategy) is not so
effective. The reason is that the ad contents are very noisy.
It may contain messages that do not properly describe the
ad topics such as requisitions for user actions (e.g, visit our
site) and general sentences that could be applied to any
product or service (e.g, we delivery for the whole
country). On the other hand, an advertiser provided keyword
summarizes well the topic of the ad. As a consequence, the
KW strategy is superior to the AD and AD KW strategies. This
situation changes when we require the keywords to appear
in the target Web page. By filtering out ads whose keywords
do not occur in the triggering page, much noise is discarded.
This makes ANDKW a better alternative than KW. Further, in
this new situation, the contents of the ad becomes useful
to rank the most relevant ads making AD ANDKW (or AAK for
ads and keywords) the best among all described methods.
For this reason, we adopt AAK as our baseline in the next set
of experiments.
[Figure 5 plots precision-recall curves for the five matching strategies AD, AD_KW, KW, ANDKW, and AAK.]
Figure 5: Comparison among our five matching
strategies. AAK (ads and keywords) is superior.
Table 1 illustrates average precision figures for Figure 5.
We also present actual hits per advertisement slot. We call
hit an assignment of an ad (to the triggering page) that
was considered relevant by the evaluators. We notice that
our AAK strategy provides a gain in average precision of 60%
relative to the trivial AD strategy. This shows that careful
consideration of the evidence related to the problem does
pay off.
Impedance Coupling Strategies
Table 2 shows top ranked terms that occur in a page
covering Argentinean wines produced using grapes derived from
the Bordeaux region of France. The p column includes the
top terms for this page ranked according to our tf-idf
weighting scheme. The r column includes the top ranked
expansion terms generated according to Eq. (6). Notice that the
expansion terms not only emphasize important terms of the
target page (by increasing their weights) such as wines and
Method | Hits #1 | Hits #2 | Hits #3 | Total hits | 11-pt average score | Gain (%)
AD | 41 | 32 | 13 | 86 | 0.104 |
AD KW | 51 | 28 | 17 | 96 | 0.106 | +1.9
KW | 46 | 34 | 28 | 108 | 0.125 | +20.2
ANDKW | 49 | 37 | 35 | 121 | 0.153 | +47.1
AD ANDKW (AAK) | 51 | 48 | 39 | 138 | 0.168 | +61.5
Table 1: Average precision figures, corresponding to
Figure 5, for our five matching strategies. Columns
labelled #1, #2, and #3 indicate total of hits in
first, second, and third advertisement slots,
respectively. The AAK strategy provides improvements of
60% relative to the AD strategy.
Rank | p: term (score) | r: term (score)
1 | argentina (0.090) | wines (0.251)
2 | obtained* (0.047) | wine* (0.140)
3 | class* (0.036) | whites (0.091)
4 | whites (0.035) | red* (0.057)
5 | french* (0.031) | grape (0.051)
6 | origin* (0.029) | bordeaux (0.045)
7 | france* (0.029) | acideness* (0.038)
8 | grape (0.017) | argentina (0.037)
9 | sweet* (0.016) | aroma* (0.037)
10 | country* (0.013) | blanc* (0.036)
... | ... | ...
35 | wines (0.010) | -
Table 2: Top ranked terms for the triggering page
p according to our tf-idf weighting scheme and top
ranked terms for r, the expansion terms for p,
generated according to Eq. (6). Ranking scores were
normalized in order to sum up to 1. Terms marked
with ‘*" are not shared by the sets p and r.
whites, but also reveal new terms related to the main topic
of the page such as aroma and red. Further, they avoid
some uninteresting terms such as obtained and country.
Figure 6 illustrates our results when the set r of
expansion terms is used. They show that matching the ads to
the terms in the set r instead of to the triggering page p
(AAK T strategy) leads to a considerable improvement over
our baseline, AAK. The gain is even larger when we use the
terms in r to expand the triggering page (AAK EXP method).
This confirms our hypothesis that the triggering page could
have some interesting terms that should not be completely
discarded.
Finally, we analyze the impact on the ranking of using the
contents of pages pointed by the ads. Figure 7 displays our
results. It is clear that using only the contents of the pages
pointed by the ads (H strategy) yields very poor results.
However, combining evidence from the pages pointed by the
ads with our baseline yields improved results. Most
important, combining our best strategy so far (AAK EXP) with
pages pointed by ads (AAK EXP H strategy) leads to superior
results. This happens because the two additional sources
of evidence, expansion terms and pages pointed by the ads,
are distinct and complementary, providing extra and
valuable information for matching ads to a Web page.
[Figure 6 plots precision-recall curves for AAK_EXP, AAK_T, and AAK.]
Figure 6: Impact of using a new representation for
the triggering page, one that includes expansion
terms.
[Figure 7 plots precision-recall curves for AAK_EXP_H, AAK_H, AAK, and H.]
Figure 7: Impact of using the contents of the page
pointed by the ad (the hyperlink).
Figure 8 and Table 3 summarize all results described in
this section. In Figure 8 we show precision-recall curves
and in Table 3 we show 11-point average figures. We also
present actual hits per advertisement slot and gains in
average precision relative to our baseline, AAK. We notice that
the highest number of hits in the first slot was generated by
the method AAK EXP. However, the method with best
overall retrieval performance was AAK EXP H, yielding a gain in
average precision figures of roughly 50% over the baseline
(AAK).
4.4 Performance Issues
In a keyword targeted advertising system, ads are assigned
at query time, thus the performance of the system is a very
important issue. In content-targeted advertising systems,
we can associate ads with a page at publishing (or
updating) time. Also, if a new ad comes in we might consider
assigning this ad to already published pages in offline mode.
That is, we might design the system such that its
performance depends fundamentally on the rate that new pages
[Figure 8 plots precision-recall curves for AAK_EXP_H, AAK_EXP, AAK_T, AAK_H, AAK, and H.]
Figure 8: Comparison among our ad placement
strategies.
Method | Hits #1 | Hits #2 | Hits #3 | Total hits | 11-pt average score | Gain (%)
H | 28 | 5 | 6 | 39 | 0.026 | -84.3
AAK | 51 | 48 | 39 | 138 | 0.168 |
AAK H | 52 | 50 | 46 | 148 | 0.191 | +13.5
AAK T | 65 | 49 | 43 | 157 | 0.226 | +34.6
AAK EXP | 70 | 52 | 53 | 175 | 0.242 | +43.8
AAK EXP H | 64 | 61 | 51 | 176 | 0.253 | +50.3
Table 3: Results for our impedance coupling
strategies.
are published and the rate that ads are added or modified.
Further, the data needed by our strategies (page crawling,
page expansion, and ad link crawling) can be gathered and
processed offline, not affecting the user experience. Thus,
from this point of view, the performance is not critical and
will not be addressed in this work.
5. RELATED WORK
Several works have stressed the importance of relevance
in advertising. For example, in [14] it was shown that
advertisements that are presented to users when they are not
interested on them are viewed just as annoyance. Thus,
in order to be effective, the authors conclude that
advertisements should be relevant to consumer concerns at the
time of exposure. The results in [9] enforce this conclusion
by pointing out that the more targeted the advertising, the
more effective it is.
Therefore it is not surprising that other works have
addressed the relevance issue. For instance, [8] proposes a system called ADWIZ that is able to adapt online
advertisement to a user"s short-term interests in a non-intrusive
way. Contrary to our work, ADWIZ does not directly use
the content of the page viewed by the user. It relies on search
keywords supplied by the user to search engines and on the
URL of the page requested by the user. On the other hand,
in [7] the authors presented an intrusive approach in which
an agent sits between advertisers and the user"s browser
allowing a banner to be placed into the currently viewed page.
In spite of having the opportunity to use the page"s content,
the agent infers relevance based on category information and the user's private information collected over time.
In [5] the authors provide a comparison between the
ranking strategies used by Google and Overture for their keyword
advertising systems. Both systems select advertisements by
matching them to the keywords provided by the user in a
search query and rank the resulting advertisement list
according to the advertisers" willingness to pay. In
particular, Google approach also considers the clickthrough rate
of each advertisement as an additional evidence for its
relevance. The authors conclude that Google"s strategy is better
than that used by Overture. As mentioned before, the
ranking problem in keyword advertising is different from that of
content-targeted advertising. Instead of dealing with
keywords provided by users in search queries, we have to deal
with the contents of a page which can be very diffuse.
Finally, the work in [4] focuses on improving search
engine results in a TREC collection by means of an automatic
query expansion method based on kNN [17]. Such method
resembles our expansion approach presented in section 3.
Our method is different from that presented by [4]. They
expand user queries applied to a document collection with
terms extracted from the top k documents returned as
answer to the query in the same collection. In our case, we
use two collections: an advertisement and a Web collection.
We expand triggering pages with terms extracted from the
Web collection and then we match these expanded pages to
the ads from the advertisement collection. By doing this, we
emphasize the main topics of the triggering pages, increasing
the possibility of associating relevant ads with them.
6. CONCLUSIONS
In this work we investigated ten distinct strategies for
associating ads with a Web page that is browsed
(content-targeted advertising). Five of our strategies attempt to
match the ads directly to the Web page. Because of that,
they are called matching strategies. The other five
strategies recognize that there is a vocabulary impedance problem
among ads and Web pages and attempt to solve the problem
by expanding the Web pages and the ads with new terms.
Because of that they are called impedance coupling
strategies.
Using a sample of a real case database with over 93
thousand ads, we evaluated our strategies. For the five matching
strategies, our results indicated that planned consideration
of additional evidence (such as the keywords provided by the
advertisers) yielded gains in average precision figures (for
our test collection) of 60%. This was obtained by a
strategy called AAK (for ads and keywords), which is taken as
the baseline for evaluating our more advanced impedance
coupling strategies.
For our five impedance coupling strategies, the results
indicate that additional gains in average precision of 50% (now
relative to the AAK strategy) are possible. These were
generated by expanding the Web page with new terms (obtained
using a sample Web collection containing over five million
pages) and the ads with the contents of the page they point
to (a hyperlink provided by the advertisers).
These are first time results that indicate that high quality
content-targeted advertising is feasible and practical.
7. ACKNOWLEDGEMENTS
This work was supported in part by the GERINDO
project, grant MCT/CNPq/CT-INFO 552.087/02-5, by CNPq
grant 300.188/95-1 (Berthier Ribeiro-Neto), and by CNPq
grant 303.576/04-9 (Edleno Silva de Moura). Marco Cristo
is supported by Fucapi, Manaus, AM, Brazil.
| paid placement strategy;web;bayesian network model;bayesian network;keyword targeted advertising;expansion term;ad and keyword;knn;on-line advertising;matching strategy;ad placement strategy;advertise;content-targeted advertising;impedance coupling strategy |
train_H-62 | Implicit User Modeling for Personalized Search | Information retrieval systems (e.g., web search engines) are critical for overcoming information overload. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users, resulting in inherently non-optimal retrieval performance. For example, a tourist and a programmer may use the same word java to search for different information, but the current search systems would return the same results. In this paper, we study how to infer a user"s interest from the user"s search context and use the inferred implicit user model for personalized search . We present a decision theoretic framework and develop techniques for implicit user modeling in information retrieval. We develop an intelligent client-side web search agent (UCAIR) that can perform eager implicit feedback, e.g., query expansion based on previous queries and immediate result reranking based on clickthrough information. Experiments on web search show that our search agent can improve search accuracy over the popular Google search engine. | 1. INTRODUCTION
Although many information retrieval systems (e.g., web search
engines and digital library systems) have been successfully deployed,
the current retrieval systems are far from optimal. A major
deficiency of existing retrieval systems is that they generally lack user
modeling and are not adaptive to individual users [17]. This
inherent non-optimality is seen clearly in the following two cases:
(1) Different users may use exactly the same query (e.g., Java) to
search for different information (e.g., the Java island in Indonesia or
the Java programming language), but existing IR systems return the
same results for these users. Without considering the actual user, it
is impossible to know which sense Java refers to in a query. (2)
A user"s information needs may change over time. The same user
may use Java sometimes to mean the Java island in Indonesia
and some other times to mean the programming language.
Without recognizing the search context, it would be again impossible to
recognize the correct sense.
In order to optimize retrieval accuracy, we clearly need to model
the user appropriately and personalize search according to each
individual user. The major goal of user modeling for information
retrieval is to accurately model a user"s information need, which is,
unfortunately, a very difficult task. Indeed, it is even hard for a user
to precisely describe what his/her information need is.
What information is available for a system to infer a user"s
information need? Obviously, the user"s query provides the most direct
evidence. Indeed, most existing retrieval systems rely solely on
the query to model a user"s information need. However, since a
query is often extremely short, the user model constructed based
on a keyword query is inevitably impoverished. An effective way
to improve user modeling in information retrieval is to ask the user
to explicitly specify which documents are relevant (i.e., useful for
satisfying his/her information need), and then to improve user
modeling based on such examples of relevant documents. This is called
relevance feedback, which has been proved to be quite effective for
improving retrieval accuracy [19, 20]. Unfortunately, in real world
applications, users are usually reluctant to make the extra effort to
provide relevant examples for feedback [11].
It is thus very interesting to study how to infer a user"s
information need based on any implicit feedback information, which
naturally exists through user interactions and thus does not require
any extra user effort. Indeed, several previous studies have shown
that implicit user modeling can improve retrieval accuracy. In [3],
a web browser (Curious Browser) is developed to record a user"s
explicit relevance ratings of web pages (relevance feedback) and
browsing behavior when viewing a page, such as dwelling time,
mouse click, mouse movement and scrolling (implicit feedback).
It is shown that the dwelling time on a page, amount of scrolling
on a page and the combination of time and scrolling have a strong
correlation with explicit relevance ratings, which suggests that
implicit feedback may be helpful for inferring user information need.
In [10], user clickthrough data is collected as training data to learn
a retrieval function, which is used to produce a customized ranking
of search results that suits a group of users" preferences. In [25],
the clickthrough data collected over a long time period is exploited
through query expansion to improve retrieval accuracy.
While a user may have general long term interests and
preferences for information, often he/she is searching for documents to
satisfy an ad-hoc information need, which only lasts for a short
period of time; once the information need is satisfied, the user
would generally no longer be interested in such information. For
example, a user may be looking for information about used cars
in order to buy one, but once the user has bought a car, he/she is
generally no longer interested in such information. In such cases,
implicit feedback information collected over a long period of time
is unlikely to be very useful, but the immediate search context and
feedback information, such as which of the search results for the
current information need are viewed, can be expected to be much
more useful. Consider the query Java again. Any of the
following immediate feedback information about the user could
potentially help determine the intended meaning of Java in the query:
(1) The previous query submitted by the user is hashtable (as
opposed to, e.g., travel Indonesia). (2) In the search results, the user
viewed a page where words such as programming, software,
and applet occur many times.
To the best of our knowledge, how to exploit such immediate
and short-term search context to improve search has so far not been
well addressed in the previous work. In this paper, we study how to
construct and update a user model based on the immediate search
context and implicit feedback information and use the model to
improve the accuracy of ad-hoc retrieval. In order to maximally
benefit the user of a retrieval system through implicit user modeling,
we propose to perform eager implicit feedback. That is, as soon
as we observe any new piece of evidence from the user, we would
update the system"s belief about the user"s information need and
respond with improved retrieval results based on the updated user
model. We present a decision-theoretic framework for optimizing
interactive information retrieval based on eager user model
updating, in which the system responds to every action of the user by
choosing a system action to optimize a utility function. In a
traditional retrieval paradigm, the retrieval problem is to match a query
with documents and rank documents according to their relevance
values. As a result, the retrieval process is a simple independent
cycle of query and result display. In the proposed new retrieval
paradigm, the user"s search context plays an important role and the
inferred implicit user model is exploited immediately to benefit the
user. The new retrieval paradigm is thus fundamentally different
from the traditional paradigm, and is inherently more general.
We further propose specific techniques to capture and exploit two
types of implicit feedback information: (1) identifying a related
immediately preceding query and using that query and the
corresponding search results to select appropriate terms to expand the current
query, and (2) exploiting the viewed document summaries to
immediately rerank any documents that have not yet been seen by the
user. Using these techniques, we develop a client-side web search
agent UCAIR (User-Centered Adaptive Information Retrieval) on
top of a popular search engine (Google). Experiments on web
search show that our search agent can improve search accuracy over
Google. Since the implicit information we exploit already naturally
exists through user interactions, the user does not need to make any
extra effort. Thus the developed search agent can improve existing
web search performance without additional effort from the user.
The remaining sections are organized as follows. In Section 2,
we discuss the related work. In Section 3, we present a
decision-theoretic interactive retrieval framework for implicit user modeling.
In Section 4, we present the design and implementation of an
intelligent client-side web search agent (UCAIR) that performs eager
implicit feedback. In Section 5, we report our experiment results
using the search agent. Section 6 concludes our work.
2. RELATED WORK
Implicit user modeling for personalized search has been
studied in previous work, but our work differs from all previous work
in several aspects: (1) We emphasize the exploitation of
immediate search context such as the related immediately preceding query
and the viewed documents in the same session, while most previous
work relies on long-term collection of implicit feedback
information [25]. (2) We perform eager feedback and bring the benefit of
implicit user modeling as soon as any new implicit feedback
information is available, while the previous work mostly exploits
long-term implicit feedback [10]. (3) We propose a retrieval framework
to integrate implicit user modeling with the interactive retrieval
process, while the previous work either studies implicit user modeling
separately from retrieval [3] or only studies specific retrieval
models for exploiting implicit feedback to better match a query with
documents [23, 27, 22]. (4) We develop and evaluate a
personalized Web search agent with online user studies, while most existing
work evaluates algorithms offline without real user interactions.
Currently some search engines provide rudimentary
personalization, such as Google Personalized web search [6], which allows
users to explicitly describe their interests by selecting from
predefined topics, so that those results that match their interests are
brought to the top, and My Yahoo! search [16], which gives users
the option to save web sites they like and block those they
dislike. In contrast, UCAIR personalizes web search through implicit
user modeling without any additional user efforts. Furthermore, the
personalization of UCAIR is provided on the client side. There are
two remarkable advantages to this. First, the user does not need to
worry about privacy infringement, which is a big concern for
personalized search [26]. Second, both the computation of
personalization and the storage of the user profile are done at the client
side so that the server load is reduced dramatically [9].
There have been many works studying user query logs [1] or
query dynamics [13]. UCAIR makes direct use of a user"s query
history to benefit the same user immediately in the same search
session. UCAIR first judges whether two neighboring queries
belong to the same information session and if so, it selects terms from
the previous query to perform query expansion.
Our query expansion approach is similar to automatic query
expansion [28, 15, 5], but instead of using pseudo feedback to expand
the query, we use user"s implicit feedback information to expand
the current query. These two techniques may be combined.
3. OPTIMIZATION IN INTERACTIVE IR
In interactive IR, a user interacts with the retrieval system through
an action dialogue, in which the system responds to each user
action with some system action. For example, the user"s action may
be submitting a query and the system"s response may be returning
a list of 10 document summaries. In general, the space of user
actions and system responses and their granularities would depend on
the interface of a particular retrieval system.
In principle, every action of the user can potentially provide new
evidence to help the system better infer the user"s information need.
Thus in order to respond optimally, the system should use all the
evidence collected so far about the user when choosing a response.
When viewed in this way, most existing search engines are clearly
non-optimal. For example, if a user has viewed some documents on
the first page of search results, when the user clicks on the Next
link to fetch more results, an existing retrieval system would still
return the next page of results retrieved based on the original query
without considering the new evidence that a particular result has
been viewed by the user.
We propose to optimize retrieval performance by adapting
system responses based on every action that a user has taken, and cast
the optimization problem as a decision task. Specifically, at any
time, the system would attempt to do two tasks: (1) User model
updating: Monitor any useful evidence from the user regarding
his/her information need and update the user model as soon as such
evidence is available; (2) Improving search results: Rerank
immediately all the documents that the user has not yet seen, as soon
as the user model is updated. We emphasize eager updating and
reranking, which makes our work quite different from any existing
work. Below we present a formal decision theoretic framework for
optimizing retrieval performance through implicit user modeling in
interactive information retrieval.
3.1 A decision-theoretic framework
Let A be the set of all user actions and R(a) be the set of all
possible system responses to a user action a ∈ A. At any time, let
At = (a1, ..., at) be the observed sequence of user actions so far
(up to time point t) and Rt−1 = (r1, ..., rt−1) be the responses that
the system has made responding to the user actions. The system"s
goal is to choose an optimal response rt ∈ R(at) for the current
user action at.
Let M be the space of all possible user models. We further define a loss function L(a, r, m) ∈ ℝ, where a ∈ A is a user action, r ∈ R(a) is a system response, and m ∈ M is a user model. L(a, r, m) encodes our decision preferences and assesses the optimality of responding with r when the current user model is m and the current user action is a. According to Bayesian decision theory, the optimal decision at time t is to choose a response that minimizes the Bayes risk, i.e.,

$$r_t^* = \arg\min_{r \in R(a_t)} \int_M L(a_t, r, m_t)\, P(m_t \mid U, D, A_t, R_{t-1})\, dm_t \qquad (1)$$
where P(mt|U, D, At, Rt−1) is the posterior probability of the
user model mt given all the observations about the user U we have
made up to time t.
To simplify the computation of Equation 1, let us assume that the posterior probability mass $P(m_t \mid U, D, A_t, R_{t-1})$ is mostly concentrated on the mode $m_t^* = \arg\max_{m_t} P(m_t \mid U, D, A_t, R_{t-1})$. We can then approximate the integral with the value of the loss function at $m_t^*$. That is,

$$r_t^* \approx \arg\min_{r \in R(a_t)} L(a_t, r, m_t^*) \qquad (2)$$

where $m_t^* = \arg\max_{m_t} P(m_t \mid U, D, A_t, R_{t-1})$.
Leaving aside how to define and estimate these probabilistic models and the loss function, we can see that such a decision-theoretic formulation suggests that, in order to choose the optimal response to $a_t$, the system should perform two tasks: (1) compute the current user model and obtain $m_t^*$ based on all the useful information; (2) choose a response $r_t$ to minimize the loss function value $L(a_t, r_t, m_t^*)$. When $a_t$ does not affect our belief about $m_t^*$, the first step can be omitted and we may reuse $m_{t-1}^*$ for $m_t^*$.
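To make this concrete, the two-step decision can be sketched in a few lines of Python; the candidate models, the posterior, the candidate responses, and the loss function are all assumed to be supplied by the caller, and the names are ours rather than part of any actual UCAIR code.

```python
# Two-step approximation of the Bayes decision (Equation 2):
# (1) take the most probable user model m*_t, (2) pick the response
# that minimizes the loss under that single model.

def estimate_user_model(candidate_models, posterior, observations):
    """m*_t = argmax_m P(m | U, D, A_t, R_{t-1})."""
    return max(candidate_models, key=lambda m: posterior(m, observations))

def choose_response(action, candidate_responses, user_model, loss):
    """r*_t = argmin_{r in R(a_t)} L(a_t, r, m*_t)."""
    return min(candidate_responses, key=lambda r: loss(action, r, user_model))
```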
Note that our framework is quite general since we can
potentially model any kind of user actions and system responses. In most
cases, as we may expect, the system"s response is some ranking of
documents, i.e., for most actions a, R(a) consists of all the
possible rankings of the unseen documents, and the decision problem
boils down to choosing the best ranking of unseen documents based
on the most current user model. When a is the action of submitting
a keyword query, such a response is exactly what a current retrieval
system would do. However, we can easily imagine that a more
intelligent web search engine would respond to a user"s clicking of
the Next link (to fetch more unseen results) with a more
optimized ranking of documents based on any viewed documents in
the current page of results. In fact, according to our eager updating
strategy, we may even allow a system to respond to a user"s clicking
of browser"s Back button after viewing a document in the same
way, so that the user can maximally benefit from implicit feedback.
These are precisely what our UCAIR system does.
3.2 User models
A user model m ∈ M represents what we know about the user
U, so in principle, it can contain any information about the user
that we wish to model. We now discuss two important components
in a user model.
The first component is a component model of the user"s
information need. Presumably, the most important factor affecting the
optimality of the system"s response is how well the response addresses
the user"s information need. Indeed, at any time, we may assume
that the system has some belief about what the user is interested
in, which we model through a term vector x = (x1, ..., x|V |),
where V = {w1, ..., w|V |} is the set of all terms (i.e., vocabulary)
and xi is the weight of term wi. Such a term vector is commonly
used in information retrieval to represent both queries and
documents. For example, the vector-space model assumes that both
the query and the documents are represented as term vectors and
the score of a document with respect to a query is computed based
on the similarity between the query vector and the document
vector [21]. In a language modeling approach, we may also regard
the query unigram language model [12, 29] or the relevance model
[14] as a term vector representation of the user"s information need.
Intuitively, x would assign high weights to terms that characterize
the topics which the user is interested in.
The second component we may include in our user model is the
documents that the user has already viewed. Obviously, even if a
document is relevant, if the user has already seen the document, it
would not be useful to present the same document again. We thus
introduce another variable S ⊂ D (D is the whole set of documents
in the collection) to denote the subset of documents in the search
results that the user has already seen/viewed.
In general, at time t, we may represent a user model as mt =
(S, x, At, Rt−1), where S is the seen documents, x is the system"s
understanding of the user"s information need, and (At, Rt−1)
represents the user"s interaction history. Note that an even more
general user model may also include other factors such as the user"s
reading level and occupation.
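For concreteness, such a user model $m_t = (S, x, A_t, R_{t-1})$ could be held in a simple record like the following sketch (the field names are our own illustration, not UCAIR's internal representation).

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class UserModel:
    """m_t = (S, x, A_t, R_{t-1})."""
    seen_docs: Set[str] = field(default_factory=set)              # S: documents already viewed
    need_vector: Dict[str, float] = field(default_factory=dict)   # x: term -> weight
    actions: List[str] = field(default_factory=list)              # A_t: user actions so far
    responses: List[List[str]] = field(default_factory=list)      # R_{t-1}: past system responses
```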
If we assume that the uncertainty of a user model $m_t$ is solely due to the uncertainty of $x$, the computation of our current estimate of the user model $m_t^*$ will mainly involve computing our best estimate of $x$. That is, the system would choose a response according to

$$r_t^* = \arg\min_{r \in R(a_t)} L(a_t, r, S, x^*, A_t, R_{t-1}) \qquad (3)$$

where $x^* = \arg\max_{x} P(x \mid U, D, A_t, R_{t-1})$. This is the decision mechanism implemented in the UCAIR system to be described later. In this system, we avoided specifying the probabilistic model $P(x \mid U, D, A_t, R_{t-1})$ by computing $x^*$ directly with some existing feedback method.
3.3 Loss functions
The exact definition of loss function L depends on the responses,
thus it is inevitably application-specific. We now briefly discuss
some possibilities when the response is to rank all the unseen
documents and present the top k of them. Let $r = (d_1, ..., d_k)$ be the top k documents, S be the set of documents seen by the user, and $x^*$ be the system's best guess of the user's information need. We may simply define the loss associated with r as the negative sum of the probability that each of the $d_i$ is relevant, i.e.,

$$L(a, r, m) = -\sum_{i=1}^{k} P(\text{relevant} \mid d_i, m).$$

Clearly, in order to minimize this loss function, the optimal response r would contain the k documents with the highest probability of relevance, which is intuitively reasonable.
One deficiency of this top-k loss function is that it is not
sensitive to the internal order of the selected top k documents, so
switching the ranking order of a non-relevant document and a relevant one
would not affect the loss, which is unreasonable. To model
ranking, we can introduce a factor of the user model - the probability
of each of the k documents being viewed by the user, P(view|di),
and define the following ranking loss function:
$$L(a, r, m) = -\sum_{i=1}^{k} P(\text{view} \mid d_i)\, P(\text{relevant} \mid d_i, m)$$
Since in general, if di is ranked above dj (i.e., i < j), P(view|di) >
P(view|dj), this loss function would favor a decision to rank
relevant documents above non-relevant ones, as otherwise, we could
always switch di with dj to reduce the loss value. Thus the
system should simply perform a regular retrieval and rank documents
according to the probability of relevance [18].
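The sketch below evaluates this ranking loss for a candidate ranking and illustrates that ordering documents by their probability of relevance yields the smaller loss; the rank-discounted view probability used as a default is our own assumption for illustration.

```python
def ranking_loss(ranking, p_relevant, p_view=lambda rank: 1.0 / (rank + 1)):
    """L(a, r, m) = -sum_i P(view|d_i) * P(relevant|d_i, m) over the ranked list."""
    return -sum(p_view(i) * p_relevant[d] for i, d in enumerate(ranking))

# Example: the relevance-ordered ranking incurs the lower (better) loss.
p_rel = {"d1": 0.9, "d2": 0.2, "d3": 0.6}
by_relevance = sorted(p_rel, key=p_rel.get, reverse=True)   # ['d1', 'd3', 'd2']
print(ranking_loss(by_relevance, p_rel) < ranking_loss(["d1", "d2", "d3"], p_rel))  # True
```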
Depending on the user"s retrieval preferences, there can be many
other possibilities. For example, if the user does not want to see
redundant documents, the loss function should include some
redundancy measure on r based on the already seen documents S.
Of course, when the response is not to choose a ranked list of
documents, we would need a different loss function. We discuss
one such example that is relevant to the search agent that we
implement. When a user enters a query qt (current action), our search
agent relies on some existing search engine to actually carry out
search. In such a case, even though the search agent does not have
control of the retrieval algorithm, it can still attempt to optimize the
search results through refining the query sent to the search engine
and/or reranking the results obtained from the search engine. The
loss functions for reranking are already discussed above; we now
take a look at the loss functions for query refinement.
Let f be the retrieval function of the search engine that our agent
uses so that f(q) would give us the search results using query q.
Given that the current action of the user is entering a query qt (i.e.,
at = qt), our response would be f(q) for some q. Since we have
no choice of f, our decision is to choose a good q. Formally,
$$r_t^* = \arg\min_{r_t} L(a, r_t, m) = \arg\min_{f(q)} L(a, f(q), m) = f(\arg\min_{q} L(q_t, f(q), m))$$

which shows that our goal is to find $q^* = \arg\min_q L(q_t, f(q), m)$,
i.e., an optimal query that would give us the best f(q). A different
choice of loss function L(qt, f(q), m) would lead to a different
query refinement strategy. In UCAIR, we heuristically compute q∗
by expanding qt with terms extracted from rt−1 whenever qt−1 and
qt have high similarity. Note that rt−1 and qt−1 are contained in
m as part of the user"s interaction history.
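A minimal sketch of this query-refinement decision, treating the search engine f as a black box; how candidate queries are generated and how the loss is defined are left abstract, and the function name is illustrative.

```python
def refine_and_search(current_query, candidate_queries, search_engine, loss, user_model):
    """Return f(q*) where q* = argmin_q L(q_t, f(q), m)."""
    best_query = min(candidate_queries,
                     key=lambda q: loss(current_query, search_engine(q), user_model))
    return search_engine(best_query)
```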
3.4 Implicit user modeling
Implicit user modeling is captured in our framework through the computation of $x^* = \arg\max_x P(x \mid U, D, A_t, R_{t-1})$, i.e., the system's current belief of what the user's information need is. Here
again there may be many possibilities, leading to different
algorithms for implicit user modeling. We now discuss a few of them.
First, when two consecutive queries are related, the previous
query can be exploited to enrich the current query and provide more
search context to help disambiguation. For this purpose, instead of
performing query expansion as we did in the previous section, we
could also compute an updated x∗
based on the previous query and
retrieval results. The computed new user model can then be used to
rank the documents with a standard information retrieval model.
Second, we can also infer a user"s interest based on the
summaries of the viewed documents. When a user is presented with a
list of summaries of top ranked documents, if the user chooses to
skip the first n documents and to view the (n+1)-th document, we
may infer that the user is not interested in the displayed summaries
for the first n documents, but is attracted by the displayed summary
of the (n + 1)-th document. We can thus use these summaries as
negative and positive examples to learn a more accurate user model $x^*$. Here many standard relevance feedback techniques can be
exploited [19, 20]. Note that we should use the displayed summaries,
as opposed to the actual contents of those documents, since it is
possible that the displayed summary of the viewed document is
relevant, but the document content is actually not. Similarly, a
displayed summary may mislead a user to skip a relevant document.
Inferring user models based on such displayed information, rather
than the actual content of a document is an important difference
between UCAIR and some other similar systems.
In UCAIR, both of these strategies for inferring an implicit user
model are implemented.
4. UCAIR: A PERSONALIZED
SEARCH AGENT
4.1 Design
In this section, we present a client-side web search agent called
UCAIR, in which we implement some of the methods discussed
in the previous section for performing personalized search through
implicit user modeling. UCAIR is a web browser plug-in 1
that
acts as a proxy for web search engines. Currently, it is only
implemented for Internet Explorer and Google, but it is a matter of
engineering to make it run on other web browsers and interact with
other search engines.
The issue of privacy is a primary obstacle for deploying any real
world applications involving serious user modeling, such as
personalized search. For this reason, UCAIR is strictly running as
a client-side search agent, as opposed to a server-side application.
This way, the captured user information always resides on the
computer that the user is using, thus the user does not need to release
any information to the outside. Client-side personalization also
allows the system to easily observe a lot of user information that may
not be easily available to a server. Furthermore, performing
personalized search on the client-side is more scalable than on the
server side, since the overhead of computation and storage is distributed
among clients.
As shown in Figure 1, the UCAIR toolbar has 3 major
components: (1) The (implicit) user modeling module captures a user"s
search context and history information, including the submitted
queries and any clicked search results and infers search session
boundaries. (2) The query modification module selectively
improves the query formulation according to the current user model.
(3) The result re-ranking module immediately re-ranks any unseen
search results whenever the user model is updated.
In UCAIR, we consider four basic user actions: (1) submitting a
keyword query; (2) viewing a document; (3) clicking the Back
button; (4) clicking the Next link on a result page. For each
of these four actions, the system responds with, respectively, (1)
Figure 1: UCAIR architecture (the user's queries, results, and clickthrough pass through UCAIR, whose user modeling, query modification, and result re-ranking modules, together with a result buffer and a search history log of past queries and clicked results, mediate between the user and a search engine such as Google)
generating a ranked list of results by sending a possibly expanded
query to a search engine; (2) updating the information need model
x; (3) reranking the unseen results on the current result page based
on the current model x; and (4) reranking the unseen pages and
generating the next page of results based on the current model x.
Behind these responses, there are three basic tasks: (1) Decide
whether the previous query is related to the current query and if so
expand the current query with useful terms from the previous query
or the results of the previous query. (2) Update the information
need model x based on a newly clicked document summary. (3)
Rerank a set of unseen documents based on the current model x.
Below we describe our algorithms for each of them.
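Before turning to those algorithms, the action-to-response mapping just described can be sketched schematically as follows; the `agent` object and its methods are placeholders for the modules described in Sections 4.2-4.4, not actual UCAIR APIs.

```python
def handle_action(action, agent):
    """Map each of the four user actions to UCAIR's response."""
    if action.kind == "query":
        query = agent.expand_if_same_session(action.text)    # Section 4.2
        return agent.search(query)
    if action.kind == "view_result":
        agent.update_need_vector(action.summary)              # Section 4.3
        return None
    if action.kind == "back":
        return agent.rerank_current_page()                    # Section 4.4
    if action.kind == "next":
        return agent.rerank_and_fetch_next_page()             # Section 4.4
    raise ValueError("unknown user action")
```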
4.2 Session boundary detection and query
expansion
To effectively exploit previous queries and their corresponding
clickthrough information, UCAIR needs to judge whether two
adjacent queries belong to the same search session (i.e., detect
session boundaries). Existing work on session boundary detection is
mostly in the context of web log analysis (e.g., [8]), and uses
statistical information rather than textual features. Since our
clientside agent does not have access to server query logs, we make
session boundary decisions based on textual similarity between two
queries. Because related queries do not necessarily share the same
words (e.g., java island and travel Indonesia), it is insufficient
to use only query text. Therefore we use the search results of the
two queries to help decide whether they are topically related. For
example, for the above queries java island and travel
Indonesia", the words java, bali, island, indonesia and travel
may occur frequently in both queries" search results, yielding a high
similarity score.
We only use the titles and summaries of the search results to
calculate the similarity since they are available in the retrieved search
result page and fetching the full text of every result page would
significantly slow down the process. To compensate for the terseness
of titles and summaries, we retrieve more results than a user would
normally view for the purpose of detecting session boundaries
(typically 50 results).
The similarity between the previous query $q'$ and the current query $q$ is computed as follows. Let $\{s'_1, s'_2, \ldots, s'_n\}$ and $\{s_1, s_2, \ldots, s_n\}$ be the result sets for the two queries. We use the pivoted normalization TF-IDF weighting formula [24] to compute a term weight vector $s_i$ for each result $s_i$. We define the average result $s_{avg}$ to be the centroid of all the result vectors, i.e., $(s_1 + s_2 + \cdots + s_n)/n$. The cosine similarity between the two average results is calculated as

$$\frac{s'_{avg} \cdot s_{avg}}{\sqrt{s'^{2}_{avg} \cdot s^{2}_{avg}}}$$

If the similarity value exceeds a predefined threshold, the two queries will be considered to be in the same information session.
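A sketch of this session-boundary test appears below; it substitutes a plain TF-IDF weighting for the pivoted normalization formula used by UCAIR, and the similarity threshold is illustrative.

```python
import math
from collections import Counter

def tfidf_vectors(snippets):
    """Weight each result title+summary as a TF-IDF term vector."""
    docs = [Counter(s.lower().split()) for s in snippets]
    n = len(docs)
    df = Counter(term for d in docs for term in d)
    return [{t: (1 + math.log(c)) * math.log(1 + n / df[t]) for t, c in d.items()}
            for d in docs]

def centroid(vectors):
    """Average result vector s_avg = (s_1 + ... + s_n) / n."""
    avg = Counter()
    for v in vectors:
        for t, w in v.items():
            avg[t] += w / len(vectors)
    return avg

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def same_session(prev_snippets, cur_snippets, threshold=0.2):
    """Judge two adjacent queries to be in one session if the centroids of
    their result snippets are similar enough."""
    return cosine(centroid(tfidf_vectors(prev_snippets)),
                  centroid(tfidf_vectors(cur_snippets))) >= threshold
```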
If the previous query and the current query are found to belong
to the same search session, UCAIR would attempt to expand the
current query with terms from the previous query and its search
results. Specifically, for each term in the previous query or the
corresponding search results, if its frequency in the results of the
current query is greater than a preset threshold (e.g. 5 results out
of 50), the term would be added to the current query to form an
expanded query. In this case, UCAIR would send this expanded
query rather than the original one to the search engine and return
the results corresponding to the expanded query. Currently, UCAIR
only uses the immediate preceding query for query expansion; in
principle, we could exploit all related past queries.
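The expansion rule just described can be sketched as follows; the threshold of 5 supporting results follows the text, while tokenization and stop-word handling are deliberately simplified.

```python
def expand_query(current_query, previous_query, prev_snippets, cur_snippets,
                 min_support=5):
    """Add a term from the previous query or its result snippets to the current
    query if it occurs in at least `min_support` of the current results."""
    candidates = set(previous_query.lower().split())
    for s in prev_snippets:
        candidates.update(s.lower().split())
    current_terms = set(current_query.lower().split())
    added = []
    for term in sorted(candidates - current_terms):
        support = sum(1 for s in cur_snippets if term in s.lower().split())
        if support >= min_support:
            added.append(term)
    return current_query + " " + " ".join(added) if added else current_query
```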
4.3 Information need model updating
Suppose at time t, we have observed that the user has viewed
k documents whose summaries are s1, ..., sk. We update our user
model by computing a new information need vector with a standard
feedback method in information retrieval (i.e., Rocchio [19]).
According to the vector space retrieval model, each clicked summary
si can be represented by a term weight vector si with each term
weighted by a TF-IDF weighting formula [21]. Rocchio computes
the centroid vector of all the summaries and interpolates it with the
original query vector to obtain an updated term vector. That is,
$$x = \alpha q + (1 - \alpha)\,\frac{1}{k}\sum_{i=1}^{k} s_i$$
where q is the query vector, k is the number of summaries the user
clicks immediately following the current query and α is a parameter
that controls the influence of the clicked summaries on the inferred
information need model. In our experiments, α is set to 0.5. Note
that we update the information need model whenever the user views
a document.
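A sketch of this Rocchio-style update on term-weight dictionaries, with α = 0.5 as in the experiments; the TF-IDF weighting of each clicked summary is assumed to be computed elsewhere.

```python
def update_need_vector(query_vec, clicked_summary_vecs, alpha=0.5):
    """Rocchio update: x = alpha * q + (1 - alpha) * (1/k) * sum_i s_i."""
    x = {t: alpha * w for t, w in query_vec.items()}
    k = len(clicked_summary_vecs)
    for s in clicked_summary_vecs:
        for t, w in s.items():
            x[t] = x.get(t, 0.0) + (1.0 - alpha) * w / k
    return x
```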
4.4 Result reranking
In general, we want to rerank all the unseen results as soon as the
user model is updated. Currently, UCAIR implements reranking in
two cases, corresponding to the user clicking the Back button
and Next link in the Internet Explorer. In both cases, the current
(updated) user model would be used to rerank the unseen results so
that the user would see improved search results immediately.
To rerank any unseen document summaries, UCAIR uses the
standard vector space retrieval model and scores each summary
based on the similarity of the result and the current user information
need vector x [21]. Since implicit feedback is not completely
reliable, we bring up only a small number (e.g. 5) of highest reranked
results to be followed by any originally high ranked results.
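A sketch of this reranking step: each unseen summary is scored against the current need vector x by cosine similarity, and only a small number of top-scoring results are promoted ahead of the original order. The `vectorize` argument stands in for the TF-IDF weighting of a summary.

```python
import math

def _cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rerank_unseen(unseen_summaries, vectorize, need_vector, promote=5):
    """Bring up the `promote` best-matching unseen results, then keep the
    remaining results in their original order."""
    ranked = sorted(unseen_summaries,
                    key=lambda s: _cosine(vectorize(s), need_vector), reverse=True)
    promoted = ranked[:promote]
    return promoted + [s for s in unseen_summaries if s not in promoted]
```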
| Rank | Google result (user query = java map) | UCAIR result (previous query = travel Indonesia; expanded query = java map Indonesia) | UCAIR result (previous query = hashtable; expanded query = java map class) |
|---|---|---|---|
| 1 | Java map projections of the world ... (www.btinternet.com/ se16/js/mapproj.htm) | Lonely Planet - Indonesia Map (www.lonelyplanet.com/mapshells/...) | Map (Java 2 Platform SE v1.4.2) (java.sun.com/j2se/1.4.2/docs/...) |
| 2 | Java map projections of the world ... (www.btinternet.com/ se16/js/oldmapproj.htm) | INDONESIA TOURISM : CENTRAL JAVA - MAP (www.indonesia-tourism.com/...) | Java 2 Platform SE v1.3.1: Interface Map (java.sun.com/j2se/1.3/docs/api/java/...) |
| 3 | Java Map (java.sun.com/developer/...) | INDONESIA TOURISM : WEST JAVA - MAP (www.indonesia-tourism.com/ ...) | An Introduction to Java Map Collection Classes (www.oracle.com/technology/...) |
| 4 | Java Technology Concept Map (java.sun.com/developer/onlineTraining/...) | IndoStreets - Java Map (www.indostreets.com/maps/java/) | An Introduction to Java Map Collection Classes (www.theserverside.com/news/...) |
| 5 | Science@NASA Home (science.nasa.gov/Realtime/...) | Indonesia Regions and Islands Maps, Bali, Java, ... (www.maps2anywhere.com/Maps/...) | Koders - Mappings.java (www.koders.com/java/) |
| 6 | An Introduction to Java Map Collection Classes (www.oracle.com/technology/...) | Indonesia City Street Map,... (www.maps2anywhere.com/Maps/...) | Hibernate simplifies inheritance mapping (www.ibm.com/developerworks/java/...) |
| 7 | Lonely Planet - Java Map (www.lonelyplanet.com/mapshells/) | Maps Of Indonesia (www.embassyworld.com/maps/...) | tmap 30.map Class Hierarchy (tmap.pmel.noaa.gov/...) |
| 8 | ONJava.com: Java API Map (www.onjava.com/pub/a/onjava/api map/) | Maps of Indonesia by Peter Loud (users.powernet.co.uk/...) | Class Scope (jalbum.net/api/se/datadosen/util/Scope.html) |
| 9 | GTA San Andreas : Sam (www.gtasanandreas.net/sam/) | Maps of Indonesia by Peter Loud (users.powernet.co.uk/mkmarina/indonesia/) | Class PrintSafeHashMap (jalbum.net/api/se/datadosen/...) |
| 10 | INDONESIA TOURISM : WEST JAVA - MAP (www.indonesia-tourism.com/...) | indonesiaphoto.com (www.indonesiaphoto.com/...) | Java Pro - Union and Vertical Mapping of Classes (www.fawcette.com/javapro/...) |

Table 1: Sample results of query expansion
5. EVALUATION OF UCAIR
We now present some results on evaluating the two major UCAIR
functions: selective query expansion and result reranking based on
user clickthrough data.
5.1 Sample results
The query expansion strategy implemented in UCAIR is
intentionally conservative to avoid misinterpretation of implicit user
models. In practice, whenever it chooses to expand the query, the
expansion usually makes sense. In Table 1, we show how UCAIR can
successfully distinguish two different search contexts for the query
java map, corresponding to two different previous queries (i.e.,
travel Indonesia vs. hashtable). Due to implicit user modeling,
UCAIR intelligently figures out to add Indonesia and class,
respectively, to the user"s query java map, which would
otherwise be ambiguous as shown in the original results from Google
on March 21, 2005. UCAIR"s results are much more accurate than
Google"s results and reflect personalization in search.
The eager implicit feedback component is designed to
immediately respond to a user"s activity such as viewing a document. In
Figure 2, we show how UCAIR can successfully disambiguate an
ambiguous query jaguar by exploiting a viewed document
summary. In this case, the initial retrieval results using jaguar (shown
on the left side) contain two results about the Jaguar cars followed
by two results about the Jaguar software. However, after the user
views the web page content of the second result (about Jaguar
car) and returns to the search result page by clicking Back
button, UCAIR automatically nominates two new search results about
Jaguar cars (shown on the right side), while the original two results
about Jaguar software are pushed down on the list (unseen from the
picture).
5.2 Quantitative evaluation
To further evaluate UCAIR quantitatively, we conduct a user
study on the effectiveness of the eager implicit feedback
component. It is a challenge to quantitatively evaluate the potential
performance improvement of our proposed model and UCAIR over
Google in an unbiased way [7]. Here, we design a user study,
in which participants would do normal web search and judge a
randomly and anonymously mixed set of results from Google and
UCAIR at the end of the search session; participants do not know
whether a result comes from Google or UCAIR.
We recruited 6 graduate students for this user study, who have
different backgrounds (3 computer science, 2 biology, and 1 chemistry).
chem<top>
<num> Number: 716
<title> Spammer arrest sue
<desc> Description: Have any spammers
been arrested or sued for sending unsolicited
e-mail?
<narr> Narrative: Instances of arrests,
prosecutions, convictions, and punishments
of spammers, and lawsuits against them are
relevant. Documents which describe laws to
limit spam without giving details of lawsuits
or criminal trials are not relevant.
</top>
Figure 3: An example of TREC query topic, expressed in a
form which might be given to a human assistant or librarian.
We use query topics from the TREC (Text REtrieval Conference, http://trec.nist.gov/) 2004 Terabyte track [2] and the TREC 2003 Web track [4] topic distillation task in the way described below.
An example topic from TREC 2004 Terabyte track appears in
Figure 3. The title is a short phrase and may be used as a query
to the retrieval system. The description field provides a slightly
longer statement of the topic requirement, usually expressed as a
single complete sentence or question. Finally the narrative supplies
additional information necessary to fully specify the requirement,
expressed in the form of a short paragraph.
Initially, each participant would browse 50 topics either from
Terabyte track or Web track and pick 5 or 7 most interesting topics.
For each picked topic, the participant would essentially do the
normal web search using UCAIR to find many relevant web pages by
using the title of the query topic as the initial keyword query.
During this process, the participant may view the search results and
possibly click on some interesting ones to view the web pages, just
as in a normal web search. There is no requirement or restriction
on how many queries the participant must submit or when the
participant should stop the search for one topic. When the participant
plans to change the search topic, he/she will simply press a button
Figure 2: Screen shots for result reranking
to evaluate the search results before actually switching to the next
topic.
At the time of evaluation, 30 top ranked results from Google and
UCAIR (some are overlapping) are randomly mixed together so
that the participant would not know whether a result comes from
Google or UCAIR. The participant would then judge the relevance
of these results. We measure precision at top n (n = 5, 10, 20, 30)
documents of Google and UCAIR. We also evaluate precisions at
different recall levels.
Altogether, 368 documents from the Google search results and 429 documents from the UCAIR results were judged as relevant by the participants. Scatter plots of precision at top 10 and top 20 documents
are shown in Figure 4 and Figure 5 respectively (The scatter plot
of precision at top 30 documents is very similar to precision at top
20 documents). Each point of the scatter plots represents the
precisions of Google and UCAIR on one query topic.
Table 2 shows the average precision at top n documents among
32 topics. From Figure 4, Figure 5 and Table 2, we see that the
search results from UCAIR are consistently better than those from
Google by all the measures. Moreover, the performance
improvement is more dramatic for precision at top 20 documents than that
at precision at top 10 documents. One explanation for this is that
the more interaction the user has with the system, the more
clickthrough data UCAIR can be expected to collect. Thus the retrieval
system can build more precise implicit user models, which lead to
better retrieval accuracy.
| Ranking Method | prec@5 | prec@10 | prec@20 | prec@30 |
|---|---|---|---|---|
| Google | 0.538 | 0.472 | 0.377 | 0.308 |
| UCAIR | 0.581 | 0.556 | 0.453 | 0.375 |
| Improvement | 8.0% | 17.8% | 20.2% | 21.8% |

Table 2: Average precision at top n documents for 32 query topics
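For reference, the precision-at-n figures reported in Table 2 follow the usual definition, sketched below (this is an illustration, not the authors' evaluation script).

```python
def precision_at_n(ranked_results, relevant_docs, n):
    """Fraction of the top n results that were judged relevant."""
    return sum(1 for d in ranked_results[:n] if d in relevant_docs) / float(n)
```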
The plot in Figure 6 shows the precision-recall curves for UCAIR
and Google, where it is clearly seen that the performance of UCAIR
Figure 4: Precision at top 10 documents of UCAIR and Google (scatterplot of UCAIR prec@10 vs. Google prec@10, one point per query topic)
is consistently and considerably better than that of Google at all
levels of recall.
6. CONCLUSIONS
In this paper, we studied how to exploit implicit user modeling to
intelligently personalize information retrieval and improve search
accuracy. Unlike most previous work, we emphasize the use of
immediate search context and implicit feedback information as well
as eager updating of search results to maximally benefit a user. We
presented a decision-theoretic framework for optimizing
interactive information retrieval based on eager user model updating, in
which the system responds to every action of the user by
choosing a system action to optimize a utility function. We further
propose specific techniques to capture and exploit two types of implicit
feedback information: (1) identifying a related immediately
preceding query and using that query and the corresponding search results
to select appropriate terms to expand the current query, and (2)
exploiting the viewed document summaries to immediately rerank
any documents that have not yet been seen by the user. Using these
techniques, we develop a client-side web search agent (UCAIR)
on top of a popular search engine (Google). Experiments on web
search show that our search agent can improve search accuracy over
Figure 5: Precision at top 20 documents of UCAIR and Google (scatterplot of UCAIR prec@20 vs. Google prec@20, one point per query topic)
Figure 6: Precision-recall curves of UCAIR and Google (recall on the x-axis, precision on the y-axis; curves for the Google result and the UCAIR result)
Google. Since the implicit information we exploit already naturally
exists through user interactions, the user does not need to make any
extra effort. The developed search agent thus can improve
existing web search performance without any additional effort from the
user.
7. ACKNOWLEDGEMENT
We thank the six participants of our evaluation experiments. This
work was supported in part by the National Science Foundation
grants IIS-0347933 and IIS-0428472.
8. REFERENCES
[1] S. M. Beitzel, E. C. Jensen, A. Chowdhury, D. Grossman,
and O. Frieder. Hourly analysis of a very large topically
categorized web query log. In Proceedings of SIGIR 2004,
pages 321-328, 2004.
[2] C. Clarke, N. Craswell, and I. Soboroff. Overview of the
TREC 2004 terabyte track. In Proceedings of TREC 2004,
2004.
[3] M. Claypool, P. Le, M. Waseda, and D. Brown. Implicit
interest indicators. In Proceedings of Intelligent User
Interfaces 2001, pages 33-40, 2001.
[4] N. Craswell, D. Hawking, R. Wilkinson, and M. Wu.
Overview of the TREC 2003 web track. In Proceedings of
TREC 2003, 2003.
[5] W. B. Croft, S. Cronen-Townsend, and V. Larvrenko.
Relevance feedback and personalization: A language
modeling perspective. In Proeedings of Second DELOS
Workshop: Personalisation and Recommender Systems in
Digital Libraries, 2001.
[6] Google Personalized. http://labs.google.com/personalized.
[7] D. Hawking, N. Craswell, P. B. Thistlewaite, and D. Harman.
Results and challenges in web search evaluation. Computer
Networks, 31(11-16):1321-1330, 1999.
[8] X. Huang, F. Peng, A. An, and D. Schuurmans. Dynamic
web log session identification with statistical language
models. Journal of the American Society for Information
Science and Technology, 55(14):1290-1303, 2004.
[9] G. Jeh and J. Widom. Scaling personalized web search. In
Proceedings of WWW 2003, pages 271-279, 2003.
[10] T. Joachims. Optimizing search engines using clickthrough
data. In Proceedings of SIGKDD 2002, pages 133-142,
2002.
[11] D. Kelly and J. Teevan. Implicit feedback for inferring user
preference: A bibliography. SIGIR Forum, 37(2):18-28,
2003.
[12] J. Lafferty and C. Zhai. Document language models, query
models, and risk minimization for information retrieval. In
Proceedings of SIGIR"01, pages 111-119, 2001.
[13] T. Lau and E. Horvitz. Patterns of search: Analyzing and
modeling web query refinement. In Proceedings of the
Seventh International Conference on User Modeling (UM),
pages 145 -152, 1999.
[14] V. Lavrenko and B. Croft. Relevance-based language
models. In Proceedings of SIGIR"01, pages 120-127, 2001.
[15] M. Mitra, A. Singhal, and C. Buckley. Improving automatic
query expansion. In Proceedings of SIGIR 1998, pages
206-214, 1998.
[16] My Yahoo! http://mysearch.yahoo.com.
[17] G. Nunberg. As google goes, so goes the nation. New York
Times, May 2003.
[18] S. E. Robertson. The probability ranking principle in IR.
Journal of Documentation, 33(4):294-304, 1977.
[19] J. J. Rocchio. Relevance feedback in information retrieval. In
The SMART Retrieval System: Experiments in Automatic
Document Processing, pages 313-323. Prentice-Hall Inc.,
1971.
[20] G. Salton and C. Buckley. Improving retrieval performance
by retrieval feedback. Journal of the American Society for
Information Science, 41(4):288-297, 1990.
[21] G. Salton and M. J. McGill. Introduction to Modern
Information Retrieval. McGraw-Hill, 1983.
[22] X. Shen, B. Tan, and C. Zhai. Context-sensitive information
retrieval using implicit feedback. In Proceedings of SIGIR
2005, pages 43-50, 2005.
[23] X. Shen and C. Zhai. Exploiting query history for document
ranking in interactive information retrieval (Poster). In
Proceedings of SIGIR 2003, pages 377-378, 2003.
[24] A. Singhal. Modern information retrieval: A brief overview.
Bulletin of the IEEE Computer Society Technical Committee
on Data Engineering, 24(4):35-43, 2001.
[25] K. Sugiyama, K. Hatano, and M. Yoshikawa. Adaptive web
search based on user profile constructed without any effort
from users. In Proceedings of WWW 2004, pages 675-684,
2004.
[26] E. Volokh. Personalization and privacy. Communications of
the ACM, 43(8):84-88, 2000.
[27] R. W. White, J. M. Jose, C. J. van Rijsbergen, and
I. Ruthven. A simulated study of implicit feedback models.
In Proceedings of ECIR 2004, pages 311-326, 2004.
[28] J. Xu and W. B. Croft. Query expansion using local and
global document analysis. In Proceedings of SIGIR 1996,
pages 4-11, 1996.
[29] C. Zhai and J. Lafferty. Model-based feedback in KL
divergence retrieval model. In Proceedings of the CIKM
2001, pages 403-410, 2001.
| interactive ir;personalize search;user model;interactive retrieval;query expansion;personalized web search;user-centered adaptive information retrieval;personalize information retrieval;implicit feedback;search accuracy;implicit user modeling;information retrieval system;retrieval performance;query refinement |
train_H-63 | Location based Indexing Scheme for DAYS | Data dissemination through wireless channels for broadcasting information to consumers is becoming quite common. Many dissemination schemes have been proposed but most of them push data to wireless channels for general consumption. Push based broadcast [1] is essentially asymmetric, i.e., the volume of data being higher from the server to the users than from the users back to the server. Push based scheme requires some indexing which indicates when the data will be broadcast and its position in the broadcast. Access latency and tuning time are the two main parameters which may be used to evaluate an indexing scheme. Two of the important indexing schemes proposed earlier were tree based and the exponential indexing schemes. None of these schemes were able to address the requirements of location dependent data (LDD) which is highly desirable feature of data dissemination. In this paper, we discuss the broadcast of LDD in our project DAta in Your Space (DAYS), and propose a scheme for indexing LDD. We argue that this scheme, when applied to LDD, significantly improves performance in terms of tuning time over the above mentioned schemes. We prove our argument with the help of simulation results. | 1. INTRODUCTION
Wireless data dissemination is an economical and efficient
way to make desired data available to a large number of mobile or
static users. The mode of data transfer is essentially asymmetric,
that is, the capacity of the transfer of data (downstream
communication) from the server to the client (mobile user) is
significantly larger than the client or mobile user to the server
(upstream communication). The effectiveness of a data
dissemination system is judged by its ability to provide the user the
required data anywhere and at any time. One of the best ways to
accomplish this is through the dissemination of highly
personalized Location Based Services (LBS) which allows users
to access personalized location dependent data. An example
would be someone using their mobile device to search for a
vegetarian restaurant. The LBS application would interact with
other location technology components or use the mobile user's
input to determine the user's location and download the
information about the restaurants in proximity to the user by
tuning into the wireless channel which is disseminating LDD.
We see a limited deployment of LBS by some service
providers. But there is every indication that with time some of
the complex technical problems such as uniform location
framework, calculating and tracking locations in all types of
places, positioning in various environments, innovative location
applications, etc., will be resolved and LBS will become a
common facility and will help to improve market productivity and
customer comfort. In our project called DAYS, we use wireless
data broadcast mechanism to push LDD to users and mobile users
monitor and tune the channel to find and download the required
data. A simple broadcast, however, is likely to cause significant
performance degradation in the energy constrained mobile devices
and a common solution to this problem is the use of efficient air
indexing. The indexing approach stores control information which
tells the user about the data location in the broadcast and how and
when he could access it. A mobile user, thus, has some free time
to go into the doze mode which conserves valuable power. It also
allows the user to personalize his own mobile device by
selectively tuning to the information of his choice.
Access efficiency and energy conservation are the two issues
which are significant for data broadcast systems. Access efficiency
refers to the latency experienced when a request is initiated till the
response is received. Energy conservation [7, 10] refers to the
efficient use of the limited energy of the mobile device in
accessing broadcast data. Two parameters that affect these are the
tuning time and the access latency. Tuning time refers to the time
during which the mobile unit (MU) remains in active state to tune
the channel and download its required data. It can also be defined
as the number of buckets tuned by the mobile device in active
state to get its required data. Access latency may be defined as the
time elapsed since a request has been issued till the response has
been received.
(This research was supported by a grant from NSF IIS-0209170.)
Several indexing schemes have been proposed in the past and
the prominent among them are the tree based and the exponential
indexing schemes [17]. The main disadvantages of the tree based
schemes are that they are based on centralized tree structures. To
start a search, the MU has to wait until it reaches the root of the
next broadcast tree. This significantly affects the tuning time of
the mobile unit. The exponential schemes facilitate index
replication by sharing links in different search trees. For
broadcasts with large number of pages, the exponential scheme
has been shown to perform similarly as the tree based schemes in
terms of access latency. Also, the average length of broadcast
increases due to the index replication and this may cause
significant increase in the access latency. None of the above
indexing schemes is equally effective in broadcasting location
dependent data. In addition to providing low latency, they lack
properties which are used to address LDD issues. We propose an
indexing scheme in DAYS which takes care of some these
problems. We show with simulation results that our scheme
outperforms some of the earlier indexing schemes for
broadcasting LDD in terms of tuning time.
The rest of the paper is presented as follows. In section 2, we
discuss previous work related to indexing of broadcast data.
Section 3 describes our DAYS architecture. Location dependent
data, its generation and subsequent broadcast is presented in
section 4. Section 5 discusses our indexing scheme in detail.
Simulation of our scheme and its performance evaluation is
presented in section 6. Section 7 concludes the paper and
mentions future related work.
2. PREVIOUS WORK
Several disk-based indexing techniques have been used for air
indexing. Imielinski et al. [5, 6] applied the B+ index tree, where
the leaf nodes store the arrival times of the data items. The
distributed indexing method was proposed to efficiently replicate
and distribute the index tree in a broadcast. Specifically, the index
tree is divided into a replicated part and a non replicated part.
Each broadcast consists of the replicated part and the
nonreplicated part that indexes the data items immediately following
it. As such, each node in the non-replicated part appears only once
in a broadcast and, hence, reduces the replication cost and access
latency while achieving a good tuning time. Chen et al. [2] and
Shivakumar et al. [8] considered unbalanced tree structures to
optimize energy consumption for non-uniform data access. These
structures minimize the average index search cost by reducing the
number of index searches for hot data at the expense of spending
more on cold data. Tan and Yu discussed data and index
organization under skewed broadcast. Hashing and signature
methods have also been suggested for wireless broadcast that
supports equality queries [9]. A flexible indexing method was
proposed in [5]. The flexible index first sorts the data items in
ascending (or descending) order of the search key values and then
divides them into p segments. The first bucket in each data
segment contains a control index, which is a binary index
mapping a given key value to the segment containing that key,
and a local index, which is an m-entry index mapping a given key
value to the buckets within the current segment. By tuning the
parameters of p and m, mobile clients can achieve either a good
tuning time or good access latency. Another indexing technique
proposed is the exponential indexing scheme [17]. In this scheme,
a parameterized index, called the exponential index is used to
optimize the access latency or the tuning time. It facilitates index
replication by linking different search trees. All of the above-mentioned schemes have been applied to data items that are unrelated to each other; such unrelated data may be clustered or non-clustered. However, none of these schemes specifically addresses the requirements of LDD. Location dependent data are data associated with a location. Several existing applications deal with LDD [13, 16], and almost all of them represent LDD with the help of hierarchical structures [3, 4]. This is based on the containment property of location dependent data: the containment property helps determine the relative position of an object by defining or identifying the locations that contain it, and subordinate locations are hierarchically related to each other. Containment thus limits the range of availability or operation of a service. We use this containment property in our indexing scheme to index LDD.
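To make the containment property concrete, the following minimal sketch (our own illustration, not code from DAYS) models a small location hierarchy in the spirit of Figure 2 and checks whether one location is contained in another; the location names are taken from the later Starbucks example.

```python
# Toy location hierarchy: each location points to the location that contains it.
# (Hypothetical structure for illustration only.)
PARENT = {
    "Starbucks, Plaza": "Plaza",
    "Plaza": "Kansas City",
    "Kansas City": None,          # root of this small hierarchy
}

def contained_in(location, ancestor):
    """True if `ancestor` contains `location` (or is the location itself)."""
    while location is not None:
        if location == ancestor:
            return True
        location = PARENT[location]
    return False

print(contained_in("Starbucks, Plaza", "Kansas City"))   # True
print(contained_in("Kansas City", "Plaza"))              # False
```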
3. DAYS ARCHITECTURE
DAYS has been conceptualized to disseminate topical and non-topical data to users in a local broadcast space and to accept queries from individual users globally. Topical data, for example weather, traffic, and stock information, changes constantly over time. Non-topical data, such as hotel, restaurant, and real estate prices, do not change as often. Thus,
we envision two types of data distribution: in the first case, the server pushes data to local users through wireless channels; in the other, the server sends the results of user queries through downlink wireless channels. Technically, we see two types of queues in pull based data access: a heavily loaded queue containing globally uploaded queries, and a comparatively lightly loaded queue consisting of locally uploaded queries. The DAYS
architecture [12] as shown in figure 1 consists of a Data Server,
Broadcast Scheduler, DAYS Coordinator, Network of LEO
satellites for global data delivery and a Local broadcast space.
Data is pushed into the local broadcast space so that users may
tune into the wireless channels to access the data. The local
broadcast space consists of a broadcast tower, mobile units and a
network of data staging machines called the surrogates. Data
staging in surrogates has been earlier investigated as a successful
technique [12, 15] to cache users' related data. We believe that
data staging can be used to drastically reduce the latency time for
both the local broadcast data and global responses. Query requests logged at the surrogates may subsequently be used to generate popularity patterns, which ultimately decide the broadcast schedule [12].
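As a rough illustration of this popularity feedback loop, the sketch below (our own simplification; the log format and function names are assumed, not taken from DAYS) counts the query requests seen at a surrogate and orders items for the push broadcast by request frequency.

```python
from collections import Counter

def popularity_feedback(query_log):
    """query_log: iterable of (information_type, location) pairs logged at a surrogate."""
    return Counter(query_log)

def broadcast_order(feedback):
    """Most frequently requested (type, location) pairs are scheduled first."""
    return [key for key, _count in feedback.most_common()]

log = [("weather", "Kansas City"), ("traffic", "Plaza"),
       ("weather", "Kansas City"), ("stock", "USA")]
print(broadcast_order(popularity_feedback(log)))
# [('weather', 'Kansas City'), ('traffic', 'Plaza'), ('stock', 'USA')]
```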
Figure 1. DAYS Architecture (Data Server, Broadcast Scheduler, and DAYS Coordinator; global and local downlink channels with their pull request queues; the local broadcast space contains the broadcast tower, mobile units (MU), and surrogates, which send popularity feedback to the Broadcast Scheduler).
Figure 2. Location Structure of Starbucks, Plaza (Kansas City → Plaza → Starbucks).
4. LOCATION DEPENDENT DATA (LDD)
We argue that incorporating location information in wireless data broadcast can significantly decrease access latency. This is especially useful for mobile units, which have limited storage and processing capability. A variety of applications provide information about traffic, restaurant and hotel booking, fast food, gas stations, post offices, grocery stores, etc. If these applications are coupled with location information, then the search becomes fast and highly cost effective. An important property of locations is Containment, which determines the relative location of an object with respect to the parent that contains it. Containment thus limits the range of availability of a data item, and we use this property in our indexing scheme. The database contains the broadcast contents, which are converted into LDD [14] by associating them with their respective locations so that they can be broadcast in a clustered manner. The clustering of LDD helps the user locate information efficiently and supports the containment property. We present an example to justify our proposition.
Example: Suppose a user issues the query "Starbucks Coffee in Plaza, please" to access information about the Plaza branch of Starbucks Coffee in Kansas City. In a location-independent setup, the system would list all Starbucks coffee shops in the Kansas City area. Such responses obviously increase access latency and are not desirable. They can be handled efficiently if the server has location dependent data, i.e., a mapping between a Starbucks coffee shop's data and its physical location. Also, for a query covering a range of Starbucks locations, a single query requesting locations for the entire region of Kansas City, as shown in Figure 2, will suffice. This saves an enormous amount of bandwidth by decreasing the number of messages and at the same time helps prevent a scalability bottleneck in highly populated areas.
4.1 Mapping Function for LDD
The example justifies the need for a mapping function to process location dependent queries. This is especially important for pull-based queries issued across the globe, for which the reply could be composed for different parts of the world. The mapping function is also necessary to construct the broadcast schedule.
To develop a mapping function we define a Global Property Set (GPS) [11], an Information Content (IC) set, and a Location Hierarchy (LH) set, where IC ⊆ GPS and LH ⊆ GPS. LH = {l1, l2, l3, …, lk}, where each li represents a location in the location tree, and IC = {ic1, ic2, ic3, …, icn}, where each ici represents an information type. For example, if traffic, weather, and stock information are in the broadcast, then IC = {ic_traffic, ic_weather, ic_stock}. The mapping scheme must be able to identify and select an IC member and an LH node for (a) correct association, (b) granularity match, and (c) the termination condition. For example, weather ∈ IC could be associated with a country, a state, a city, or a town of LH. The granularity match between weather and an LH node is as per user requirement: with coarse granularity, weather information is associated with a country to get the country's weather, and with a town at finer granularity. If a town is the finest granularity, then it defines the terminal condition for the association between IC and LH for weather. This means that a user cannot get weather information about a subdivision of a town; in reality, the weather of such a subdivision does not make sense.
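As a tiny illustration of the granularity match and the termination condition (our own sketch; the level list and the finest-granularity table are hypothetical), an association below an information type's terminal level is simply rejected:

```python
LEVELS = ["Country", "State", "City", "Town"]      # coarse -> fine
FINEST = {"weather": "Town", "stock": "Country"}   # terminal condition per IC member

def can_associate(ic_member, lh_level):
    """True if `ic_member` may be associated with `lh_level` (not finer than its terminal level)."""
    return LEVELS.index(lh_level) <= LEVELS.index(FINEST[ic_member])

print(can_associate("weather", "City"))     # True: City is coarser than Town
print(can_associate("stock", "City"))       # False: finer than stock's terminal level, Country
```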
We develop a simple heuristic mapping scheme based on user requirements. Let IC = {m1, m2, m3, ..., mk}, where mi represents an element of IC, and let LH = {n1, n2, n3, ..., nl}, where ni represents a member of LH. We define the GPS for IC (GPSIC ⊆ GPS) and for LH (GPSLH ⊆ GPS) as GPSIC = {P1, P2, …, Pn}, where P1, P2, …, Pn are the properties of its members, and GPSLH = {Q1, Q2, …, Qm}, where Q1, Q2, …, Qm are the properties of its members. The properties of a particular member of IC are a subset of GPSIC. It is generally true that (property set(mi ∈ IC) ∩ property set(mj ∈ IC)) = ∅; for example, stock ∈ IC and movie rating ∈ IC do not have any property in common. However, there may be cases where the intersection is not empty. In particular, we assume that any two or more members of IC have at least one common geographical property (i.e., location), because DAYS broadcasts information about categories that are closely tied to a location. For example, the stock of a company is related to a country, weather is related to a city or state, etc.
We define the property subset of mi ∈ IC as PS_mi, ∀ mi ∈ IC, where PS_mi = {P1, P2, ..., Pr} and r ≤ n. ∀ Pr, {Pr ∈ PS_mi → Pr ∈ GPSIC}, which implies that ∀ i, PS_mi ⊆ GPSIC. The geographical properties of this set indicate whether mi ∈ IC can be mapped to only a single granularity level (i.e., a single location) in LH or to multiple granularity levels (i.e., more than one node in the hierarchy) in LH. How many and which granularity levels an mi maps to depends upon the level at which the service provider wants to provide information about the mi in question. Similarly, we define the property subset of an LH member as PS_nj, ∀ nj ∈ LH, which can be written as PS_nj = {Q1, Q2, Q3, …, Qs} where s ≤ m. In addition, ∀ Qs, {Qs ∈ PS_nj → Qs ∈ GPSLH}, which implies that ∀ j, PS_nj ⊆ GPSLH.
The process of mapping from IC to LH is then to identify, for some mx ∈ IC, one or more ny ∈ LH such that PS_mx ∩ PS_ny ≠ ∅. This means that mx maps to ny and to all children of ny if mx can map to multiple granularity levels, or mx maps only to ny if mx can map to a single granularity level.
We assume that new members can join and old members can leave IC or LH at any time. The deletion of members from the IC space is simple, but the addition of members to the IC space is more restrictive. If we want to add a new member to the IC space, we first define a property set for the new member, PS_new_m = {P1, P2, P3, …, Pt}, and add it to IC only if the condition ∀ Pw, {Pw ∈ PS_new_m → Pw ∈ GPSIC} is satisfied. This scheme has the additional benefit of allowing information service providers to control what kind of information they wish to provide to the users. We present the following example to illustrate the mapping concept.
IC = {Traffic, Stock, Restaurant, Weather, Important history dates, Road conditions}
LH = {Country, State, City, Zip-code, Major-roads}
GPSIC = {Surface-mobility, Roads, High, Low, Italian-food, StateName, Temp, CityName, Seat-availability, Zip, Traffic-jams, Stock-price, CountryName, MajorRoadName, Wars, Discoveries, World}
GPSLH = {Country, CountrySize, StateName, CityName, Zip, MajorRoadName}
PS(ICStock) = {Stock-price, CountryName, High, Low}
PS(ICTraffic) = {Surface-mobility, Roads, High, Low, Traffic-jams, CityName}
PS(ICImportant history dates) = {World, Wars, Discoveries}
PS(ICRoad conditions) = {Precipitation, StateName, CityName}
PS(ICRestaurant) = {Italian-food, Zip code}
PS(ICWeather) = {StateName, CityName, Precipitation, Temperature}
PS(LHCountry) = {CountryName, CountrySize}
PS(LHState) = {StateName, State size}
PS(LHCity) = {CityName, City size}
PS(LHZip-code) = {ZipCodeNum}
PS(LHMajor-roads) = {MajorRoadName}
Now, only PS(ICStock) ∩ PS(LHCountry) ≠ ∅; in addition, PS(ICStock) indicates that Stock can map to only a single location, Country. When we consider the member Traffic of the IC space, only PS(ICTraffic) ∩ PS(LHCity) ≠ ∅. As PS(ICTraffic) indicates that Traffic can map to only a single granularity level, it maps only to City and to none of its children. Note that, unlike for Stock, a mapping of Traffic to Major-roads, which is a child of City, would be meaningful; however, service providers have the right to control the granularity levels at which they provide information about a member of the IC space. PS(ICRoad conditions) ∩ PS(LHState) ≠ ∅ and PS(ICRoad conditions) ∩ PS(LHCity) ≠ ∅, so Road conditions maps to State as well as City. As PS(ICRoad conditions) indicates that Road conditions can map to multiple granularity levels, Road conditions also maps to Zip-code and Major-roads, which are the children of State and City. Similarly, Restaurant maps only to Zip-code, and Weather maps to State, City, and their children, Major-roads and Zip-code.
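The following sketch (our own rendering, not the authors' implementation) implements the mapping heuristic just illustrated: an IC member maps to every LH node whose property set intersects its own and, when the member supports multiple granularity levels, also to that node's children. Property names are lightly normalized so that the intersections of the worked example hold, and the multi-level capability is modeled as an explicit flag, which the paper instead infers from the member's geographical properties.

```python
LH_CHILDREN = {                 # location hierarchy of the example
    "Country": ["State"],
    "State": ["City"],
    "City": ["Zip-code", "Major-roads"],
    "Zip-code": [],
    "Major-roads": [],
}

PS_LH = {                       # property subsets of LH members (names normalized)
    "Country": {"CountryName", "CountrySize"},
    "State": {"StateName", "StateSize"},
    "City": {"CityName", "CitySize"},
    "Zip-code": {"Zip"},
    "Major-roads": {"MajorRoadName"},
}

PS_IC = {                       # (property subset, can map to multiple granularity levels?)
    "Stock": ({"Stock-price", "CountryName", "High", "Low"}, False),
    "Traffic": ({"Surface-mobility", "Roads", "High", "Low", "Traffic-jams", "CityName"}, False),
    "Road conditions": ({"Precipitation", "StateName", "CityName"}, True),
    "Restaurant": ({"Italian-food", "Zip"}, False),
    "Weather": ({"StateName", "CityName", "Precipitation", "Temperature"}, True),
}

def descendants(node):
    result = []
    for child in LH_CHILDREN[node]:
        result.append(child)
        result.extend(descendants(child))
    return result

def map_ic_to_lh(member):
    props, multi_level = PS_IC[member]
    targets = [n for n in LH_CHILDREN if props & PS_LH[n]]   # PS_m ∩ PS_n ≠ ∅
    if multi_level:                                           # also map to the children
        for n in list(targets):
            targets += [c for c in descendants(n) if c not in targets]
    return targets

for member in PS_IC:
    print(member, "->", map_ic_to_lh(member))
# e.g. Stock -> ['Country'], Traffic -> ['City'], Restaurant -> ['Zip-code'],
# Road conditions and Weather -> ['State', 'City', 'Zip-code', 'Major-roads']
```

The printed mappings reproduce the assignments derived in the example above (Stock to Country only, Traffic to City only, Road conditions and Weather to State and City plus their children, Restaurant to Zip-code).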
5. LOCATION BASED INDEXING SCHEME
This section discusses our location based indexing scheme
(LBIS). The scheme is designed to conform to the LDD broadcast
in our project DAYS. As discussed earlier, we use the containment property of LDD in the indexing scheme, which significantly limits the search for the required data to a particular portion of the broadcast. Thus, we argue that the scheme provides bounded tuning time.
We now describe the architecture of our indexing scheme. The scheme contains separate data buckets and index buckets, and the index buckets are of two types. The first type is called the Major index. The Major index provides information about the types of data broadcast. For example, if we intend to broadcast information such as Entertainment, Weather, Traffic, etc., then the major index points to these major types of information and/or their main subtypes, the number of main subtypes varying from one information type to another. This strictly limits the number of accesses to a Major index. The Major index never points to the original data; it points to sub-indexes called Minor indexes. The minor indexes are the indexes that actually point to the original data. We call these minor index pointers Location Pointers, as they point to data associated with a location. Thus, a search for a data item involves accessing one major index and some minor indexes, the number of minor indexes varying with the type of information.
Thus, our indexing scheme takes into account the hierarchical nature of LDD and the Containment property, and it requires our broadcast schedule to be clustered based on data type and location. The structure of the location hierarchy requires the use of different types of indexes at different levels. The structure and positions of the indexes strictly depend on the location hierarchy, as described in our mapping scheme earlier. We illustrate the implementation of our scheme with an example; the rules for framing the index are given subsequently.
Figure 3. Location Mapped Information for Broadcast (Entertainment is divided into Movie and Restaurant, with restaurants on roads R1–R8 within areas A1–A4 of the city; Weather covers the four cities KC, SL, JC, and SF).
Figure 4. Data coupled with Location based Index (a broadcast cycle of 12 data buckets interleaved with Major index buckets, holding (Type, (S, L)) entries for E, EM, ER, and W, and Minor index buckets, holding (A, R, NEXT) location pointers).
Example: Let us suppose that our broadcast content contains ic_Entertainment and ic_Weather, which are represented as shown in Fig. 3. Ai represents an area of the city and Ri represents a road in a certain area. The leaves of the Weather structure represent four cities. The index structure is given in Fig. 4, which shows the positions of the major index, minor index, and data in the broadcast schedule.
We propose the following rules for the creation of the air indexed
broadcast schedule:
• The major index and the minor index are created.
• The major index contains the position and range of different
types of data items (Weather and Entertainment, Figure 3)
and their categories. The sub categories of Entertainment,
Movie and Restaurant, are also in the index. Thus, the major
index contains Entertainment (E), Entertainment-Movie
(EM), Entertainment-Restaurant (ER), and Weather (W). The
tuple (S, L) represents the starting position (S) of the data
item and L represents the range of the item in terms of
number of data buckets.
• The minor index contains the variables A, R and a pointer
Next. In our example (Figure 3), road R represents the first
node of area A. The minor index is used to point to actual
data buckets present at the lowest levels of the hierarchy. In
contrast, the major index points to a broader range of
locations and so it contains information about main and sub
categories of data.
• Index information is not incorporated in the data buckets.
Index buckets are separate containing only the control
information.
• The number of major index buckets is m = #(IC), where IC = {ic1, ic2, ic3, …, icn}, ici represents an information type, and # denotes the cardinality of the Information Content set IC. In this example, IC = {ic_Movie, ic_Weather, ic_Restaurant}, so #(IC) = 3; hence, the number of major index buckets is 3.
• The mechanism to resolve a query resides in the Java-based coordinator in the MU. For example, if a query Q is presented as Q(Entertainment, Movie, Road_1), then the resultant search will be for the EM information in the major index; we write Q → EM. (A sketch of the index bucket structures described in these rules follows the list.)
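To fix ideas, here is a minimal sketch of the two index bucket types described in the rules above. These are assumed data structures for illustration only; in particular, carrying an explicit bucket offset in each location pointer, and expressing NEXT as a bucket offset, are our interpretation of the pointers in Figure 4, not a specification taken from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MajorEntry:
    dtype: str        # data type or sub-type, e.g. "E", "EM", "ER", "W"
    start: int        # S: starting position of the data items of this type
    length: int       # L: number of data buckets of this type

@dataclass
class MinorEntry:     # a "location pointer"
    area: str         # A: area, e.g. "A4"
    road: str         # R: road whose data bucket this entry points to, e.g. "R7"
    offset: int       # position of that data bucket in the broadcast cycle (assumed field)

@dataclass
class MinorIndex:
    entries: List[MinorEntry]
    next_major: int   # NEXT: buckets ahead to the next major index

@dataclass
class Bucket:
    kind: str         # "major", "minor", or "data"
    payload: object   # List[MajorEntry], a MinorIndex, or the data item itself
```

A broadcast cycle is then simply a list of Bucket objects in which data buckets, clustered by type and location, are interleaved with #(IC) major index buckets and the minor index buckets that point into each cluster.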
Our proposed index works as follows. Suppose an MU issues a query, which is represented by the Java coordinator in the MU as "Restaurant information on Road 7". This is resolved by the coordinator as Q → ER, which means one has to search for the ER entry of the major index. Suppose the MU logs into the channel at R2. The first index it receives is a minor index after R2. In this index, the value of the Next variable is 4, which means that the next major index is present after bucket 4, so the MU may go into doze mode. It becomes active after bucket 4 and receives the major index. It searches for the ER information, which is the first entry in this index, and it is now certain that the MU will get the position of the data bucket from the adjoining minor index. The second entry in the minor index gives the position of the required data, R7: it tells that the data bucket is the first bucket in Area 4. The MU goes into doze mode again and becomes active after bucket 6, where it gets the required data in the next bucket. We present the algorithm for searching the location based index below.
Algorithm 1 Location based Index Search in DAYS
1. Scan the broadcast for the next index bucket; found = false
2. while (not found) do
3.   if the bucket is a Major index then
4.     find the Type and its tuple (S, L)
5.     if S is greater than 1, go into doze mode until the Sth bucket
6.     end if
7.     wake up at the Sth bucket and observe the Minor index
8.   end if
9.   if the bucket is a Minor index then
10.    if TypeRequested ≠ TypeFound and (A, R)Requested ≠ (A, R)Found then
11.      go into doze mode till NEXT and repeat from step 3
12.    end if
13.    else find the entry in the Minor index which points to the data
14.      compute the time of arrival T of the data bucket
15.      go into doze mode till T
16.      wake up at T and access the data; found = true
17.    end else
18.  end if
19. end while
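The following is a simplified, runnable rendering of Algorithm 1 over the bucket structures sketched earlier; it is our own approximation, not the authors' code. Buckets skipped while dozing cost no tuning time; only buckets the MU actually examines are counted. For brevity, the (S, L) tuple of the major index is used only to confirm that the requested type is on the air, and the MU then reads the adjoining minor index, as in the worked example above.

```python
def lbis_search(cycle, want_type, want_area, want_road, login_pos=0):
    """cycle: one broadcast cycle as a list of Bucket objects.
    Returns (data, tuned_buckets)."""
    tuned, i, n = 0, login_pos, len(cycle)
    while True:
        bucket = cycle[i % n]
        tuned += 1                                   # the MU is awake for this bucket
        if bucket.kind == "data":
            i += 1                                   # no index seen yet; keep scanning
        elif bucket.kind == "minor":
            hit = next((e for e in bucket.payload.entries
                        if e.area == want_area and e.road == want_road), None)
            if hit is not None:                      # location pointer found: doze to the data
                return cycle[hit.offset % n].payload, tuned + 1
            i += bucket.payload.next_major           # otherwise doze until the next major index
        else:                                        # major index bucket
            assert any(e.dtype == want_type for e in bucket.payload), "type not broadcast"
            i += 1                                   # read the adjoining minor index next
```

In this simulation the tuning cost stays close to the bound derived in the next section: one major index access plus a small number of minor index accesses, independent of how many data buckets are dozed over.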
6. PERFORMANCE EVALUATION
Conservation of energy is the main concern when accessing data from a wireless broadcast. An efficient scheme should allow the mobile device to access its required data while staying active for a minimum amount of time, which saves a considerable amount of energy. Since items are distributed based on their types and are mapped to suitable locations, we argue that our broadcast deals with clustered data types. The mobile unit has to access a larger major index and a relatively much smaller minor index to obtain the time of arrival of the data. This is in contrast to the exponential scheme, where the indexes are of equal size. The
example discussed above and Algorithm 1 reveal that, to access any data item, we need to access the major index only once, followed by one or more accesses to the minor index. The number of minor index accesses depends on the number of internal locations. As the number of internal locations varies from item to item (for example, Weather is generally associated with a city, whereas Traffic is granulated down to the major and minor roads of a city), we argue that the structure of the location mapped information may be visualized as a forest, i.e., a collection of general trees, where the number of trees depends on the types of information broadcast and the depth of a tree depends on the granularity of the location information associated with each type.
For our experiments, we model the forest as a collection of balanced m-ary trees. We further assume the m-ary trees to be full by allowing dummy nodes at different levels of a tree.
Thus, if the number of data items is d and each tree is a full m-ary tree, then
n = (m·d − 1)/(m − 1), where n is the number of vertices in the tree, and
i = (d − 1)/(m − 1), where i is the number of internal vertices.
The tuning time for a data item involves one unit of time to access the major index plus the time required to reach the data items present in the leaves of the tree. Thus, the tuning time with d data items is t = log_m d + 1, and we can say that the tuning time is bounded by O(log_m d).
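A quick numerical check of these quantities (illustrative values only; the formulas assume full m-ary trees padded with dummy nodes as described above):

```python
import math

def tree_counts(d, m):
    n = (m * d - 1) / (m - 1)      # vertices in a full m-ary tree with d leaves
    i = (d - 1) / (m - 1)          # internal vertices
    t = math.log(d, m) + 1         # tuning time bound: one major index access + log_m d
    return n, i, t

print(tree_counts(d=1024, m=4))    # approximately (1365.0, 341.0, 6.0)
```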
We compare our scheme with the distributed indexing and the exponential schemes. We assume a flat broadcast with the number of pages varying from 5000 to 25000. The various simulation parameters are shown in Table 1.
Figures 5-8 show the relative tuning times of the three indexing algorithms, i.e., LBIS, the exponential scheme, and the distributed tree scheme. Figure 5 shows the result for the number of internal location nodes m = 3. We can see that LBIS significantly outperforms both of the other schemes. The tuning time of LBIS ranges from approximately 6.8 to 8; this tuning time is due to the fact that, after reaching the lowest minor index, the MU may have to access a few buckets sequentially to reach the required data bucket. We can also see that the tuning time tends to become stable as the length of the broadcast increases. In Figure 6 we consider m = 4. Here the exponential and the distributed tree schemes perform almost identically, though the former seems to perform slightly better as the broadcast length increases. A very interesting pattern is visible in Figure 7. For smaller broadcast sizes, LBIS seems to have a larger tuning time than the other two schemes, but as the length of the broadcast increases, LBIS clearly outperforms them. The distributed tree indexing shows behavior similar to that of LBIS. The tuning time of LBIS remains low because the algorithm allows the MU to skip some intermediate Minor indexes; this lets the MU move directly into lower levels after coming into active mode, thus saving valuable energy. This action is not possible in the distributed tree indexing, and hence its tuning time is higher than that of LBIS, although it performs better than the exponential scheme. Figure 8, in contrast, shows that the tuning time of LBIS, though less than that of the other two schemes, tends to increase sharply once the broadcast length exceeds 15000 pages. This may be attributed both to the increase in the time required to scan the intermediate Minor indexes and to the greater length of the broadcast. We can observe, however, that the slope of the LBIS curve is significantly smaller than those of the other two curves.
Table 1. Simulation Parameters
Parameter   Definition                                              Values
N           Number of data items                                    5000 - 25000
m           Number of internal location nodes                       3, 4, 5, 6
B           Capacity of bucket without index (exponential index)    10, 64, 128, 256
i           Index base for exponential index                        2, 4, 6, 8
k           Index size for distributed tree                         8 bytes
The simulation results establish some facts about our location based indexing scheme. The scheme performs better than the other two schemes in terms of tuning time in most cases. As the length of the broadcast grows beyond a certain point, the tuning time increases as a result of the factors described above, yet the scheme still performs better than the other two schemes. Due to the page limit of the paper we are unable to show more results, but the omitted results show trends similar to those depicted in Figures 5-8.
7. CONCLUSION AND FUTURE WORK
In this paper we have presented a scheme for mapping wireless broadcast data to their locations. We have presented an example to show how the hierarchical structure of the location tree maps to the data to create LDD, and we have presented a scheme called LBIS to index this LDD. We have used the containment property of LDD in the scheme, which limits the search to a narrow range of data in the broadcast, thus saving valuable energy in the device. The mapping of data to locations and the indexing scheme will be used in our DAYS project to create the push based architecture. LBIS has been compared with two other prominent indexing schemes, i.e., the distributed tree indexing scheme and the exponential indexing scheme. We showed in our simulations that the LBIS scheme has the lowest tuning time for broadcasts with a large number of pages, thus saving valuable battery power in the MU.
In future work we will incorporate a pull based architecture into our DAYS project. Data from the server will be available for access by global users; this may be done by putting a request to the source server. The query in this case is a global query, and it is transferred from the user's source server to the destination server through the use of LEO satellites. We intend to use our LDD scheme and data staging architecture in the pull based architecture, and we will show that the LDD scheme together with the data staging architecture significantly improves the latency for global as well as local queries.
8. REFERENCES
[1] Acharya, S., Alonso, R., Franklin, M. and Zdonik, S. Broadcast
disk: Data management for asymmetric communications
environments. In Proceedings of ACM SIGMOD Conference
on Management of Data, pages 199-210, San Jose, CA, May
1995.
[2] Chen, M.S., Wu, K.L. and Yu, P. S. Optimizing index
allocation for sequential data broadcasting in wireless mobile
computing. IEEE Transactions on Knowledge and Data
Engineering (TKDE), 15(1):161-173, January/February 2003.
Figure 5. Average tuning time vs. broadcast size (# buckets) for the distributed tree, exponential, and LBIS schemes (m = 3).
Figure 6. Average tuning time vs. broadcast size (# buckets) for the distributed tree, exponential, and LBIS schemes (m = 4).
Figure 7. Average tuning time vs. broadcast size (# buckets) for the distributed tree, exponential, and LBIS schemes.
Figure 8. Average tuning time vs. broadcast size (# buckets) for the distributed tree, exponential, and LBIS schemes.
[3] Hu, Q. L., Lee, D. L. and Lee, W.C. Performance evaluation
of a wireless hierarchical data dissemination system. In
Proceedings of the 5th
Annual ACM International Conference
on Mobile Computing and Networking (MobiCom'99), pages
163-173, Seattle, WA, August 1999.
[4] Hu, Q. L. Lee, W.C. and Lee, D. L. Power conservative
multi-attribute queries on data broadcast. In Proceedings of
the 16th International Conference on Data Engineering
(ICDE'00), pages 157-166, San Diego, CA, February 2000.
[5] Imielinski, T., Viswanathan, S. and Badrinath. B. R. Power
efficient filtering of data on air. In Proceedings of the 4th
International Conference on Extending Database Technology
(EDBT'94), pages 245-258, Cambridge, UK, March 1994.
[6] Imielinski, T., Viswanathan, S. and Badrinath. B. R. Data on
air - Organization and access. IEEE Transactions on
Knowledge and Data Engineering (TKDE), 9(3):353-372,
May/June 1997.
[7] Shih, E., Bahl, P. and Sinclair, M. J. Wake on wireless: An
event driven energy saving strategy for battery operated
devices. In Proceedings of the 8th Annual ACM International
Conference on Mobile Computing and Networking
(MobiCom'02), pages 160-171, Atlanta, GA, September
2002.
[8] Shivakumar N. and Venkatasubramanian, S. Energy-efficient
indexing for information dissemination in wireless systems.
ACM/Baltzer Journal of Mobile Networks and Applications
(MONET), 1(4):433-446, December 1996.
[9] Tan, K. L. and Yu, J. X. Energy efficient filtering of non
uniform broadcast. In Proceedings of the 16th International
Conference on Distributed Computing Systems (ICDCS'96),
pages 520-527, Hong Kong, May 1996.
[10] Viredaz, M. A., Brakmo, L. S. and Hamburgen, W. R. Energy
management on handheld devices. ACM Queue, 1(7):44-52,
October 2003.
[11] Garg, N., Kumar, V., & Dunham, M.H. Information Mapping
and Indexing in DAYS, 6th International Workshop on
Mobility in Databases and Distributed Systems, in
conjunction with the 14th International Conference on
Database and Expert Systems Applications September 1-5,
Prague, Czech Republic, 2003.
[12] Acharya D., Kumar, V., & Dunham, M.H. InfoSpace: Hybrid
and Adaptive Public Data Dissemination System for
Ubiquitous Computing. Accepted for publication in the
special issue of Pervasive Computing. Wiley Journal for
Wireless Communications and Mobile Computing, 2004.
[13] Acharya D., Kumar, V., & Prabhu, N. Discovering and using
Web Services in M-Commerce, Proceedings for 5th VLDB
Workshop on Technologies for E-Services, Toronto,
Canada, 2004.
[14] Acharya D., Kumar, V. Indexing Location Dependent Data in
broadcast environment. Accepted for publication, JDIM
special issue on Distributed Data Management, 2005.
[15] Flinn, J., Sinnamohideen, S., & Satyanarayanan, M. Data Staging on Untrusted Surrogates, Intel Research, Pittsburgh, Unpublished Report, 2003.
[16] Seydim, A.Y., Dunham, M.H. & Kumar, V. Location
dependent query processing, Proceedings of the 2nd ACM
international workshop on Data engineering for wireless and
mobile access, p.47-53, Santa Barbara, California, USA,
2001.
[17] Xu, J., Lee, W.C., Tang, X. Exponential Index: A
Parameterized Distributed Indexing Scheme for Data on Air.
In Proceedings of the 2nd ACM/USENIX International
Conference on Mobile Systems, Applications, and Services
(MobiSys'04), Boston, MA, June 2004.
24 | wireless broadcast datum mapping;location base service;datum stage;indexing scheme;pull based datum access;wireless datum dissemination;tree structure;day;wireless datum broadcast;datum broadcast system;location dependent datum;ldd;location based service;index;wireless channel;mapping of wireless broadcast datum;mobile user |
train_H-64 | Machine Learning for Information Architecture in a Large Governmental Website | This paper describes ongoing research into the application of machine learning techniques for improving access to governmental information in complex digital libraries. Under the auspices of the GovStat Project, our goal is to identify a small number of semantically valid concepts that adequately spans the intellectual domain of a collection. The goal of this discovery is twofold. First we desire a practical aid for information architects. Second, automatically derived documentconcept relationships are a necessary precondition for realworld deployment of many dynamic interfaces. The current study compares concept learning strategies based on three document representations: keywords, titles, and full-text. In statistical and user-based studies, human-created keywords provide significant improvements in concept learning over both title-only and full-text representations. | 1. INTRODUCTION
The GovStat Project is a joint effort of the University
of North Carolina Interaction Design Lab and the
University of Maryland Human-Computer Interaction Lab1
.
Citing end-user difficulty in finding governmental information
(especially statistical data) online, the project seeks to
create an integrated model of user access to US government
statistical information that is rooted in realistic data
models and innovative user interfaces. To enable such models
and interfaces, we propose a data-driven approach, based
on data mining and machine learning techniques. In
particular, our work analyzes a particular digital library-the
website of the Bureau of Labor Statistics2
(BLS)-in efforts
to discover a small number of linguistically meaningful
concepts, or bins, that collectively summarize the semantic
domain of the site.
The project goal is to classify the site"s web content
according to these inferred concepts as an initial step towards
data filtering via active user interfaces (cf. [13]). Many
digital libraries already make use of content classification,
both explicitly and implicitly; they divide their resources
manually by topical relation; they organize content into
hierarchically oriented file systems. The goal of the present
1
http://www.ils.unc.edu/govstat
2
http://www.bls.gov
151
research is to develop another means of browsing the content
of these collections. By analyzing the distribution of terms
across documents, our goal is to supplement the agency"s
pre-existing information structures. Statistical learning
technologies are appealing in this context insofar as they stand
to define a data-driven-as opposed to an
agency-drivennavigational structure for a site.
Our approach combines supervised and unsupervised
learning techniques. A pure document clustering [12] approach
to such a large, diverse collection as BLS led to poor results
in early tests [6]. But strictly supervised techniques [5] are
inappropriate, too. Although BLS designers have defined
high-level subject headings for their collections, as we
discuss in Section 2, this scheme is less than optimal. Thus we
hope to learn an additional set of concepts by letting the
data speak for themselves.
The remainder of this paper describes the details of our
concept discovery efforts and subsequent evaluation. In
Section 2 we describe the previously existing, human-created
conceptual structure of the BLS website. This section also
describes evidence that this structure leaves room for
improvement. Next (Sections 3-5), we turn to a description
of the concepts derived via content clustering under three
document representations: keyword, title only, and full-text.
Section 6 describes a two-part evaluation of the derived
conceptual structures. Finally, we conclude in Section 7 by
outlining upcoming work on the project.
2. STRUCTURING ACCESS TO THE BLS
WEBSITE
The Bureau of Labor Statistics is a federal government
agency charged with compiling and publishing statistics
pertaining to labor and production in the US and abroad. Given
this broad mandate, the BLS publishes a wide array of
information, intended for diverse audiences. The agency"s
website acts as a clearinghouse for this process. With over
15,000 text/html documents (and many more documents if
spreadsheets and typeset reports are included), providing
access to the collection provides a steep challenge to
information architects.
2.1 The Relation Browser
The starting point of this work is the notion that access
to information in the BLS website could be improved by
the addition of a dynamic interface such as the relation
browser described by Marchionini and Brunk [13]. The
relation browser allows users to traverse complex data sets by
iteratively slicing the data along several topics. In Figure
1 we see a prototype instantiation of the relation browser,
applied to the FedStats website3
.
The relation browser supports information seeking by
allowing users to form queries in a stepwise fashion, slicing and
re-slicing the data as their interests dictate. Its motivation
is in keeping with Shneiderman"s suggestion that queries
and their results should be tightly coupled [2]. Thus in
Fig3
http://www.fedstats.gov
Figure 1: Relation Browser Prototype
ure 1, users might limit their search set to those documents
about energy. Within this subset of the collection, they
might further eliminate documents published more than a
year ago. Finally, they might request to see only documents
published in PDF format.
As Marchionini and Brunk discuss, capturing the
publication date and format of documents is trivial. But successful
implementations of the relation browser also rely on topical
classification. This presents two stumbling blocks for system
designers:
• Information architects must define the appropriate set
of topics for their collection
• Site maintainers must classify each document into its
appropriate categories
These tasks parallel common problems in the metadata
community: defining appropriate elements and marking up
documents to support metadata-aware information access.
Given a collection of over 15,000 documents, these
hurdles are especially daunting, and automatic methods of
approaching them are highly desirable.
2.2 A Pre-Existing Structure
Prior to our involvement with the project, designers at
BLS created a shallow classificatory structure for the most
important documents in their website. As seen in Figure 2,
the BLS home page organizes 65 top-level documents into
15 categories. These include topics such as Employment and
Unemployment, Productivity, and Inflation and Spending.
152
Figure 2: The BLS Home Page
We hoped initially that these pre-defined categories could
be used to train a 15-way document classifier, thus
automating the process of populating the relation browser altogether.
However, this approach proved unsatisfactory. In personal
meetings, BLS officials voiced dissatisfaction with the
existing topics. Their form, it was argued, owed as much to
the institutional structure of BLS as it did to the inherent
topology of the website"s information space. In other words,
the topics reflected official divisions rather than semantic
clusters. The BLS agents suggested that re-designing this
classification structure would be desirable.
The agents" misgivings were borne out in subsequent
analysis. The BLS topics comprise a shallow classificatory
structure; each of the 15 top-level categories is linked to a small
number of related pages. Thus there are 7 pages associated
with Inflation. Altogether, the link structure of this
classificatory system contains 65 documents; that is, excluding
navigational links, there are 65 documents linked from the
BLS home page, where each hyperlink connects a document
to a topic (pages can be linked to multiple topics). Based on
this hyperlink structure, we defined M, a symmetric 65×65
matrix, where mij counts the number of topics in which
documents i and j are both classified on the BLS home page. To
analyze the redundancy inherent in the pre-existing
structure, we derived the principal components of M (cf. [11]).
Figure 3 shows the resultant scree plot4
.
Because all 65 documents belong to at least one BLS topic,
4
A scree plot shows the magnitude of the kth
eigenvalue
versus its rank. During principal component analysis scree
plots visualize the amount of variance captured by each
component.
m00M0M
0
1010M10M
10
2020M20M
20
3030M30M
30
4040M40M
40
5050M50M
50
6060M60M
60
m00M0M
0
22M2M
2
44M4M
4
66M6M
6
88M8M
8
1010M10M
10
1212M12M
12
1414M14M
14
Eigenvalue RankMEigenvalue RankM
Eigenvalue Rank
Eigenvlue MagnitudeMEigenvlue MagnitudeM
EigenvlueMagnitude
Figure 3: Scree Plot of BLS Categories
the rank of M is guaranteed to be less than or equal to
15 (hence, eigenvalues 16 . . . 65 = 0). What is surprising
about Figure 3, however, is the precipitous decline in
magnitude among the first four eigenvalues. The four largest
eigenvlaues account for 62.2% of the total variance in the
data. This fact suggests a high degree of redundancy among
the topics. Topical redundancy is not in itself problematic.
However, the documents in this very shallow classificatory
structure are almost all gateways to more specific
information. Thus the listing of the Producer Price Index under
three categories could be confusing to the site"s users. In
light of this potential for confusion and the agency"s own
request for redesign, we undertook the task of topic discovery
described in the following sections.
3. A HYBRID APPROACH TO TOPIC
DISCOVERY
To aid in the discovery of a new set of high-level topics for
the BLS website, we turned to unsupervised machine
learning methods. In efforts to let the data speak for themselves,
we desired a means of concept discovery that would be based
not on the structure of the agency, but on the content of the
material. To begin this process, we crawled the BLS
website, downloading all documents of MIME type text/html.
This led to a corpus of 15,165 documents. Based on this
corpus, we hoped to derive k ≈ 10 topical categories, such
that each document di is assigned to one or more classes.
153
Document clustering (cf. [16]) provided an obvious, but
only partial solution to the problem of automating this type
of high-level information architecture discovery. The
problems with standard clustering are threefold.
1. Mutually exclusive clusters are inappropriate for
identifying the topical content of documents, since
documents may be about many subjects.
2. Due to the heterogeneity of the data housed in the
BLS collection (tables, lists, surveys, etc.), many
documents" terms provide noisy topical information.
3. For application to the relation browser, we require a
small number (k ≈ 10) of topics. Without significant
data reduction, term-based clustering tends to deliver
clusters at too fine a level of granularity.
In light of these problems, we take a hybrid approach to
topic discovery. First, we limit the clustering process to
a sample of the entire collection, described in Section 4.
Working on a focused subset of the data helps to overcome
problems two and three, listed above. To address the
problem of mutual exclusivity, we combine unsupervised with
supervised learning methods, as described in Section 5.
4. FOCUSING ON CONTENT-RICH
DOCUMENTS
To derive empirically evidenced topics we initially turned
to cluster analysis. Let A be the n×p data matrix with n
observations in p variables. Thus aij shows the measurement
for the ith
observation on the jth
variable. As described
in [12], the goal of cluster analysis is to assign each of the
n observations to one of a small number k groups, each of
which is characterized by high intra-cluster correlation and
low inter-cluster correlation. Though the algorithms for
accomplishing such an arrangement are legion, our analysis
focuses on k-means clustering5
, during which, each
observation oi is assigned to the cluster Ck whose centroid is closest
to it, in terms of Euclidean distance. Readers interested in
the details of the algorithm are referred to [12] for a
thorough treatment of the subject.
Clustering by k-means is well-studied in the statistical
literature, and has shown good results for text analysis (cf.
[8, 16]). However, k-means clustering requires that the
researcher specify k, the number of clusters to define. When
applying k-means to our 15,000 document collection,
indicators such as the gap statistic [17] and an analysis of
the mean-squared distance across values of k suggested that
k ≈ 80 was optimal. This paramterization led to
semantically intelligible clusters. However, 80 clusters are far too
many for application to an interface such as the relation
5
We have focused on k-means as opposed to other clustering
algorithms for several reasons. Chief among these is the
computational efficiency enjoyed by the k-means approach.
Because we need only a flat clustering there is little to be
gained by the more expensive hierarchical algorithms. In
future work we will turn to model-based clustering [7] as a
more principled method of selecting the number of clusters
and of representing clusters.
browser. Moreover, the granularity of these clusters was
unsuitably fine. For instance, the 80-cluster solution derived
a cluster whose most highly associated words (in terms of
log-odds ratio [1]) were drug, pharmacy, and chemist. These
words are certainly related, but they are related at a level
of specificity far below what we sought.
To remedy the high dimensionality of the data, we
resolved to limit the algorithm to a subset of the collection.
In consultation with employees of the BLS, we continued
our analysis on documents that form a series titled From
the Editor"s Desk6
. These are brief articles, written by BLS
employees. BLS agents suggested that we focus on the
Editor"s Desk because it is intended to span the intellectual
domain of the agency. The column is published daily, and
each entry describes an important current issue in the BLS
domain. The Editor"s Desk column has been written daily
(five times per week) since 1998. As such, we operated on a
set of N = 1279 documents.
Limiting attention to these 1279 documents not only
reduced the dimensionality of the problem. It also allowed
the clustering process to learn on a relatively clean data set.
While the entire BLS collection contains a great deal of
nonprose text (i.e. tables, lists, etc.), the Editor"s Desk
documents are all written in clear, journalistic prose. Each
document is highly topical, further aiding the discovery of
termtopic relations. Finally, the Editor"s Desk column provided
an ideal learning environment because it is well-supplied
with topical metadata. Each of the 1279 documents
contains a list of one or more keywords. Additionally, a subset
of the documents (1112) contained a subject heading. This
metadata informed our learning and evaluation, as described
in Section 6.1.
5. COMBINING SUPERVISED AND
UNSUPERVISED LEARNING FORTOPIC
DISCOVERY
To derive suitably general topics for the application of a
dynamic interface to the BLS collection, we combined
document clustering with text classification techniques.
Specifically, using k-means, we clustered each of the 1279
documents into one of k clusters, with the number of clusters
chosen by analyzing the within-cluster mean squared
distance at different values of k (see Section 6.1).
Constructing mutually exclusive clusters violates our assumption that
documents may belong to multiple classes. However, these
clusters mark only the first step in a two-phase process of
topic identification. At the end of the process,
documentcluster affinity is measured by a real-valued number.
Once the Editor"s Desk documents were assigned to
clusters, we constructed a k-way classifier that estimates the
strength of evidence that a new document di is a member
of class Ck. We tested three statistical classification
techniques: probabilistic Rocchio (prind), naive Bayes, and
support vector machines (SVMs). All were implemented using
McCallum"s BOW text classification library [14]. Prind is a
probabilistic version of the Rocchio classification algorithm
[9]. Interested readers are referred to Joachims" article for
6
http://www.bls.gov/opub/ted
154
further details of the classification method. Like prind, naive
Bayes attempts to classify documents into the most
probable class. It is described in detail in [15]. Finally, support
vector machines were thoroughly explicated by Vapnik [18],
and applied specifically to text in [10]. They define a
decision boundary by finding the maximally separating
hyperplane in a high-dimensional vector space in which document
classes become linearly separable.
Having clustered the documents and trained a suitable
classifier, the remaining 14,000 documents in the collection
are labeled by means of automatic classification. That is, for
each document di we derive a k-dimensional vector,
quantifying the association between di and each class C1 . . . Ck.
Deriving topic scores via naive Bayes for the entire
15,000document collection required less than two hours of CPU
time. The output of this process is a score for every
document in the collection on each of the automatically
discovered topics. These scores may then be used to populate a
relation browser interface, or they may be added to a
traditional information retrieval system. To use these weights in
the relation browser we currently assign to each document
the two topics on which it scored highest. In future work we
will adopt a more rigorous method of deriving
documenttopic weight thresholds. Also, evaluation of the utility of
the learned topics for users will be undertaken.
6. EVALUATION OF CONCEPT
DISCOVERY
Prior to implementing a relation browser interface and
undertaking the attendant user studies, it is of course
important to evaluate the quality of the inferred concepts, and
the ability of the automatic classifier to assign documents
to the appropriate subjects. To evaluate the success of the
two-stage approach described in Section 5, we undertook
two experiments. During the first experiment we compared
three methods of document representation for the
clustering task. The goal here was to compare the quality of
document clusters derived by analysis of full-text documents,
documents represented only by their titles, and documents
represented by human-created keyword metadata. During
the second experiment, we analyzed the ability of the
statistical classifiers to discern the subject matter of documents
from portions of the database in addition to the Editor"s
Desk.
6.1 Comparing Document Representations
Documents from The Editor"s Desk column came
supplied with human-generated keyword metadata.
Additionally, The titles of the Editor"s Desk documents tend to be
germane to the topic of their respective articles. With such
an array of distilled evidence of each document"s subject
matter, we undertook a comparison of document
representations for topic discovery by clustering. We hypothesized
that keyword-based clustering would provide a useful model.
But we hoped to see whether comparable performance could
be attained by methods that did not require extensive
human indexing, such as the title-only or full-text
representations. To test this hypothesis, we defined three modes of
document representation-full-text, title-only, and keyword
only-we generated three sets of topics, Tfull, Ttitle, and
Tkw, respectively.
Topics based on full-text documents were derived by
application of k-means clustering to the 1279 Editor"s Desk
documents, where each document was represented by a
1908dimensional vector. These 1908 dimensions captured the
TF.IDF weights [3] of each term ti in document dj, for all
terms that occurred at least three times in the data. To
arrive at the appropriate number of clusters for these data, we
inspected the within-cluster mean-squared distance for each
value of k = 1 . . . 20. As k approached 10 the reduction in
error with the addition of more clusters declined notably,
suggesting that k ≈ 10 would yield good divisions. To
select a single integer value, we calculated which value of k led
to the least variation in cluster size. This metric stemmed
from a desire to suppress the common result where one large
cluster emerges from the k-means algorithm, accompanied
by several accordingly small clusters. Without reason to
believe that any single topic should have dramatically high
prior odds of document membership, this heuristic led to
kfull = 10.
Clusters based on document titles were constructed
similarly. However, in this case, each document was represented
in the vector space spanned by the 397 terms that occur
at least twice in document titles. Using the same method
of minimizing the variance in cluster membership ktitle-the
number of clusters in the title-based representation-was also
set to 10.
The dimensionality of the keyword-based clustering was
very similar to that of the title-based approach. There were
299 keywords in the data, all of which were retained. The
median number of keywords per document was 7, where a
keyword is understood to be either a single word, or a
multiword term such as consumer price index. It is worth noting
that the keywords were not drawn from any controlled
vocabulary; they were assigned to documents by publishers at
the BLS. Using the keywords, the documents were clustered
into 10 classes.
To evaluate the clusters derived by each method of
document representation, we used the subject headings that were
included with 1112 of the Editor"s Desk documents. Each
of these 1112 documents was assigned one or more subject
headings, which were withheld from all of the cluster
applications. Like the keywords, subject headings were assigned
to documents by BLS publishers. Unlike the keywords,
however, subject headings were drawn from a controlled
vocabulary. Our analysis began with the assumption that
documents with the same subject headings should cluster
together. To facilitate this analysis, we took a conservative
approach; we considered multi-subject classifications to be
unique. Thus if document di was assigned to a single
subject prices, while document dj was assigned to two subjects,
international comparisons, prices, documents di and dj are
not considered to come from the same class.
Table 1 shows all Editor"s Desk subject headings that were
assigned to at least 10 documents. As noted in the table,
155
Table 1: Top Editor"s Desk Subject Headings
Subject Count
prices 92
unemployment 55
occupational safety & health 53
international comparisons, prices 48
manufacturing, prices 45
employment 44
productivity 40
consumer expenditures 36
earnings & wages 27
employment & unemployment 27
compensation costs 25
earnings & wages, metro. areas 18
benefits, compensation costs 18
earnings & wages, occupations 17
employment, occupations 14
benefits 14
earnings & wage, regions 13
work stoppages 12
earnings & wages, industries 11
Total 609
Table 2: Contingecy Table for Three Document
Representations
Representation Right Wrong Accuracy
Full-text 392 217 0.64
Title 441 168 0.72
Keyword 601 8 0.98
there were 19 such subject headings, which altogether
covered 609 (54%) of the documents with subjects assigned.
These document-subject pairings formed the basis of our
analysis. Limiting analysis to subjects with N > 10 kept
the resultant χ2
tests suitably robust.
The clustering derived by each document representation
was tested by its ability to collocate documents with the
same subjects. Thus for each of the 19 subject headings
in Table 1, Si, we calculated the proportion of documents
assigned to Si that each clustering co-classified. Further,
we assumed that whichever cluster captured the majority of
documents for a given class constituted the right answer
for that class. For instance, There were 92 documents whose
subject heading was prices. Taking the BLS editors"
classifications as ground truth, all 92 of these documents should
have ended up in the same cluster. Under the full-text
representation 52 of these documents were clustered into category
5, while 35 were in category 3, and 5 documents were in
category 6. Taking the majority cluster as the putative right
home for these documents, we consider the accuracy of this
clustering on this subject to be 52/92 = 0.56. Repeating
this process for each topic across all three representations
led to the contingency table shown in Table 2.
The obvious superiority of the keyword-based clustering
evidenced by Table 2 was borne out by a χ2
test on the
accuracy proportions. Comparing the proportion right and
Table 3: Keyword-Based Clusters
benefits costs international jobs
plans compensation import employment
benefits costs prices jobs
employees benefits petroleum youth
occupations prices productivity safety
workers prices productivity safety
earnings index output health
operators inflation nonfarm occupational
spending unemployment
expenditures unemployment
consumer mass
spending jobless
wrong achieved by keyword and title-based clustering led to
p 0.001. Due to this result, in the remainder of this paper,
we focus our attention on the clusters derived by analysis of
the Editor"s Desk keywords. The ten keyword-based clusters
are shown in Table 3, represented by the three terms most
highly associated with each cluster, in terms of the log-odds
ratio. Additionally, each cluster has been given a label by
the researchers.
Evaluating the results of clustering is notoriously difficult.
In order to lend our analysis suitable rigor and utility, we
made several simplifying assumptions. Most problematic is
the fact that we have assumed that each document belongs
in only a single category. This assumption is certainly false.
However, by taking an extremely rigid view of what
constitutes a subject-that is, by taking a fully qualified and
often multipart subject heading as our unit of analysis-we
mitigate this problem. Analogically, this is akin to
considering the location of books on a library shelf. Although a
given book may cover many subjects, a classification system
should be able to collocate books that are extremely similar,
say books about occupational safety and health. The most
serious liability with this evaluation, then, is the fact that
we have compressed multiple subject headings, say prices :
international into single subjects. This flattening obscures
the multivalence of documents. We turn to a more realistic
assessment of document-class relations in Section 6.2.
6.2 Accuracy of the Document Classifiers
Although the keyword-based clusters appear to classify
the Editor"s Desk documents very well, their discovery only
solved half of the problem required for the successful
implementation of a dynamic user interface such as the
relation browser. The matter of roughly fourteen thousand
unclassified documents remained to be addressed. To solve
this problem, we trained the statistical classifiers described
above in Section 5. For each document in the collection
di, these classifiers give pi, a k-vector of probabilities or
distances (depending on the classification method used), where
pik quantifies the strength of association between the ith
document and the kth
class. All classifiers were trained on
the full text of each document, regardless of the
representation used to discover the initial clusters. The different
training sets were thus constructed simply by changing the
156
Table 4: Cross Validation Results for 3 Classifiers
Method Av. Percent Accuracy SE
Prind 59.07 1.07
Naive Bayes 75.57 0.4
SVM 75.08 0.68
class variable for each instance (document) to reflect its
assigned cluster under a given model.
To test the ability of each classifier to locate documents
correctly, we first performed a 10-fold cross validation on
the Editor"s Desk documents. During cross-validation the
data are split randomly into n subsets (in this case n = 10).
The process proceeds by iteratively holding out each of the
n subsets as a test collection for a model trained on the
remaining n − 1 subsets. Cross validation is described in
[15]. Using this methodology, we compared the performance
of the three classification models described above. Table 4
gives the results from cross validation.
Although naive Bayes is not significantly more accurate
for these data than the SVM classifier, we limit the
remainder of our attention to analysis of its performance. Our
selection of naive Bayes is due to the fact that it appears to
work comparably to the SVM approach for these data, while
being much simpler, both in theory and implementation.
Because we have only 1279 documents and 10 classes, the
number of training documents per class is relatively small.
In addition to models fitted to the Editor's Desk data, then,
we constructed a fourth model, supplementing the training
sets of each class by querying the Google search engine (footnote 7) and
applying naive Bayes to the augmented test set. For each
class, we created a query by submitting the three terms
with the highest log-odds ratio with that class. Further,
each query was limited to the domain www.bls.gov. For
each class we retrieved up to 400 documents from Google
(the actual number varied depending on the size of the
result set returned by Google). This led to a training set
of 4113 documents in the augmented model, as we call
it below (footnote 8). Cross validation suggested that the augmented
model decreased classification accuracy (accuracy = 58.16%,
with standard error = 0.32). As we discuss below, however,
augmenting the training set appeared to help generalization
during our second experiment.
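The augmentation queries themselves are simple to assemble: for each class, join its three highest log-odds terms and restrict the search to the BLS domain. The sketch below only builds the query strings; the placeholder term lists are invented, and no search API is called here.

```python
# Sketch of how the augmentation queries could be assembled.
def build_augmentation_queries(class_terms, domain="www.bls.gov"):
    """class_terms maps a class label to its ranked list of terms."""
    return {label: " ".join(terms[:3]) + f" site:{domain}"
            for label, terms in class_terms.items()}

example = {"jobs": ["jobs", "employment", "workers"],
           "prices": ["prices", "index", "inflation"]}
for label, query in build_augmentation_queries(example).items():
    print(label, "->", query)
```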
The results of our cross validation experiment are
encouraging. However, the success of our classifiers on the Editor's
Desk documents that informed the cross validation study
may not be a good predictor of the models' performance on
the remainder of the BLS website. To test the generality
of the naive Bayes classifier, we solicited input from 11
human judges who were familiar with the BLS website. The
sample was chosen by convenience, and consisted of faculty
and graduate students who work on the GovStat project.
However, none of the reviewers had prior knowledge of the
outcome of the classification before their participation. For
the experiment, a random sample of 100 documents was
drawn from the entire BLS collection.
(Footnote 7: http://www.google.com)
(Footnote 8: A more formal treatment of the combination of labeled and unlabeled data is available in [4].)
Table 5: Human-Model Agreement on 100 Sample Docs.
Human Judge 1st Choice:
  Model             Model 1st Choice   Model 2nd Choice
  N. Bayes (aug.)   14                 24
  N. Bayes          24                 1
Human Judge 2nd Choice:
  Model             Model 1st Choice   Model 2nd Choice
  N. Bayes (aug.)   14                 21
  N. Bayes          21                 4
On average each reviewer classified 83 documents, placing each document into
as many of the categories shown in Table 3 as he or she saw
fit.
Results from this experiment suggest that room for
improvement remains with respect to generalizing to the whole
collection from the class models fitted to the Editor's Desk
documents. In Table 5, we see, for each classifier, the
number of documents for which its first or second most probable
class was voted best or second best by the 11 human judges.
In the context of this experiment, we consider a first- or
second-place classification by the machine to be accurate
because the relation browser interface operates on a
multiway classification, where each document is classified into
multiple categories. Thus a document with the correct
class as its second choice would still be easily available to
a user. Likewise, a correct classification on either the most
popular or second most popular category among the human
judges is considered correct in cases where a given document
was classified into multiple classes. There were 72
multiclass documents in our sample, as seen in Figure 4. The
remaining 28 documents were assigned to 1 or 0 classes.
Under this rationale, the augmented naive Bayes
classifier correctly grouped 73 documents, while the smaller model
(not augmented by a Google search) correctly classified 50.
The resultant χ2 test gave p = 0.001, suggesting that
increasing the training set improved the ability of the naive
Bayes model to generalize from the Editor's Desk documents
to the collection as a whole. However, the improvement
afforded by the augmented model comes at some cost. In
particular, the augmented model is significantly inferior to the
model trained solely on Editor's Desk documents if we
concern ourselves only with documents selected by the majority
of human reviewers, i.e., only first-choice classes. Limiting
the right answers to the left column of Table 5 gives p = 0.02
in favor of the non-augmented model. For the purposes of
applying the relation browser to complex digital library
content (where documents will be classified along multiple
categories), the augmented model is preferable. But this is not
necessarily the case in general.
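The χ2 comparison of the two models can be reproduced from the agreement counts. The 2 × 2 construction below (correct versus incorrect out of the 100 sampled documents for each model) is our assumption about how the test was set up, but it yields a p-value in the same range as the one reported.

```python
# Chi-square test on classifier agreement counts (assumes scipy is installed).
from scipy.stats import chi2_contingency

augmented_correct, plain_correct, n = 73, 50, 100
table = [[augmented_correct, n - augmented_correct],
         [plain_correct, n - plain_correct]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```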
It must also be said that 73% accuracy under a fairly
liberal test condition leaves room for improvement in our
assignment of topics to categories. We may begin to
understand the shortcomings of the described techniques by
consulting Figure 5, which shows the distribution of
categories across documents given by humans and by the
augmented naive Bayes model.
[Figure 4: Number of Classes Assigned to Documents by Judges. Histogram of the number of human-assigned classes per document (0 to 7) against frequency.]
The majority of reviewers put
documents into only three categories, jobs, benefits, and
occupations. On the other hand, the naive Bayes classifier
distributed classes more evenly across the topics. This behavior
suggests areas for future improvement. Most importantly,
we observed a strong correlation among the three most
frequent classes among the human judges (for instance, there
was 68% correlation between benefits and occupations). This
suggests that improving the clustering to produce topics
that were more nearly orthogonal might improve
performance.
7. CONCLUSIONS AND FUTURE WORK
Many developers and maintainers of digital libraries share
the basic problem pursued here. Given increasingly large,
complex bodies of data, how may we improve access to
collections without incurring extraordinary cost, and while also
keeping systems receptive to changes in content over time?
Data mining and machine learning methods hold a great deal
of promise with respect to this problem. Empirical
methods of knowledge discovery can aid in the organization and
retrieval of information. As we have argued in this paper,
these methods may also be brought to bear on the design
and implementation of advanced user interfaces.
This study explored a hybrid technique for aiding
information architects as they implement dynamic interfaces such
as the relation browser. Our approach first applies
unsupervised learning techniques to a focused subset of the
BLS website. The goal of this initial stage is to discover the
most basic and far-reaching topics in the collection.
[Figure 5: Distribution of Classes Across Documents. Two panels showing the proportion of human classifications and machine classifications across the categories jobs, benefits, unemployment, prices, safety, international, spending, occupations, costs, and productivity.]
Based
on a statistical model of these topics, the second phase of
our approach uses supervised learning (in particular, a naive
Bayes classifier, trained on individual words), to assign
topical relations to the remaining documents in the collection.
In the study reported here, this approach has
demonstrated promise. In its favor, our approach is highly scalable.
It also appears to give fairly good results. Comparing three
modes of document representation (full text, title only, and
keyword), we found 98% accuracy as measured by
collocation of documents with identical subject headings. While it
is not surprising that editor-generated keywords should give
strong evidence for such learning, their superiority over
full text and titles was dramatic, suggesting that even a small
amount of metadata can be very useful for data mining.
However, we also found evidence that learning topics from
a subset of the collection may lead to overfitted models.
After clustering 1279 Editor's Desk documents into 10
categories, we fitted a 10-way naive Bayes classifier to categorize
the remaining 14,000 documents in the collection. While we
saw fairly good results (classification accuracy of 75% with
respect to a small sample of human judges), this experiment
forced us to reconsider the quality of the topics learned by
clustering. The high correlation among human judgments
in our sample suggests that the topics discovered by
analysis of the Editor's Desk were not independent. While we do
not desire mutually exclusive categories in our setting, we
do desire independence among the topics we model.
Overall, then, the techniques described here provide an
encouraging start to our work on acquiring subject
metadata for dynamic interfaces automatically. It also suggests
that a more sophisticated modeling approach might yield
better results in the future. In upcoming work we will
experiment with streamlining the two-phase technique described
here. Instead of clustering documents to find topics and
then fitting a model to the learned clusters, our goal is to
expand the unsupervised portion of our analysis beyond a
narrow subset of the collection, such as The Editor's Desk.
In current work we have defined algorithms to identify
documents likely to help the topic discovery task. Supplied with
a more comprehensive training set, we hope to experiment
with model-based clustering, which combines the clustering
and classification processes into a single modeling procedure.
Topic discovery and document classification have long been
recognized as fundamental problems in information retrieval
and other forms of text mining. What is increasingly clear,
however, as digital libraries grow in scope and complexity,
is the applicability of these techniques to problems at the
front-end of systems such as information architecture and
interface design. Finally, then, in future work we will build
on the user studies undertaken by Marchionini and Brunk
in efforts to evaluate the utility of automatically populated
dynamic interfaces for the users of digital libraries.
8. REFERENCES
[1] A. Agresti. An Introduction to Categorical Data
Analysis. Wiley, New York, 1996.
[2] C. Ahlberg, C. Williamson, and B. Shneiderman.
Dynamic queries for information exploration: an
implementation and evaluation. In Proceedings of the
SIGCHI conference on Human factors in computing
systems, pages 619-626, 1992.
[3] R. Baeza-Yates and B. Ribeiro-Neto. Modern
Information Retrieval. ACM Press, 1999.
[4] A. Blum and T. Mitchell. Combining labeled and
unlabeled data with co-training. In Proceedings of the
eleventh annual conference on Computational learning
theory, pages 92-100. ACM Press, 1998.
[5] H. Chen and S. Dumais. Hierarchical classification of
web content. In Proceedings of the 23rd annual
international ACM SIGIR conference on Research and
development in information retrieval, pages 256-263,
2000.
[6] M. Efron, G. Marchionini, and J. Zhang. Implications
of the recursive representation problem for automatic
concept identification in on-line governmental
information. In Proceedings of the ASIST Special
Interest Group on Classification Research (ASIST
SIG-CR), 2003.
[7] C. Fraley and A. E. Raftery. How many clusters?
which clustering method? answers via model-based
cluster analysis. The Computer Journal,
41(8):578-588, 1998.
[8] A. K. Jain, M. N. Murty, and P. J. Flynn. Data
clustering: a review. ACM Computing Surveys,
31(3):264-323, September 1999.
[9] T. Joachims. A probabilistic analysis of the Rocchio
algorithm with TFIDF for text categorization. In
D. H. Fisher, editor, Proceedings of ICML-97, 14th
International Conference on Machine Learning, pages
143-151, Nashville, US, 1997. Morgan Kaufmann
Publishers, San Francisco, US.
[10] T. Joachims. Text categorization with support vector
machines: learning with many relevant features. In
C. Nédellec and C. Rouveirol, editors, Proceedings of
ECML-98, 10th European Conference on Machine
Learning, pages 137-142, Chemnitz, DE, 1998.
Springer Verlag, Heidelberg, DE.
[11] I. T. Jolliffe. Principal Component Analysis. Springer,
2nd edition, 2002.
[12] L. Kaufman and P. J. Rousseeuw. Finding Groups in
Data: an Introduction to Cluster Analysis. Wiley,
1990.
[13] G. Marchionini and B. Brunk. Toward a general
relation browser: a GUI for information architects.
Journal of Digital Information, 4(1), 2003.
http://jodi.ecs.soton.ac.uk/Articles/v04/i01/Marchionini/.
[14] A. K. McCallum. Bow: A toolkit for statistical
language modeling, text retrieval, classification and
clustering. http://www.cs.cmu.edu/˜mccallum/bow,
1996.
[15] T. Mitchell. Machine Learning. McGraw Hill, 1997.
[16] E. Rasmussen. Clustering algorithms. In W. B. Frakes
and R. Baeza-Yates, editors, Information Retrieval:
Data Structures and Algorithms, pages 419-442.
Prentice Hall, 1992.
[17] R. Tibshirani, G. Walther, and T. Hastie. Estimating
the number of clusters in a dataset via the gap
statistic, 2000.
http://citeseer.nj.nec.com/tibshirani00estimating.html.
[18] V. N. Vapnik. The Nature of Statistical Learning
Theory. Springer, 2000.
159 | machine learn;information architecture;interface design;multiway classification;access;bureau of labor statistics;bls collection;data-driven approach;digital library;k-means clustering;machine learning technique;eigenvalue;complex digital library;supervised and unsupervised learning technique |
train_H-69 | Ranking Web Objects from Multiple Communities | Vertical search is a promising direction as it leverages domainspecific knowledge and can provide more precise information for users. In this paper, we study the Web object-ranking problem, one of the key issues in building a vertical search engine. More specifically, we focus on this problem in cases when objects lack relationships between different Web communities, and take high-quality photo search as the test bed for this investigation. We proposed two score fusion methods that can automatically integrate as many Web communities (Web forums) with rating information as possible. The proposed fusion methods leverage the hidden links discovered by a duplicate photo detection algorithm, and aims at minimizing score differences of duplicate photos in different forums. Both intermediate results and user studies show the proposed fusion methods are practical and efficient solutions to Web object ranking in cases we have described. Though the experiments were conducted on high-quality photo ranking, the proposed algorithms are also applicable to other ranking problems, such as movie ranking and music ranking. | 1. INTRODUCTION
Despite numerous refinements and optimizations, general
purpose search engines still fail to find relevant results for
many queries. As a new trend, vertical search has shown
promise because it can leverage domain-specific knowledge
and is more effective in connecting users with the
information they want. There are many vertical search engines,
including some for paper search (e.g. Libra [21], Citeseer
[7] and Google Scholar [4]), product search (e.g. Froogle
[5]), movie search [6], image search [1, 8], video search [6],
local search [2], as well as news search [3]. We believe the
vertical search engine trend will continue to grow.
Essentially, building vertical search engines includes data
crawling, information extraction, object identification and
integration, and object-level Web information retrieval (or
Web object ranking) [20], among which ranking is one of the
most important factors. This is because it deals with the
core problem of how to combine and rank objects coming
from multiple communities.
Although object-level ranking has been well studied in
building vertical search engines, there are still some kinds
of vertical domains in which objects cannot be effectively
ranked. For example, algorithms that evolved from
PageRank [22], PopRank [21] and LinkFusion [27] were proposed
to rank objects coming from multiple communities, but can
only work on well-defined graphs of heterogeneous data.
Well-defined means that like objects (e.g. authors in
paper search) can be identified in multiple communities (e.g.
conferences). This allows heterogeneous objects to be well
linked to form a graph through leveraging all the
relationships (e.g. cited-by, authored-by and published-by) among
the multiple communities.
However, this assumption does not always stand for some
domains. High-quality photo search, movie search and news
search are exceptions. For example, a photograph forum
website usually includes three kinds of objects: photos,
authors and reviewers. Yet different photo forums seem to
lack any relationships, as there are no cited-by relationships.
This makes it difficult to judge whether two authors cited
are the same author, or two photos are indeed identical
photos. Consequently, although each photo has a rating score
in a forum, it is non-trivial to rank photos coming from
different photo forums. Similar problems also exist in movie
search and news search. Although two movie titles can be
identified as the same one by title and director in different
movie discussion groups, it is non-trivial to combine
rating scores from different discussion groups and rank movies
effectively. We call such non-trivial object relationship in
which identification is difficult, incomplete relationships.
Other related work includes rank aggregation for the Web
[13, 14], and learning algorithms for ranking, such as RankBoost
[15], RankSVM [17, 19], and RankNet [12]. We will contrast
the differences of these methods with the proposed methods
after we have described the problem and our methods.
We will specifically focus on the Web object-ranking
problem in cases that lack object relationships or have
incomplete object relationships, and take high-quality photo
search as the test bed for this investigation. In the following,
we will introduce rationale for building high-quality photo
search.
1.1 High-Quality Photo Search
In the past ten years, the Internet has grown to become
an incredible resource, allowing users to easily access a huge
number of images. However, compared to the more than 1
billion images indexed by commercial search engines, actual
queries submitted to image search engines are relatively
minor, and occupy only 8-10 percent of total image and text
queries submitted to commercial search engines [24]. This
is partially because user requirements for image search are
far less than those for general text search. On the other
hand, current commercial search engines still cannot well
meet various user requirements, because there is no
effective and practical solution to understand image content.
To better understand user needs in image search, we
conducted a query log analysis based on a commercial search
engine. The result shows that more than 20% of image
search queries are related to nature and places and daily
life categories. Users apparently are interested in enjoying
high-quality photos or searching for beautiful images of
locations or other kinds. However, such user needs are not
well supported by current image search engines because of
the difficulty of the quality assessment problem.
Ideally, the most critical part of a search engine - the
ranking function - can be simplified as consisting of two
key factors: relevance and quality. For the relevance
factor, search in current commercial image search engines
provide most returned images that are quite relevant to queries,
except for some ambiguity. However, as to quality factor,
there is still no way to give an optimal rank to an image.
Though content-based image quality assessment has been
investigated over many years [23, 25, 26], it is still far from
ready to provide a realistic quality measure in the immediate
future.
At first glance, it may seem pessimistic to build an image
search engine that can fulfill the potentially large
demand for enjoying high-quality photos. The various proliferating
Web communities, however, show us that people today
have created and shared a lot of high-quality photos on the
Web on virtually any topic, which provides a rich source for
building a better image search engine.
In general, photos from various photo forums are of higher
quality than personal photos, and are also much more
appealing to public users than personal photos. In addition,
photos uploaded to photo forums generally require rich
metadata about title, camera setting, category and description to
be provide by photographers. These metadata are actually
the most precise descriptions for photos and undoubtedly
can be indexed to help search engines find relevant results.
More important, there are volunteer users in Web
communities actively providing valuable ratings for these photos.
The rating information is generally of great value in solving
the photo quality ranking problem.
Motivated by such observations, we have been attempting
to build a vertical photo search engine by extracting rich
metadata and integrating information form various photo
Web forums. In this paper, we specifically focus on how to
rank photos from multiple Web forums.
Intuitively, the rating scores from different photo forums
can be empirically normalized based on the number of
photos and the number of users in each forum. However, such
a straightforward approach usually requires large manual
effort in both tedious parameter tuning and subjective
results evaluation, which makes it impractical when there are
tens or hundreds of photo forums to combine. To address
this problem, we seek to build relationships/links between
different photo forums. That is, we first adopt an efficient
algorithm to find duplicate photos which can be considered
as hidden links connecting multiple forums. We then
formulate the ranking challenge as an optimization problem,
which eventually results in an optimal ranking function.
1.2 Main Contributions and Organization.
The main contributions of this paper are:
1. We have proposed and built a vertical image search
engine by leveraging rich metadata from various photo
forum Web sites to meet user requirements of searching
for and enjoying high-quality photos, which is
impossible in traditional image search engines.
2. We have proposed two kinds of Web object-ranking
algorithms for photos with incomplete relationships,
which can automatically and efficiently integrate as
many Web communities with rating information as
possible, and achieve results qualitatively comparable
to the manually tuned fusion scheme.
The rest of this paper is organized as follows. In Section
2, we present in detail the proposed solutions to the
ranking problem, including how to find hidden links between
different forums, normalize rating scores, obtain the
optimal ranking function, and contrast our methods with some
other related research. In Section 3, we describe the
experimental setting and experiments and user studies conducted
to evaluate our algorithm. Our conclusion and a discussion
of future work is in Section 4.
It is worth noting that although we treat vertical photo
search as the test bed in this paper, the proposed ranking
algorithm can also be applied to rank other content that
includes video clips, poems, short stories, drawings,
sculptures, music, and so on.
2. ALGORITHM
2.1 Overview
The difficulty of integrating multiple Web forums is in
their different rating systems, where there are generally two
kinds of freedom. The first kind of freedom is the rating
interval or rating scale including the minimal and maximal
ratings for each Web object. For example, some forums use
a 5-point rating scale whereas other forums use 3-point or
10-point rating scales. It seems easy to fix this freedom, but
detailed analysis of the data and experiments show that it
is a non-trivial problem.
The second kind of freedom is the varying rating criteria
found in different Web forums. That is, the same score does
not mean the same quality in different forums. Intuitively, if
we can detect same photographers or same photographs, we
can build relationships between any two photo forums and
therefore can standardize the rating criterion by score
normalization and transformation. Fortunately, we find that
quite a number of duplicate photographs exist in various
Web photo forums. This fact is reasonable when
considering that photographers sometimes submit a photo to more
than one forum to obtain critiques or in hopes of widespread
publicity. In this work, we adopt an efficient duplicate photo
detection algorithm [10] to find these photos.
The proposed methods below are based on the following
considerations. Faced with the need to overcome a ranking
problem, a standardized rating criterion rather than a
reasonable rating criterion is needed. Therefore, we can take
a large scale forum as the reference forum, and align other
forums by taking into account duplicate Web objects
(duplicate photos in this work). Ideally, the scores of duplicate
photos should be equal even though they are in different
forums. Yet we can deem that scores in different forums -
except for the reference forum - can vary in a parametric
space. This can be determined by minimizing the objective
function defined by the sum of squares of the score
differences. By formulating the ranking problem as an
optimization problem that attempts to make the scores of duplicate
photos in non-reference forums as close as possible to those
in the reference forum, we can effectively solve the ranking
problem.
For convenience, the following notations are employed.
$S_{ki}$ and $\bar{S}_{ki}$ denote the total score and mean score of the $i$th Web
object (photo) in the $k$th Web site, respectively. The total
score refers to the sum of the various rating scores (e.g., novelty
rating and aesthetic rating), and the mean score refers
to the mean of the various rating scores. Suppose there are
a total of $K$ Web sites. We further use
$$\{S^{kl}_i \mid i = 1, \ldots, I_{kl};\; k, l = 1, \ldots, K;\; k \neq l\}$$
to denote the set of scores for Web objects (photos) in the $k$th
Web forum that are duplicates of those in the $l$th Web forum,
where $I_{kl}$ is the total number of duplicate Web objects
between these two Web sites. In general, score fusion can be
seen as the procedure of finding $K$ transforms
$$\psi_k(S_{ki}) = \tilde{S}_{ki}, \quad k = 1, \ldots, K$$
such that $\tilde{S}_{ki}$ can be used to rank Web objects from different
Web sites.
Figure 1: Web community integration. Each Web
community forms a subgraph, and all communities
are linked together by some hidden links (dashed
lines).
The objective function described in the above
paragraph can then be formulated as
$$\min_{\{\psi_k \mid k = 2, \ldots, K\}} \; \sum_{k=2}^{K} \sum_{i=1}^{I_{k1}} \bar{w}^{k}_{i} \left[ S^{1k}_{i} - \psi_k(S^{k1}_{i}) \right]^{2} \qquad (1)$$
where we use $k = 1$ as the reference forum and thus $\psi_1(S_{1i}) = S_{1i}$.
$\bar{w}^{k}_{i}\,(\geq 0)$ is a weight coefficient that can be set
heuristically according to the numbers of voters (reviewers or
commenters) in both the reference forum and the non-reference
forum. The more reviewers, the more popular the photo is
and the larger the corresponding weight $\bar{w}^{k}_{i}$ should be. In
this work, we do not inspect the problem of how to choose $\bar{w}^{k}_{i}$
and simply set them to one. But we believe the proper use
of $\bar{w}^{k}_{i}$, which leverages more information, can significantly
improve the results.
Figure 1 illustrates the aforementioned idea. The Web
Community 1 is the reference community. The dashed lines
are links indicating that the two linked Web objects are
actually the same. The proposed algorithm will try to find the
best ψk(k = 2, ..., K), which has certain parametric forms
according to certain models. So as to minimize the cost
function defined in Eq. 1, the summation is taken on all the
red dashed lines.
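To make Eq. 1 concrete, the sketch below evaluates the objective for a given set of candidate transforms over the duplicate-photo score pairs. The data structure (a dict of paired scores per forum) and the unit weights are illustrative assumptions rather than the paper's actual implementation.

```python
# Evaluating the objective of Eq. 1 for candidate transforms psi_k.
# dup_pairs[k] holds (reference_score, forum_k_score) tuples for photos that
# are duplicated between the reference forum (k = 1) and forum k.
def fusion_objective(dup_pairs, transforms, weights=None):
    total = 0.0
    for k, pairs in dup_pairs.items():
        psi = transforms[k]
        for i, (s_ref, s_k) in enumerate(pairs):
            w = 1.0 if weights is None else weights[k][i]
            total += w * (s_ref - psi(s_k)) ** 2
    return total

# Toy usage: two non-reference forums with hand-made duplicate score pairs.
dup_pairs = {2: [(8.0, 70.0), (5.0, 40.0)], 3: [(6.0, 3.2), (9.0, 4.8)]}
transforms = {2: lambda s: 0.1 * s, 3: lambda s: 1.8 * s + 0.2}
print(fusion_objective(dup_pairs, transforms))
```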
We will first discuss the score normalization methods in
Section 2.2, which serves as the basis for the following work.
Before we describe the proposed ranking algorithms, we first
introduce a manually tuned method in Section 2.3, which is
laborious and even impractical when the number of
communities become large. In Section 2.4, we will briefly explain
how to precisely find duplicate photos between Web forums.
Then we will describe the two proposed methods: Linear
fusion and Non-linear fusion, and a performance measure for
result evaluation in Section 2.5. Finally, in Section 2.6 we
will discuss the relationship of the proposed methods with
some other related work.
2.2 Score Normalization
Since different Web (photo) forums on the Web usually
have different rating criteria, it is necessary to normalize
them before applying different kinds of fusion methods. In
addition, as there are many kinds of ratings, such as
ratings for novelty and ratings for aesthetics, it is reasonable
to choose a common one - the total score or the average
score - that can always be extracted from any Web forum or
calculated from the corresponding ratings. This allows the
normalization method on the total score or average score to be viewed
as an impartial rating method across different Web
forums.
It is straightforward to normalize average scores by
linearly transforming them to a fixed interval; we call the
result the Scaled Mean Score. The difficulty with this
normalization method is that, if only a few users rate an
object, say a photo in a photo forum, the average score for
the object is easily spammed or skewed.
The total score avoids such drawbacks, since it carries more
information, such as a Web object's quality and popularity.
The problem is thus how to normalize total scores across
different Web forums. The simplest way is normalization
by the maximal and minimal scores. The drawback of this
method is that it is not robust; in other words, it is
sensitive to outliers.
To make the normalization insensitive to unusual data,
we propose the Mode-90% Percentile normalization method.
Here, the mode score is the total score that has been
assigned to more photos than any other total score, and the
high percentile score (e.g., the 90% percentile) is the total score
below which that percentage of photos fall. This
normalization method uses the mode and the 90% percentile
as two reference points to align two rating systems,
which makes the distributions of total scores in different
forums more consistent. The underlying assumption, for
example in different photo forums, is that even though the
qualities of the top photos in different forums may vary
greatly and depend on the forum quality, the distribution of
photos of middle-level quality (from the mode to the 90%
percentile) should be almost the same, up to a freedom that
reflects the rating criterion (strictness) of each Web forum.
Photos of this middle level usually account for more than
70% of the total photos in a forum.
We will give more detailed analysis of the scores in Section
3.2.
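As a minimal sketch of the Mode-90% Percentile normalization just described, the function below maps a forum's mode score to one anchor and its 90th percentile to another (5 and 8 are the anchor values used later in Section 3.2) with a simple affine map. The tie-breaking for the mode and the exact percentile definition are our assumptions.

```python
from collections import Counter

def mode_percentile_normalize(scores, mode_target=5.0, pct_target=8.0, pct=0.9):
    """Affinely map a forum's total scores so mode -> mode_target, 90% -> pct_target."""
    srt = sorted(scores)
    mode = Counter(scores).most_common(1)[0][0]
    p90 = srt[min(int(pct * len(srt)), len(srt) - 1)]
    if p90 == mode:                      # degenerate forum: fall back to identity
        return list(scores)
    scale = (pct_target - mode_target) / (p90 - mode)
    return [mode_target + (s - mode) * scale for s in scores]

# Toy usage on invented total scores from one forum.
print(mode_percentile_normalize([1, 2, 2, 2, 3, 4, 5, 6, 9, 12]))
```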
2.3 Manual Fusion
The Web movie forum, IMDB [16], proposed to use a
Bayesian-ranking function to normalize rating scores within
one community. Motivated by this ranking function, we
propose this manual fusion method: For the kth Web site, we
use the following formula
$$\tilde{S}_{ki} = \alpha_k \cdot \left( \frac{n_k \cdot \bar{S}_{ki}}{n_k + n^{*}_{k}} + \frac{n^{*}_{k} \cdot S^{*}_{k}}{n_k + n^{*}_{k}} \right) \qquad (2)$$
to rank photos, where $n_k$ is the number of votes and $n^{*}_{k}$,
$S^{*}_{k}$ and $\alpha_k$ are three parameters. This ranking function first
strikes a balance between the original mean score $\bar{S}_{ki}$ and a
reference score $S^{*}_{k}$ to get a weighted mean score, which may
be more reliable than $\bar{S}_{ki}$. The weighted mean score is then
scaled by $\alpha_k$ to get the final score $\tilde{S}_{ki}$.
For $n$ Web communities, there are then about $3n$ parameters
in $\{(\alpha_k, n^{*}_{k}, S^{*}_{k}) \mid k = 1, \ldots, n\}$ to tune. Though this
method can achieve fairly good results after careful and
thorough manual tuning of these parameters, when $n$
becomes increasingly large (say, tens or hundreds of
Web communities crawled and indexed), this method
becomes more and more laborious and eventually
impractical. It is therefore desirable to find an effective
fusion method whose parameters can be determined
automatically.
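For reference, Eq. 2 translates directly into code. The parameter values in the toy call below are placeholders that would have to be tuned per forum, which is exactly the manual effort the following sections try to avoid.

```python
# IMDB-style manual fusion score of Eq. 2 for one photo in forum k.
def manual_fusion_score(mean_score, n_votes, alpha_k, n_star, s_star):
    weighted_mean = (n_votes * mean_score + n_star * s_star) / (n_votes + n_star)
    return alpha_k * weighted_mean

# Toy usage with made-up parameters (alpha_k, n_star, s_star) for one forum.
print(manual_fusion_score(mean_score=7.8, n_votes=12, alpha_k=1.2,
                          n_star=25, s_star=6.0))
```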
2.4 Duplicate Photo Detection
We use Dedup [10], an efficient and effective duplicate
image detection algorithm, to find duplicate photos between
any two photo forums. This algorithm uses hash function
to map a high dimensional feature to a 32 bits hash code
(see below for how to construct the hash code). Its
computational complexity to find all the duplicate images among
n images is about O(n log n). The low-level visual feature
for each photo is extracted on k × k regular grids. Based
on all features extracted from the image database, a PCA
model is built. The visual features are then transformed to
a relatively low-dimensional and zero mean PCA space, or
29 dimensions in our system. Then the hash code for each
photo is built as follows: each dimension is transformed to
one, if the value in this dimension is greater than 0, and 0
otherwise. Photos in the same bucket are deemed potential
duplicates and are further filtered by a threshold in terms
of Euclidean similarity in the visual feature space.
Figure 2 illustrates the hashing procedure, where visual
features - mean gray values - are extracted on both 6 × 6
and 7×7 grids. The 85-dimensional features are transformed
to a 32-dimensional vector, and the hash code is generated
according to the signs.
Figure 2: Hashing procedure for duplicate photo
detection
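The hashing step described above can be sketched as follows. The grid feature extraction, the use of scikit-learn's PCA, and the 32-bit truncation are illustrative assumptions rather than the exact Dedup implementation of [10].

```python
# Sign-of-PCA hash codes for duplicate detection (assumes numpy + scikit-learn).
import numpy as np
from sklearn.decomposition import PCA
from collections import defaultdict

def grid_mean_gray(image, grid=6):
    """Mean gray value on a grid x grid partition of a 2-D image array."""
    h, w = image.shape
    return np.array([image[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
                     for i in range(grid) for j in range(grid)])

def hash_codes(features, n_bits=32):
    """Project mean-centered features with PCA and keep the signs as a bit string."""
    comps = PCA(n_components=min(n_bits, features.shape[1])).fit_transform(features)
    return ["".join("1" if v > 0 else "0" for v in row) for row in comps]

def bucket_candidates(codes):
    """Photos sharing a hash code are candidate duplicates (to be re-checked
    with a Euclidean-distance threshold on the original features)."""
    buckets = defaultdict(list)
    for idx, code in enumerate(codes):
        buckets[code].append(idx)
    return {c: ids for c, ids in buckets.items() if len(ids) > 1}

# Toy usage: random images plus three exact copies, 6x6 and 7x7 grid features.
rng = np.random.default_rng(0)
imgs = rng.random((40, 60, 60))
imgs = np.concatenate([imgs, imgs[:3]])
feats = np.stack([np.concatenate([grid_mean_gray(im, 6), grid_mean_gray(im, 7)])
                  for im in imgs])
print(bucket_candidates(hash_codes(feats)))
```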
2.5 Score Fusion
In this section, we will present two solutions on score
fusion based on different parametric form assumptions of ψk
in Eq. 1.
2.5.1 Linear Fusion by Duplicate Photos
Intuitively, the most straightforward way to factor out the
uncertainties caused by the different criteria is to scale,
relative to a given center, the total scores of each unreferenced
Web photo forum with respect to the reference forum. More
strictly, we assume $\psi_k$ has the following form
$$\psi_k(S_{ki}) = \alpha_k S_{ki} + t_k, \quad k = 2, \ldots, K \qquad (3)$$
$$\psi_1(S_{1i}) = S_{1i} \qquad (4)$$
which means that the scores of the $k(\neq 1)$th forum should be
scaled by $\alpha_k$ relative to the center $\frac{t_k}{1 - \alpha_k}$, as shown in Figure 3.
Then, if we substitute the above $\psi_k$ into Eq. 1, we get the
following objective function,
$$\min_{\{\alpha_k, t_k \mid k = 2, \ldots, K\}} \; \sum_{k=2}^{K} \sum_{i=1}^{I_{k1}} \bar{w}^{k}_{i} \left[ S^{1k}_{i} - \alpha_k S^{k1}_{i} - t_k \right]^{2}. \qquad (5)$$
By solving the following set of equations,
$$\frac{\partial f}{\partial \alpha_k} = 0, \quad \frac{\partial f}{\partial t_k} = 0, \qquad k = 2, \ldots, K,$$
where $f$ is the objective function defined in Eq. 5, we get
the closed-form solution:
$$\begin{pmatrix} \alpha_k \\ t_k \end{pmatrix} = A_k^{-1} L_k \qquad (6)$$
where
$$A_k = \begin{pmatrix} \sum_i \bar{w}_i (S^{k1}_{i})^{2} & \sum_i \bar{w}_i S^{k1}_{i} \\ \sum_i \bar{w}_i S^{k1}_{i} & \sum_i \bar{w}_i \end{pmatrix} \qquad (7)$$
$$L_k = \begin{pmatrix} \sum_i \bar{w}_i S^{1k}_{i} S^{k1}_{i} \\ \sum_i \bar{w}_i S^{1k}_{i} \end{pmatrix} \qquad (8)$$
and $k = 2, \ldots, K$.
This is a linear fusion method. It enjoys simplicity and
excellent performance in the following experiments.
Figure 3: Linear Fusion method
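Eqs. 6-8 amount to solving a 2 x 2 weighted least-squares system per forum. The sketch below does exactly that with numpy, using unit weights as in the paper; the toy data are invented.

```python
# Closed-form linear fusion parameters (alpha_k, t_k) from Eqs. 6-8.
import numpy as np

def linear_fusion_params(s_ref, s_k, weights=None):
    """s_ref, s_k: scores of the duplicate photos in the reference forum and
    in forum k; returns (alpha_k, t_k) minimizing Eq. 5."""
    s_ref, s_k = np.asarray(s_ref, float), np.asarray(s_k, float)
    w = np.ones_like(s_k) if weights is None else np.asarray(weights, float)
    A = np.array([[np.sum(w * s_k ** 2), np.sum(w * s_k)],
                  [np.sum(w * s_k),      np.sum(w)]])
    L = np.array([np.sum(w * s_ref * s_k), np.sum(w * s_ref)])
    alpha_k, t_k = np.linalg.solve(A, L)
    return alpha_k, t_k

# Toy usage: forum-k scores that are roughly 10x the reference scores; the
# recovered (alpha_k, t_k) undoes that scaling.
ref = [5.0, 6.5, 8.0, 9.0]
other = [48.0, 63.0, 79.0, 88.0]
print(linear_fusion_params(ref, other))
```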
2.5.2 Nonlinear Fusion by Duplicate Photos
Sometimes we want a method which can adjust scores on
intervals with two endpoints unchanged. As illustrated in
Figure 4, the method can tune scores between [C0, C1] while
leaving scores C0 and C1 unchanged. This kind of fusion
method is much finer than the linear one, contains many
more parameters to tune, and is expected to further
improve the results.
Here, we propose a nonlinear fusion solution to satisfy
such constraints. First, we introduce a transform:
$$\eta_{c_0, c_1, \alpha}(x) = \begin{cases} \left( \dfrac{x - c_0}{c_1 - c_0} \right)^{\alpha} (c_1 - c_0) + c_0, & \text{if } x \in (c_0, c_1] \\ x, & \text{otherwise} \end{cases}$$
where $\alpha > 0$. This transform satisfies that for $x \in [c_0, c_1]$,
$\eta_{c_0,c_1,\alpha}(x) \in [c_0, c_1]$ with $\eta_{c_0,c_1,\alpha}(c_0) = c_0$ and $\eta_{c_0,c_1,\alpha}(c_1) = c_1$.
Then we can utilize this nonlinear transform to adjust
the scores in a certain interval, say $(M, T]$,
$$\psi_k(S_{ki}) = \eta_{M,T,\alpha}(S_{ki}). \qquad (9)$$
Figure 4: Nonlinear Fusion method. We intend to
finely adjust the shape of the curves in each segment.
Even though there is no closed-form solution for the following
optimization problem,
$$\min_{\{\alpha_k \mid k \in [2, K]\}} \; \sum_{k=2}^{K} \sum_{i=1}^{I_{k1}} \bar{w}^{k}_{i} \left[ S^{1k}_{i} - \eta_{M,T,\alpha}(S_{ki}) \right]^{2}$$
it is not hard to obtain a numerical one. Under the same
assumptions made in Section 2.2, we can use this method to
adjust scores of the middle-level (from the mode point to
the 90 % percentile).
This more complicated non-linear fusion method is
expected to achieve better results than the linear one.
However, difficulties in evaluating the rank results block us from
tuning these parameters extensively. The current
experiments in Section 3.5 do not reveal any advantages over the
simple linear model.
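A numerical solution for the exponent of the transform in Eq. 9 can be obtained with a bounded one-dimensional search. The sketch below uses scipy's minimize_scalar as an illustrative stand-in for whatever optimizer the authors used; the toy score pairs and the search bounds are assumptions.

```python
# Numerically fitting the exponent of the nonlinear transform (assumes scipy).
import numpy as np
from scipy.optimize import minimize_scalar

def eta(x, c0, c1, alpha):
    """The piecewise transform of Eq. 9 applied elementwise."""
    x = np.asarray(x, float)
    inside = (x > c0) & (x <= c1)
    out = x.copy()
    out[inside] = ((x[inside] - c0) / (c1 - c0)) ** alpha * (c1 - c0) + c0
    return out

def fit_alpha(s_ref, s_k, mode, p90):
    """Search alpha > 0 minimizing squared differences on duplicate scores."""
    s_ref, s_k = np.asarray(s_ref, float), np.asarray(s_k, float)
    loss = lambda a: np.sum((s_ref - eta(s_k, mode, p90, a)) ** 2)
    res = minimize_scalar(loss, bounds=(0.05, 20.0), method="bounded")
    return res.x

# Toy usage with scores already normalized to a common scale.
ref = [5.2, 6.1, 7.4, 7.9]
other = [5.0, 5.6, 7.0, 7.8]
print(fit_alpha(ref, other, mode=5.0, p90=8.0))
```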
2.5.3 Performance Measure of the Fusion Results
Since our objective function is to make the scores of the
same Web objects (e.g. duplicate photos) between a
nonreference forum and the reference forum as close as possible,
it is natural to investigate how close they become to each
other and how the scores of the same Web objects change
between the two non-reference forums before and after score
fusion.
Taken Figure 1 as an example, the proposed algorithms
minimize the score differences of the same Web objects in
two Web forums: the reference forum (the Web Community
1) and a non-reference forum, which corresponds to
minimizing the objective function on the red dashed (hidden)
links. After the optimization, we must ask what happens to
the score differences of the same Web objects in two
nonreference forums? Or, in other words, whether the scores
of two objects linked by the green dashed (hidden) links
become more consistent?
We therefore define the following performance measure -
the δ measure - to quantify the changes in the scores of the same
Web objects in different Web forums as
$$\delta_{kl} = \mathrm{Sim}(S^{lk}_{*}, S^{kl}_{*}) - \mathrm{Sim}(S^{lk}, S^{kl}) \qquad (10)$$
where $S^{kl} = (S^{kl}_{1}, \ldots, S^{kl}_{I_{kl}})^{T}$ collects the original scores,
$S^{kl}_{*} = (\tilde{S}^{kl}_{1}, \ldots, \tilde{S}^{kl}_{I_{kl}})^{T}$ collects the fused scores, and
$$\mathrm{Sim}(a, b) = \frac{a \cdot b}{\|a\| \, \|b\|}.$$
δkl > 0 means after score fusion, scores on the same Web
objects between kth and lth Web forum become more
consistent, which is what we expect. On the contrary, if δkl < 0,
those scores become more inconsistent.
Although we cannot rely on this measure to evaluate our
final fusion results as ranking photos by their popularity and
qualities is such a subjective process that every person can
have its own results, it can help us understand the
intermediate ranking results and provide insights into the final
performances of different ranking methods.
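The δ measure of Eq. 10 is just a difference of cosine similarities between the paired score vectors after and before fusion. A small sketch, with invented score vectors:

```python
# The delta measure of Eq. 10 (assumes numpy).
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def delta_measure(s_lk, s_kl, s_lk_fused, s_kl_fused):
    """Change in cosine similarity of the duplicate-photo score vectors."""
    return cosine(s_lk_fused, s_kl_fused) - cosine(s_lk, s_kl)

# Toy usage: fused scores that agree more closely give a positive delta.
print(delta_measure([3, 8, 6], [30, 95, 50], [3.1, 8.2, 6.0], [3.0, 8.4, 5.7]))
```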
2.6 Contrasts with Other Related Work
We have already mentioned the differences of the proposed
methods with the traditional methods, such as PageRank
[22], PopRank [21], and LinkFusion [27] algorithms in
Section 1. Here, we discuss some other related works.
The current problem can also be viewed as a rank
aggregation one [13, 14] as we deal with the problem of how to
combine several rank lists. However, there are
fundamental differences between them. First of all, unlike the Web
pages, which can be easily and accurately detected as the
same pages, detecting the same photos in different Web
forums is a non-trivial task, and can only be implemented by
delicate algorithms with a certain precision and
recall. Second, the numbers of duplicate photos from
different Web forums are small relative to the whole photo
sets (see Table 1). In other words, the top-K rank lists
of different Web forums are almost disjoint for a given
query. Under this condition, both the algorithms proposed
in [13] and their measurements - Kendall tau distance or
Spearman footrule distance - will degenerate to some
trivial cases.
Another category of rank fusion (aggregation) methods is
based on machine learning algorithms, such as RankSVM
[17, 19], RankBoost [15], and RankNet [12]. All of these
methods entail some labelled datasets to train a model. In
current settings, it is difficult or even impossible to get these
datasets labelled as to their level of professionalism or
popularity, since the photos are too vague and subjective to rank.
Instead, the problem here is how to combine several ordered
sub lists to form a total order list.
3. EXPERIMENTS
In this section, we carry out our research on high-quality
photo search. We first briefly introduce the newly proposed
vertical image search engine - EnjoyPhoto in section 3.1.
Then we focus on how to rank photos from different Web
forums. In order to do so, we first normalize the scores
(ratings) for photos from different multiple Web forums in
section 3.2. Then we try to find duplicate photos in section
3.3. Some intermediate results are discussed using δ measure
in section 3.4. Finally a set of user studies is carried out
carefully to justify our proposed method in section 3.5.
3.1 EnjoyPhoto: high-quality Photo Search
Engine
In order to meet user requirement of enjoying high-quality
photos, we propose and build a high-quality photo search
engine - EnjoyPhoto, which accounts for the following three
key issues: 1. how to crawl and index photos, 2. how to
determine the qualities of each photo and 3. how to
display the search results in order to make the search process
enjoyable. For a given text based query, this system ranks
the photos based on certain combination of relevance of the
photo to this query (Issue 1) and the quality of the photo
(Issue 2), and finally displays them in an enjoyable manner
(Issue 3).
As for Issue 3, we devise the interface of the system
deliberately in order to smooth the users" process of enjoying
high-quality photos. Techniques, such as Fisheye and slides
show, are utilized in current system. Figure 5 shows the
interface. We will not talk more about this issue as it is not
an emphasis of this paper.
Figure 5: EnjoyPhoto: an enjoyable high-quality
photo search engine, where 26,477 records are
returned for the query fall in about 0.421 seconds
As for Issue 1, we extracted from a commercial search
engine a subset of photos coming from various photo forums
all over the world, and explicitly parsed the Web pages
containing these photos. The number of photos in the data
collection is about 2.5 million. After the parsing, each photo
was associated with its title, category, description, camera
setting, EXIF data 1
(when available for digital images),
location (when available in some photo forums), and many
kinds of ratings. All these metadata are generally precise
descriptions or annotations for the image content, which are
then indexed by general text-based search technologies [9,
18, 11]. In current system, the ranking function was
specifically tuned to emphasize title, categorization, and rating
information.
Issue 2 is essentially dealt with in the following sections
which derive the quality of photos by analyzing ratings
provided by various Web photo forums. Here we chose six photo
forums to study the ranking problem and denote them as
Web-A, Web-B, Web-C, Web-D, Web-E and Web-F.
3.2 Photo Score Normalization
Different score normalization methods are analyzed in detail in this section.
In this analysis, the zero scores, which occupy more than 30% of the total
number of photos for some Web forums, are not currently taken
into account. How to utilize these photos is left for future
exploration.
(Footnote 1: Digital cameras save JPEG (.jpg) files with EXIF (Exchangeable Image File) data. Camera settings and scene information are recorded by the camera into the image file. www.digicamhelp.com/what-is-exif/)
[Figure 6: Distributions of mean scores normalized to [0, 10]. Six panels, (a) Web-A through (f) Web-F, each plotting the total number of photos against the normalized score.]
In Figure 6, we plot the distributions of the mean score,
which is transformed to the fixed interval [0, 10]. The
distributions of the average scores of these Web forums look quite
different. The distributions in Figures 6(a), 6(b), and 6(e) look
like Gaussian distributions, while those in Figures 6(d) and
6(f) are dominated by the top score. The reason for these
eccentric distributions for Web-D and Web-F lies in their
coarse rating systems: Web-D and Web-F use 2- or
3-point rating scales, whereas the other Web forums use 7- or
14-point rating scales. Therefore, it would be problematic to
use these averaged scores directly. Furthermore, the average
score is very likely to be spammed if there are only a few
users rating a photo.
Figure 7 shows the total score normalization method by
maximal and minimal scores, which is one of our baseline
systems. All the total scores of a given Web forum are
normalized to [0, 100] according to the maximal and
minimal scores of the corresponding Web forum. We notice that the total
score distribution of Web-A in Figure 7(a) has two larger
tails than all the others. To show the shape of the
distributions more clearly, we only show the distributions on [0, 25]
in Figures 7(b), 7(c), 7(d), 7(e), and 7(f).
Figure 8 shows the Mode-90% Percentile normalization
method, where the modes of the six distributions are
normalized to 5 and the 90% percentiles to 8. We can see that
this normalization method makes the distributions of total
scores in different forums more consistent. The two proposed
algorithms are both based on these normalization methods.
3.3 Duplicate photo detection
Targeting computational efficiency, the Dedup
algorithm may lose some recall, but it achieves a high
precision rate. We also focus on finding precise hidden links
rather than all hidden links. Figure 9 shows some duplicate
detection examples. The results are shown in Table 1 and
verify that large numbers of duplicate photos exist between any
two Web forums, even under the strict condition for Dedup
in which we chose the first 29 bits as the hash code.
[Figure 7: Maxmin Normalization. Six panels, (a) Web-A through (f) Web-F, plotting the total number of photos against total scores normalized to [0, 100]; panels (b)-(f) are shown on [0, 25].]
[Figure 8: Mode-90% Percentile Normalization. Six panels, (a) Web-A through (f) Web-F, plotting the total number of photos against total scores normalized so that the mode maps to 5 and the 90% percentile to 8.]
Since there are only a few parameters to estimate in the proposed fusion
methods, the numbers of duplicate photos shown in Table 1 are
sufficient to determine these parameters. The last table
column lists the total number of photos in the corresponding
Web forums.
3.4 δ Measure
The parameters of the proposed linear and nonlinear
algorithms are calculated using the duplicate data shown in
Table 1, where the Web-C is chosen as the reference Web
forum since it shares the most duplicate photos with other
forums.
Tables 2 and 3 show the δ measure on the linear model and
the nonlinear model. As δ_kl is symmetric and δ_kk = 0, we only
show the upper triangular part. The NaN values in both
tables arise because no duplicate photos were detected by
the Dedup algorithm, as reported in Table 1.
Table 1: Number of duplicate photos between each pair of Web forums
      A      B       C       D      E       F      Scale
  A   0      316     1,386   178    302     0      130k
  B   316    0       14,708  909    8,023   348    675k
  C   1,386  14,708  0       1,508  19,271  1,083  1,003k
  D   178    909     1,508   0      1,084   21     155k
  E   302    8,023   19,271  1,084  0       98     448k
  F   0      348     1,083   21     98      0      122k
[Figure 9: Some results of duplicate photo detection]
Table 2: δ measure on the linear model.
         Web-B   Web-C   Web-D   Web-E   Web-F
  Web-A  0.0659  0.0911  0.0956  0.0928  NaN
  Web-B  -       0.0672  0.0578  0.0791  0.4618
  Web-C  -       -       0.0105  0.0070  0.2220
  Web-D  -       -       -       0.0566  0.0232
  Web-E  -       -       -       -       0.6525
The linear model guarantees that the δ measures related
to the reference community should be no less than 0
theoretically. This is indeed the case (see the underlined numbers
in Table 2). But this model cannot guarantee that the δ
measures between non-reference communities are also no
less than 0, as the normalization steps are based only on
duplicate photos between the reference community and a
non-reference community. The results show that all the numbers in
the δ measure are greater than 0 (see the non-underlined
numbers in Table 2), which indicates that this model is
likely to give optimal results.
On the contrary, the nonlinear model does not guarantee
that δ measures related to the reference community should
be no less than 0, as not all duplicate photos between the
two Web forums can be used when optimizing this model.
In fact, the duplicate photos that lie in different intervals
will not be used in this model. It is these specific duplicate
photos that make the δ measure negative. As a result, there
are both negative and positive entries in Table 3, but overall
the number of positive ones is greater than that of negative ones
(9:5), which indicates that the model may be better than the
normalization-only method (see the next subsection), which has an
all-zero δ measure, and worse than the linear model.
3.5 User Study
Because it is hard to find an objective criterion to evaluate
Table 3: δ measure on the nonlinear model.
Web-B Web-C Web-D Web-E Web-F
Web-A 0.0559 0.0054 -0.0185 -0.0054 NaN
Web-B - -0.0162 -0.0345 -0.0301 0.0466
Web-C - - 0.0136 0.0071 0.1264
Web-D - - - 0.0032 0.0143
Web-E - - - - 0.214
which ranking function is better, we chose to employ user
studies for subjective evaluations. Ten subjects were invited
to participate in the user study. They were recruited from
nearby universities. As search engines of both text search
and image search are familiar to university students, there
was no prerequisite criterion for choosing students.
We conducted user studies using Internet Explorer 6.0 on
Windows XP with 17-inch LCD monitors set at 1,280 pixels
by 1,024 pixels in 32-bit color. Data was recorded with
server logs and paper-based surveys after each task.
Figure 10: User study interface
We specifically device an interface for user study as shown
in Figure 10. For each pair of fusion methods, participants
were encouraged to try any query they wished. For those
without specific ideas, two combo boxes (category list and
query list) were listed on the bottom panel, where the top
1,000 image search queries from a commercial search engine
were provided. After a participant submitted a query, the
system randomly selected the left or right frame to display
each of the two ranking results. The participant were then
required to judge which ranking result was better of the two
ranking results, or whether the two ranking results were of
equal quality, and submit the judgment by choosing the
corresponding radio button and clicking the Submit button.
For example, in Figure 10, query sunset is submitted to
the system. Then, 79,092 photos were returned and ranked
by the Minmax fusion method in the left frame and linear
fusion method in the right frame. A participant then
compares the two ranking results (without knowing the ranking
methods) and submits his/her feedback by choosing answers
in the Your option.
Table 4: Results of user study
              Norm. Only   Manually    Linear
  Linear      29:13:10     14:22:15    -
  Nonlinear   29:15:9      12:27:12    6:4:45
Table 4 shows the experimental results, where Linear
denotes the linear fusion method, Nonlinear denotes the
non linear fusion method, Norm. Only means Maxmin
normalization method, Manually means the manually tuned
method. The three numbers in each item, say 29:13:10,
mean that 29 judgments prefer the linear fusion results, 10
judgments prefer the normalization only method, and 13
judgments consider these two methods as equivalent.
We conducted an ANOVA analysis and obtained the
following conclusions:
1. Both the linear and nonlinear methods are significantly
better than the Norm. Only method with respective
P-values 0.00165(< 0.05) and 0.00073(<< 0.05). This
result is consistent with the δ-measure evaluation
result. The Norm. Only method assumes that the top
10% photos in different forums are of the same
quality. However, this assumption does not stand in
general. For example, a top 10% photo in a top tier photo
forum is generally of higher quality than a top 10%
photo in a second-tier photo forum. This is similar
to the way that the top 10% of students at a top-tier
university and those at a second-tier university are generally
of different quality. Both linear and nonlinear fusion
methods acknowledge the existence of such differences
and aim at quantizing the differences. Therefore, they
perform better than the Norm. Only method.
2. The linear fusion method is significantly better than
the nonlinear one, with P-value 1.195 × 10^-10. This
result is rather surprising, as this more complicated
ranking method was expected to tune the ranking more
finely than the linear one. The main reason for this
result may be that it is difficult to find the best
intervals where the nonlinear tuning should be carried out,
and we simply chose the middle part of the Mode-90%
Percentile Normalization method. The
time-consuming and subjective evaluation method - user
studies - blocked us from tuning these parameters
extensively.
3. The proposed linear and nonlinear methods perform
almost the same with or slightly better than the
manually tuned method. Given that the linear/nonlinear
fusion methods are fully automatic approaches, they
are considered practical and efficient solutions when
more communities (e.g. dozens of communities) need
to be integrated.
4. CONCLUSIONS AND FUTURE WORK
In this paper, we studied the Web object-ranking
problem in cases that lack object relationships, where
traditional ranking algorithms are no longer valid, and took
high-quality photo search as the test bed for this
investigation. We have built a vertical high-quality photo search
engine and proposed score fusion methods that can
automatically integrate as many data sources (Web forums) as
possible. The proposed fusion methods leverage the hidden
links discovered by a duplicate photo detection algorithm, and
minimize the score differences of duplicate photos in different
forums. Both the intermediate results and the user
studies show that the proposed fusion methods are a practical
and efficient solution to Web object ranking under the
aforesaid conditions. Though the experiments were conducted
on high-quality photo ranking, the proposed algorithms are
also applicable to other kinds of Web objects including video
clips, poems, short stories, music, drawings, sculptures, and
so on.
The current system is far from perfect. In order to make
it more effective, a more delicate analysis of the
vertical domain (e.g., Web photo forums) is needed. The
following points, for example, may improve the search
results and will be our future work: 1. more subtle
analysis and utilization of different kinds of ratings (e.g.,
novelty ratings, aesthetic ratings); 2. differentiating various
communities, which may have different interests,
preferences, or even distinct cultural understandings; 3.
incorporating more useful information, including photographers' and
reviewers' information, to model the photos in a
heterogeneous data space instead of the current homogeneous one.
We will further utilize collaborative filtering to recommend
relevant high-quality photos to browsers.
One open problem is whether we can find an objective and
efficient criterion for evaluating the ranking results, instead
of employing subjective and inefficient user studies, which
blocked us from trying more ranking algorithms and tuning
parameters in one algorithm.
5. ACKNOWLEDGMENTS
We thank Bin Wang and Zhi Wei Li for providing Dedup
codes to detect duplicate photos; Zhen Li for helping us
design the interface of EnjoyPhoto; Ming Jing Li, Longbin
Chen, Changhu Wang, Yuanhao Chen, and Li Zhuang etc.
for useful discussions. Special thanks go to Dwight Daniels
for helping us revise the language of this paper.
386 | duplicate photo detection algorithm;algorithm;image search query;score fusion method;nonlinear fusion method;multiple web forum;rank photo;domain specific knowledge;web object;rank;high-quality photo search;ranking function;image search;web object-ranking problem;web object-ranking;vertical search |
train_H-73 | Unified Utility Maximization Framework for Resource Selection | This paper presents a unified utility framework for resource selection of distributed text information retrieval. This new framework shows an efficient and effective way to infer the probabilities of relevance of all the documents across the text databases. With the estimated relevance information, resource selection can be made by explicitly optimizing the goals of different applications. Specifically, when used for database recommendation, the selection is optimized for the goal of highrecall (include as many relevant documents as possible in the selected databases); when used for distributed document retrieval, the selection targets the high-precision goal (high precision in the final merged list of documents). This new model provides a more solid framework for distributed information retrieval. Empirical studies show that it is at least as effective as other state-of-the-art algorithms. | 1. INTRODUCTION
Conventional search engines such as Google or AltaVista use
an ad-hoc information retrieval solution, assuming that all the
searchable documents can be copied into a single centralized
database for indexing. Distributed information
retrieval, also known as federated search [1,4,7,11,14,22] is
different from ad-hoc information retrieval as it addresses the
cases when documents cannot be acquired and stored in a single
database. For example, Hidden Web contents (also called
invisible or deep Web contents) are information on the Web
that cannot be accessed by the conventional search engines.
Hidden web contents have been estimated to be 2-50 [19] times
larger than the contents that can be searched by conventional
search engines. Therefore, it is very important to search this type
of valuable information.
The architecture of distributed search solution is highly
influenced by different environmental characteristics. In a small
local area network such as small company environments, the
information providers may cooperate to provide corpus statistics
or use the same type of search engines. Early distributed
information retrieval research focused on this type of
cooperative environments [1,8]. On the other hand, in a wide
area network such as a very large corporate environment or
the Web, there are many types of search engines, and it is difficult
to assume that all the information providers can cooperate as
required. Even if they are willing to cooperate in these
environments, it may be hard to enforce a single solution for all
the information providers or to detect whether information
sources provide correct information as required.
Many applications fall into the latter type of uncooperative
environments such as the Mind project [16] which integrates
non-cooperating digital libraries or the QProber system [9]
which supports browsing and searching of uncooperative hidden
Web databases. In this paper, we focus mainly on uncooperative
environments that contain multiple types of independent search
engines.
There are three important sub-problems in distributed
information retrieval. First, information about the contents of
each individual database must be acquired (resource
representation) [1,8,21]. Second, given a query, a set of
resources must be selected to do the search (resource selection)
[5,7,21]. Third, the results retrieved from all the selected
resources have to be merged into a single final list before it can
be presented to the end user (retrieval and results merging)
[1,5,20,22].
Many types of solutions exist for distributed information
retrieval. Invisible-web.net1
provides guided browsing of hidden
Web databases by collecting the resource descriptions of these
databases and building hierarchies of classes that group them by
similar topics. A database recommendation system goes a step
further than a browsing system like Invisible-web.net by
recommending most relevant information sources to users"
queries. It is composed of the resource description and the
resource selection components. This solution is useful when the
users want to browse the selected databases by themselves
instead of asking the system to retrieve relevant documents
automatically. Distributed document retrieval is a more
sophisticated task. It selects relevant information sources for
users" queries as the database recommendation system does.
Furthermore, users" queries are forwarded to the corresponding
selected databases and the returned individual ranked lists are
merged into a single list to present to the users.
The goal of a database recommendation system is to select a
small set of resources that contain as many relevant documents
as possible, which we call a high-recall goal. On the other side,
the effectiveness of distributed document retrieval is often
measured by the Precision of the final merged document result
list, which we call a high-precision goal. Prior research
indicated that these two goals are related but not identical [4,21].
However, most previous solutions simply use effective resource
selection algorithm of database recommendation system for
distributed document retrieval system or solve the inconsistency
with heuristic methods [1,4,21].
This paper presents a unified utility maximization framework to
integrate the resource selection problem of both database
recommendation and distributed document retrieval together by
treating them as different optimization goals.
First, a centralized sample database is built by randomly
sampling a small number of documents from each database with
query-based sampling [1]; database size statistics are also
estimated [21]. A logistic transformation model is learned
offline with a small number of training queries to map the
centralized document scores in the centralized sample database
to the corresponding probabilities of relevance.
Second, after a new query is submitted, the query can be used to
search the centralized sample database which produces a score
for each sampled document. The probability of relevance for
each document in the centralized sample database can be
estimated by applying the logistic model to each document"s
score. Then, the probabilities of relevance of all the (mostly
unseen) documents among the available databases can be
estimated using the probabilities of relevance of the documents
in the centralized sample database and the database size
estimates.
For the task of resource selection for a database
recommendation system, the databases can be ranked by the
expected number of relevant documents to meet the high-recall
goal. For resource selection for a distributed document retrieval
system, databases containing a small number of documents with
large probabilities of relevance are favored over databases
containing many documents with small probabilities of
relevance. This selection criterion meets the high-precision goal
of distributed document retrieval application. Furthermore, the
Semi-supervised learning (SSL) [20,22] algorithm is applied to
merge the returned documents into a final ranked list.
The unified utility framework makes very few assumptions and
works in uncooperative environments. Two key features make it
a more solid model for distributed information retrieval: i) It
formalizes the resource selection problems of different
applications as various utility functions, and optimizes the utility
functions to achieve the optimal results accordingly; and ii) It
shows an effective and efficient way to estimate the probabilities
of relevance of all documents across databases. Specifically, the
framework builds logistic models on the centralized sample
database to transform centralized retrieval scores to the
corresponding probabilities of relevance and uses the centralized
sample database as the bridge between individual databases and
the logistic model. The human effort (relevance judgment)
required to train the single centralized logistic model does not
scale with the number of databases. This is a large advantage
over previous research, which required the amount of human
effort to be linear with the number of databases [7,15].
The unified utility framework is not only more theoretically
solid but also very effective. Empirical studies show the new
model to be at least as accurate as the state-of-the-art algorithms
in a variety of configurations.
The next section discusses related work. Section 3 describes the
new unified utility maximization model. Section 4 explains our
experimental methodology. Sections 5 and 6 present our
experimental results for resource selection and document
retrieval. Section 7 concludes.
2. PRIOR RESEARCH
There has been considerable research on all the sub-problems of
distributed information retrieval. We survey the most related
work in this section.
The first problem of distributed information retrieval is resource
representation. The STARTS protocol is one solution for
acquiring resource descriptions in cooperative environments [8].
However, in uncooperative environments, even if the databases are
willing to share their information, it is not easy to judge whether
the information they provide is accurate. Furthermore, it
is not easy to coordinate the databases to provide resource
representations that are compatible with each other. Thus, in
uncooperative environments, one common choice is query-based
sampling, which randomly generates and sends queries to
individual search engines and retrieves some documents to build
the descriptions. As the sampled documents are selected by
random queries, query-based sampling is not easily fooled by
any adversarial spammer that is interested in attracting more traffic.
Experiments have shown that rather accurate resource
descriptions can be built by sending about 80 queries and
downloading about 300 documents [1].
Many resource selection algorithms such as gGlOSS/vGlOSS
[8] and CORI [1] have been proposed in the last decade. The
CORI algorithm represents each database by its terms, the
document frequencies and a small number of corpus statistics
(details in [1]). As prior research on different datasets has shown
the CORI algorithm to be the most stable and effective of the
three algorithms [1,17,18], we use it as a baseline algorithm in
this work. The relevant document distribution estimation
(ReDDE [21]) resource selection algorithm is a recent algorithm
that tries to estimate the distribution of relevant documents
across the available databases and ranks the databases
accordingly. Although the ReDDE algorithm has been shown to
be effective, it relies on heuristic constants that are set
empirically [21].
The last step of the document retrieval sub-problem is results
merging, which is the process of transforming database-specific
document scores into comparable database-independent
document scores. The semi-supervised learning (SSL) [20,22]
results merging algorithm uses the documents acquired by
query-based sampling as training data and linear regression to learn the
database-specific, query-specific merging models. These linear
models are used to convert the database-specific document
scores into the approximated centralized document scores. The
SSL algorithm has been shown to be effective [22]. It serves as
an important component of our unified utility maximization
framework (Section 3).
In order to achieve accurate document retrieval results, many
previous methods simply use resource selection algorithms that
are effective for database recommendation systems. But as
pointed out above, a good resource selection algorithm
optimized for high-recall may not work well for document
retrieval, which targets the high-precision goal. This type of
inconsistency has been observed in previous research [4,21].
The research in [21] tried to solve the problem with a heuristic
method.
The research most similar to what we propose here is the
decision-theoretic framework (DTF) [7,15]. This framework
computes a selection that minimizes the overall costs (e.g.,
retrieval quality, time) of document retrieval system and several
methods [15] have been proposed to estimate the retrieval
quality. However, two points distinguish our research from the
DTF model. First, the DTF is a framework designed specifically
for document retrieval, but our new model integrates two
distinct applications with different requirements (database
recommendation and distributed document retrieval) into the
same unified framework. Second, the DTF builds a model for
each database to calculate the probabilities of relevance. This
requires human relevance judgments for the results retrieved
from each database. In contrast, our approach only builds one
logistic model for the centralized sample database. The
centralized sample database can serve as a bridge to connect the
individual databases with the centralized logistic model, thus the
probabilities of relevance of documents in different databases
can be estimated. This strategy can save large amount of human
judgment effort and is a big advantage of the unified utility
maximization framework over the DTF especially when there
are a large number of databases.
3. UNIFIED UTILITY MAXIMIZATION
FRAMEWORK
The Unified Utility Maximization (UUM) framework is based
on estimating the probabilities of relevance of the (mostly
unseen) documents available in the distributed search
environment. In this section we describe how the probabilities of
relevance are estimated and how they are used by the Unified
Utility Maximization model. We also describe how the model
can be optimized for the high-recall goal of a database
recommendation system and the high-precision goal of a
distributed document retrieval system.
3.1 Estimating Probabilities of Relevance
As pointed out above, the purpose of resource selection is
high-recall and the purpose of document retrieval is high-precision. In
order to meet these diverse goals, the key issue is to estimate the
probabilities of relevance of the documents in various databases.
This is a difficult problem because we can only observe a
sample of the contents of each database using query-based
sampling. Our strategy is to make full use of all the available
information to calculate the probability estimates.
3.1.1 Learning Probabilities of Relevance
In the resource description step, the centralized sample database
is built by query-based sampling and the database sizes are
estimated using the sample-resample method [21]. At the same
time, an effective retrieval algorithm (Inquery [2]) is applied on
the centralized sample database with a small number (e.g., 50)
of training queries. For each training query, the CORI resource
selection algorithm [1] is applied to select some number
(e.g., 10) of databases and retrieve 50 document ids from each
database. The SSL results merging algorithm [20,22] is used to
merge the results. Then, we can download the top 50 documents
in the final merged list and calculate their corresponding
centralized scores using Inquery and the corpus statistics of the
centralized sample database. The centralized scores are further
normalized (divided by the maximum centralized score for each
query), as this method has been suggested to improve estimation
accuracy in previous research [15]. Human judgment is acquired
for those documents and a logistic model is built to transform
the normalized centralized document scores to probabilities of
relevance as follows:
$$R(d) = P(rel \mid d) = \frac{\exp(a_c + b_c \, \bar{S}_c(d))}{1 + \exp(a_c + b_c \, \bar{S}_c(d))} \qquad (1)$$
where $\bar{S}_c(d)$ is the normalized centralized document score and
$a_c$ and $b_c$ are the two parameters of the logistic model. These two
parameters are estimated by maximizing the probabilities of
relevance of the training queries. The logistic model provides us
the tool to calculate the probabilities of relevance from
centralized document scores.
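As an illustration of how Equation 1 could be used in practice, the sketch below fits the two parameters by maximum likelihood on (normalized score, relevance judgment) pairs. This is only a minimal sketch, not the authors' implementation; the training pairs and the use of scikit-learn's LogisticRegression are assumptions.

```python
# Minimal sketch (not the paper's code): fit the logistic model of Equation 1
# that maps normalized centralized document scores to probabilities of relevance.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: normalized centralized scores of judged documents
# (one row per document) and their binary relevance judgments.
scores = np.array([[0.95], [0.80], [0.72], [0.55], [0.40], [0.31], [0.15]])
labels = np.array([1, 1, 1, 0, 1, 0, 0])

model = LogisticRegression()          # learns R(d) = sigmoid(a_c + b_c * S_c(d))
model.fit(scores, labels)
a_c, b_c = model.intercept_[0], model.coef_[0][0]

def prob_of_relevance(norm_score):
    """Equation 1: probability of relevance from a normalized centralized score."""
    return 1.0 / (1.0 + np.exp(-(a_c + b_c * norm_score)))

print(prob_of_relevance(0.9))
```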
3.1.2 Estimating Centralized Document Scores
When the user submits a new query, the centralized document
scores of the documents in the centralized sample database are
calculated. However, in order to calculate the probabilities of
relevance, we need to estimate centralized document scores for
all documents across the databases instead of only the sampled
documents. This goal is accomplished using: the centralized
scores of the documents in the centralized sample database, and
the database size statistics.
We define the database scale factor for the ith
database as the
ratio of the estimated database size and the number of
documents sampled from this database as follows:
$$SF_{db_i} = \frac{\hat{N}_{db_i}}{N_{db_i\_samp}} \qquad (2)$$
where $\hat{N}_{db_i}$ is the estimated database size and $N_{db_i\_samp}$ is the
number of documents from the $i$th database in the centralized
sample database. The intuition behind the database scale factor
is that, for a database whose scale factor is 50, if one document
from this database in the centralized sample database has a
centralized document score of 0.5, we may guess that there are
about 50 documents in that database which have scores of about
0.5. Actually, we can apply a finer non-parametric linear
interpolation method to estimate the centralized document score
curve for each database. Formally, we rank all the sampled
documents from the $i$th database by their centralized document
scores to get the sampled centralized document score list
$\{S_c(ds_{i1}), S_c(ds_{i2}), S_c(ds_{i3}), \ldots\}$ for the $i$th database; we assume
that if we could calculate the centralized document scores for all
the documents in this database and get the complete centralized
document score list, the top document in the sampled list would
have rank $SF_{db_i}/2$, the second document in the sampled list
would have rank $3SF_{db_i}/2$, and so on. Therefore, the data points of
the sampled documents in the complete list are: $\{(SF_{db_i}/2, S_c(ds_{i1})),
(3SF_{db_i}/2, S_c(ds_{i2})), (5SF_{db_i}/2, S_c(ds_{i3})), \ldots\}$. Piecewise linear
interpolation is applied to estimate the centralized document
score curve, as illustrated in Figure 1. The complete centralized
document score list can be estimated by calculating the values at
different ranks on the centralized document score curve: $\hat{S}_c(d_{ij}), j \in [1, \hat{N}_{db_i}]$.
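A minimal sketch of this estimation step is shown below, assuming the sampled scores and the database size estimate are already available; the function name and the use of numpy.interp are illustrative choices, not part of the paper.

```python
# Sketch: estimate the complete centralized document score list for one database
# from its sampled scores, using the scale factor of Equation 2 and piecewise
# linear interpolation over the data points {(SF/2, S1), (3*SF/2, S2), ...}.
import numpy as np

def estimate_score_curve(sampled_scores, est_db_size, n_sampled):
    sampled_scores = np.sort(np.asarray(sampled_scores, dtype=float))[::-1]
    scale_factor = est_db_size / float(n_sampled)             # Equation 2
    # Assumed ranks of the sampled documents within the complete ranking.
    sample_ranks = scale_factor * (np.arange(len(sampled_scores)) + 0.5)
    all_ranks = np.arange(1, int(est_db_size) + 1)
    # sample_ranks is increasing, so np.interp can interpolate on ranks directly;
    # ranks below/above the sampled range take the first/last sampled score.
    return np.interp(all_ranks, sample_ranks, sampled_scores)

# Toy example: 4 sampled documents from a database estimated to hold 200 docs.
curve = estimate_score_curve([0.9, 0.7, 0.4, 0.2], est_db_size=200, n_sampled=4)
print(curve[:5], curve[-1])
```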
It can be seen from Figure 1 that more sample data points
produce more accurate estimates of the centralized document
score curves. However, for databases with large database scale
ratios, this kind of linear interpolation may be rather inaccurate,
especially for the top-ranked (e.g., $[1, SF_{db_i}/2]$) documents.
Therefore, an alternative solution is proposed to estimate the
centralized document scores of the top ranked documents for
databases with large scale ratios (e.g., larger than 100).
Specifically, a logistic model is built for each of these databases.
The logistic model is used to estimate the centralized document
score of the top-ranked document in the corresponding database by
using the two sampled documents from that database with the
highest centralized scores.
$$\hat{S}_c(d_{i1}) = \frac{\exp(\alpha_{i0} + \alpha_{i1} S_c(ds_{i1}) + \alpha_{i2} S_c(ds_{i2}))}{1 + \exp(\alpha_{i0} + \alpha_{i1} S_c(ds_{i1}) + \alpha_{i2} S_c(ds_{i2}))} \qquad (3)$$
$\alpha_{i0}$, $\alpha_{i1}$ and $\alpha_{i2}$ are the parameters of the logistic model. For
each training query, the top retrieved document of each database
is downloaded and the corresponding centralized document
score is calculated. Together with the scores of the top two
sampled documents, these parameters can be estimated.
After the centralized score of the top document is estimated, an
exponential function is fitted for the top part ($[1, SF_{db_i}/2]$) of the
centralized document score curve as:
$$\hat{S}_c(d_{ij}) = \exp(\beta_{i0} + \beta_{i1} \, j), \quad j \in [1, SF_{db_i}/2] \qquad (4)$$
$$\beta_{i0} = \log(\hat{S}_c(d_{i1})) - \beta_{i1} \qquad (5)$$
$$\beta_{i1} = \frac{\log(S_c(ds_{i1})) - \log(\hat{S}_c(d_{i1}))}{SF_{db_i}/2 - 1} \qquad (6)$$
The two parameters $\beta_{i0}$ and $\beta_{i1}$ are fitted to make sure the
exponential function passes through the two points $(1, \hat{S}_c(d_{i1}))$
and $(SF_{db_i}/2, S_c(ds_{i1}))$. The exponential function is only used to
adjust the top part of the centralized document score curve; the
lower part of the curve is still fitted with the linear
interpolation method described above. Adjusting the top-ranked
documents by fitting an exponential function has been
shown empirically to produce more accurate results.
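The adjustment of Equations 4-6 can be sketched as follows (illustrative only; s_top_est stands for the estimated score $\hat{S}_c(d_{i1})$ from Equation 3 and s_best_sampled for $S_c(ds_{i1})$):

```python
# Sketch of Equations 4-6: fit an exponential through (1, s_top_est) and
# (SF/2, s_best_sampled), and use it for ranks 1 .. SF/2 of the score curve.
import numpy as np

def adjust_top_of_curve(curve, s_top_est, s_best_sampled, scale_factor):
    half = int(scale_factor / 2)
    beta1 = (np.log(s_best_sampled) - np.log(s_top_est)) / (half - 1)   # Eq. 6
    beta0 = np.log(s_top_est) - beta1                                    # Eq. 5
    ranks = np.arange(1, half + 1)
    curve = curve.copy()
    curve[:half] = np.exp(beta0 + beta1 * ranks)                         # Eq. 4
    return curve

# Toy usage with an assumed estimated curve of length 200.
curve = np.linspace(0.9, 0.1, 200)
adjusted = adjust_top_of_curve(curve, s_top_est=0.97, s_best_sampled=0.9,
                               scale_factor=50)
print(adjusted[0], adjusted[24])   # ~0.97 at rank 1, ~0.9 at rank SF/2
```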
From the centralized document score curves, we can estimate
the complete centralized document score lists accordingly for all
the available databases. After the estimated centralized
document scores are normalized, the complete lists of
probabilities of relevance can be constructed out of the complete
centralized document score lists by Equation 1. Formally, for the
$i$th database, the complete list of probabilities of relevance is
$\hat{R}(d_{ij}), j \in [1, \hat{N}_{db_i}]$.
3.2 The Unified Utility Maximization Model
In this section, we formally define the new unified utility
maximization model, which optimizes the resource selection
problems for two goals of high-recall (database
recommendation) and high-precision (distributed document
retrieval) in the same framework.
In the task of database recommendation, the system needs to
decide how to rank databases. In the task of document retrieval,
the system not only needs to select the databases but also needs
to decide how many documents to retrieve from each selected
database. We generalize the database recommendation selection
process, which implicitly recommends all documents in every
selected database, as a special case of the selection decision for
the document retrieval task. Formally, we denote $d_i$ as the
number of documents we would like to retrieve from the $i$th
database and $d = \{d_1, d_2, \ldots\}$ as a selection action for all the
databases.
The database selection decision is made based on the complete
lists of probabilities of relevance for all the databases. The
complete lists of probabilities of relevance are inferred from all
the available information, specifically $R_s$, which stands for the
resource descriptions acquired by query-based sampling and the
database size estimates acquired by sample-resample; $S_c$ stands
for the centralized document scores of the documents in the
centralized sample database.
If the method of estimating centralized document scores and
probabilities of relevance in Section 3.1 is acceptable, then the
most probable complete lists of probabilities of relevance can be
derived and we denote them as $\theta^* = \{(\hat{R}(d_{1j}), j \in [1, \hat{N}_{db_1}]),
(\hat{R}(d_{2j}), j \in [1, \hat{N}_{db_2}]), \ldots\}$. A random vector $\theta$ denotes an
arbitrary set of complete lists of probabilities of relevance, and
$P(\theta \mid R_s, S_c)$ is the probability of generating this set of lists.
Finally, to each selection action $d$ and each set of complete lists of
probabilities of relevance $\theta$, we associate a utility function
$U(d, \theta)$, which indicates the benefit of making the selection $d$
when the true complete lists of probabilities of relevance are $\theta$.

Figure 1. Linear interpolation construction of the complete
centralized document score list (database scale factor is 50).
Therefore, the selection decision defined by the Bayesian
framework is:
$$d^* = \arg\max_d \int_\theta U(d, \theta) \, P(\theta \mid R_s, S_c) \, d\theta \qquad (7)$$
One common approach to simplify the computation in the
Bayesian framework is to only calculate the utility function at
the most probable parameter values instead of calculating the
whole expectation. In other words, we only need to calculate
$U(d, \theta^*)$, and Equation 7 is simplified as follows:
$$d^* = \arg\max_d U(d, \theta^*) \qquad (8)$$
This equation serves as the basic model for both the database
recommendation system and the document retrieval system.
3.3 Resource Selection for High-Recall
High-recall is the goal of the resource selection algorithm in
federated search tasks such as database recommendation. The
goal is to select a small set of resources (e.g., less than $N_{sdb}$
databases) that contain as many relevant documents as possible,
which can be formally defined as:
$$U(d, \theta^*) = \sum_i I(d_i) \sum_{j=1}^{\hat{N}_{db_i}} \hat{R}(d_{ij}) \qquad (9)$$
$I(d_i)$ is the indicator function, which is 1 when the $i$th database is
selected and 0 otherwise. Plugging this equation into the basic model
in Equation 8 and adding the constraint on the number of selected
databases, we obtain the following:
$$d^* = \arg\max_d \sum_i I(d_i) \sum_{j=1}^{\hat{N}_{db_i}} \hat{R}(d_{ij}), \quad \text{subject to: } \sum_i I(d_i) = N_{sdb} \qquad (10)$$
The solution of this optimization problem is very simple. We
can calculate the expected number of relevant documents for
each database as follows:
$$\hat{N}^{Rd}_{i} = \sum_{j=1}^{\hat{N}_{db_i}} \hat{R}(d_{ij}) \qquad (11)$$
The $N_{sdb}$ databases with the largest expected number of relevant
documents can be selected to meet the high-recall goal. We call
this the UUM/HR algorithm (Unified Utility Maximization for
High-Recall).
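A minimal sketch of the UUM/HR selection rule, assuming the complete lists of probabilities of relevance have already been estimated as described in Section 3.1 (the data structure and names are assumptions, not the authors' code):

```python
# Sketch of UUM/HR (Equation 11): rank databases by their expected number of
# relevant documents and keep the top N_sdb.
def uum_hr_select(relevance_lists, n_sdb):
    """relevance_lists: dict mapping database id -> list of estimated R(d_ij)."""
    expected_relevant = {db: sum(probs) for db, probs in relevance_lists.items()}
    ranked = sorted(expected_relevant, key=expected_relevant.get, reverse=True)
    return ranked[:n_sdb]

# Toy usage with made-up probability lists for three databases.
lists = {"db1": [0.9, 0.5, 0.1], "db2": [0.4] * 20, "db3": [0.8, 0.7]}
print(uum_hr_select(lists, n_sdb=2))   # db2 ranks first: largest total expected relevance
```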
3.4 Resource Selection for High-Precision
High-precision is the goal of the resource selection algorithm in
federated search tasks such as distributed document retrieval. It
is measured by the Precision at the top part of the final merged
document list. This high-precision criterion is realized by the
following utility function, which measures the Precision of
retrieved documents from the selected databases.
$$U(d, \theta^*) = \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij}) \qquad (12)$$
Note that the key difference between Equation 12 and Equation
9 is that Equation 9 sums up the probabilities of relevance of all
the documents in a database, while Equation 12 only considers a
much smaller part of the ranking. Specifically, we can calculate
the optimal selection decision by:
$$d^* = \arg\max_d \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij}) \qquad (13)$$
Different kinds of constraints caused by different characteristics
of the document retrieval tasks can be associated with the above
optimization problem. The most common one is to select a fixed
number ($N_{sdb}$) of databases and retrieve a fixed number ($N_{rdoc}$) of
documents from each selected database, formally defined as:
$$d^* = \arg\max_d \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij}), \quad \text{subject to: } \sum_i I(d_i) = N_{sdb}, \;\; d_i = N_{rdoc} \text{ if } d_i \neq 0 \qquad (14)$$
This optimization problem can be solved easily by calculating
the number of expected relevant documents in the top part of
each database's complete list of probabilities of relevance:
$$\hat{N}^{Top\_Rd}_{i} = \sum_{j=1}^{N_{rdoc}} \hat{R}(d_{ij}) \qquad (15)$$
Then the databases can be ranked by these values and selected.
We call this the UUM/HP-FL algorithm (Unified Utility
Maximization for High-Precision with Fixed Length document
rankings from each selected database).
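A matching sketch for UUM/HP-FL (again an illustration under the same assumed inputs as the UUM/HR sketch): each database is scored only by its top $N_{rdoc}$ estimated probabilities of relevance.

```python
# Sketch of UUM/HP-FL (Equation 15): score each database by the expected number
# of relevant documents among its top N_rdoc estimated positions only.
def uum_hp_fl_select(relevance_lists, n_sdb, n_rdoc):
    top_expected = {db: sum(probs[:n_rdoc]) for db, probs in relevance_lists.items()}
    ranked = sorted(top_expected, key=top_expected.get, reverse=True)
    # Retrieve exactly n_rdoc documents from each of the n_sdb selected databases.
    return {db: n_rdoc for db in ranked[:n_sdb]}

lists = {"db1": [0.9, 0.8, 0.1], "db2": [0.4] * 20, "db3": [0.85, 0.7]}
print(uum_hp_fl_select(lists, n_sdb=2, n_rdoc=2))
# db1 and db3 are preferred here: their top documents look more relevant,
# even though db2 has more expected relevant documents overall.
```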
A more complex situation is to vary the number of retrieved
documents from each selected database. More specifically, we
allow different selected databases to return different numbers of
documents. For simplification, the result list lengths are required
to be multiples of a baseline number 10. (This value can also be
varied, but for simplification it is set to 10 in this paper.) This
restriction is set to simulate the behavior of commercial search
engines on the Web. (Search engines such as Google and
AltaVista return only 10 or 20 document ids for every result
page.) This procedure saves the computation time of calculating
optimal database selection by allowing the step of dynamic
programming to be 10 instead of 1 (more detail is discussed
later). For further simplification, we restrict the selection to at most
100 documents from each database ($d_i \le 100$). Then, the
selection optimization problem is formalized as follows:
$$d^* = \arg\max_d \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij}), \quad \text{subject to: } \sum_i I(d_i) = N_{sdb}, \;\; \sum_i d_i = N_{Total\_rdoc}, \;\; d_i = 10k, \; k \in [0, 1, 2, \ldots, 10] \qquad (16)$$
$N_{Total\_rdoc}$ is the total number of documents to be retrieved.
Unfortunately, there is no simple solution for this optimization
problem as there is for Equations 10 and 14. However, a
dynamic programming algorithm can be applied to calculate the
optimal solution. The basic steps of this dynamic programming
method are described in Figure 2. As this algorithm allows
retrieving result lists of varying lengths from each selected
database, it is called the UUM/HP-VL algorithm.
After the selection decisions are made, the selected databases are
searched and the corresponding document ids are retrieved from
each database. The final step of document retrieval is to merge
the returned results into a single ranked list with the
semi-supervised learning algorithm. It was pointed out before that the
SSL algorithm maps the database-specific scores into the
centralized document scores and builds the final ranked list
accordingly, which is consistent with all our selection
procedures where documents with higher probabilities of
relevance (thus higher centralized document scores) are selected.
4. EXPERIMENTAL METHODOLOGY
4.1 Testbeds
It is desirable to evaluate distributed information retrieval
algorithms with testbeds that closely simulate the real world
applications.
The TREC Web collections WT2g or WT10g [4,13] provide a
way to partition documents by different Web servers. In this
way, a large number (O(1000)) of databases with rather diverse
contents could be created, which may make this testbed a good
candidate to simulate the operational environments such as open
domain hidden Web. However, two weaknesses of this testbed are:
i) each database contains only a small number of documents (259
documents on average for WT2g) [4]; and ii) the contents of
WT2g or WT10g are arbitrarily crawled from the Web. It is not
likely for a hidden Web database to provide personal homepages
or web pages indicating that the pages are under construction
and there is no useful information at all. These types of web
pages are contained in the WT2g/WT10g datasets. Therefore,
the noisy Web data is not similar to that of high-quality
hidden Web database contents, which are usually organized by
domain experts.
Another choice is the TREC news/government data [1,15,17,
18,21]. TREC news/government data is concentrated on
relatively narrow topics. Compared with TREC Web data: i) The
news/government documents are much more similar to the
contents provided by a topic-oriented database than an arbitrary
web page, ii) A database in this testbed is larger than that of
TREC Web data. On average, a database contains thousands of
documents, which is more realistic than a database of TREC
Web data with about 250 documents. As the contents and sizes
of the databases in the TREC news/government testbed are more
similar to those of a topic-oriented database, it is a good
candidate to simulate the distributed information retrieval
environments of large organizations (companies) or
domain-specific hidden Web sites, such as West, which provides access to
legal, financial and news text databases [3]. As most current
distributed information retrieval systems are developed for the
environments of large organizations (companies) or
domain-specific hidden Web rather than the open-domain hidden Web,
the TREC news/government testbed was chosen in this work.
The trec123-100col-bysource testbed is one of the most frequently
used TREC news/government testbeds [1,15,17,21] and was chosen in this
work. Three testbeds in [21] with skewed database size
distributions and different types of relevant document
distributions were also used to give more thorough simulation
for real environments.
Trec123-100col-bysource: 100 databases were created from
TREC CDs 1, 2 and 3. They were organized by source and
publication date [1]. The sizes of the databases are not skewed.
Details are in Table 1.
Three testbeds built in [21] were based on the
trec123-100col-bysource testbed. Each testbed contains many small databases
and two large databases created by merging about 10-20 small
databases together.
Input: Complete lists of probabilities of relevance for all
the |DB| databases.
Output: Optimal selection solution for Equation 16.
i) Create the three-dimensional array:
Sel (1..|DB|, 1..NTotal_rdoc/10, 1..Nsdb)
Each Sel(x, y, z) is associated with a selection
decision $d_{xyz}$, which represents the best selection
decision under the condition that only databases number 1
to number x are considered for selection, y*10
documents in total will be retrieved, and only z databases are
selected out of the x database candidates.
Sel(x, y, z) is the corresponding utility value of
choosing the best selection.
ii) Initialize Sel (1, 1..NTotal_rdoc/10, 1..Nsdb) with only the
estimated relevance information of the 1st
database.
iii) Iterate the current database candidate i from 2 to |DB|
For each entry Sel (i, y, z):
Find $k^*$ such that:
$$k^* = \arg\max_k \Big( Sel(i-1, y-k, z-1) + \sum_{j \le 10k} \hat{R}(d_{ij}) \Big), \quad \text{subject to: } 1 \le k \le \min(y, 10)$$
If $Sel(i-1, y-k^*, z-1) + \sum_{j \le 10k^*} \hat{R}(d_{ij}) > Sel(i-1, y, z)$,
this means that we should retrieve $10k^*$ documents
from the $i$th database; otherwise we should not select this
database and the previous best solution Sel(i-1, y, z)
should be kept.
Then set the value of $d_{iyz}$ and Sel(i, y, z) accordingly.
iv) The best selection solution is given by $d_{|DB|, N_{Total\_rdoc}/10, N_{sdb}}$
and the corresponding utility value is Sel(|DB|,
NTotal_rdoc/10, Nsdb).
Figure 2. The dynamic programming optimization
procedure for Equation 16.
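A sketch of this dynamic program is given below; it is an illustration of Figure 2 under assumed inputs (a dictionary of estimated relevance lists), not the authors' implementation.

```python
# Sketch of the UUM/HP-VL dynamic program of Figure 2 / Equation 16: select
# exactly n_sdb databases, retrieve total_docs documents in total, with each
# selected database returning a multiple of 10 documents (at most 100).
NEG = float("-inf")

def uum_hp_vl_select(relevance_lists, n_sdb, total_docs):
    dbs = list(relevance_lists)
    # prefix[i][k] = expected relevant docs in the top 10*k of database i.
    prefix = []
    for db in dbs:
        probs = list(relevance_lists[db]) + [0.0] * 100   # pad short databases
        prefix.append([sum(probs[:10 * k]) for k in range(11)])

    y_max, n = total_docs // 10, len(dbs)
    # sel[i][y][z]: best utility using the first i databases, y*10 docs, z selected.
    sel = [[[NEG] * (n_sdb + 1) for _ in range(y_max + 1)] for _ in range(n + 1)]
    choice = [[[0] * (n_sdb + 1) for _ in range(y_max + 1)] for _ in range(n + 1)]
    sel[0][0][0] = 0.0
    for i in range(1, n + 1):
        for y in range(y_max + 1):
            for z in range(n_sdb + 1):
                best, best_k = sel[i - 1][y][z], 0          # skip database i
                for k in range(1, min(y, 10) + 1):          # take 10*k documents
                    if z >= 1 and sel[i - 1][y - k][z - 1] != NEG:
                        cand = sel[i - 1][y - k][z - 1] + prefix[i - 1][k]
                        if cand > best:
                            best, best_k = cand, k
                sel[i][y][z], choice[i][y][z] = best, best_k

    # Backtrack the optimal selection d*.
    plan, y, z = {}, y_max, n_sdb
    for i in range(n, 0, -1):
        k = choice[i][y][z]
        if k > 0:
            plan[dbs[i - 1]] = 10 * k
            y, z = y - k, z - 1
    return plan, sel[n][y_max][n_sdb]

# Toy usage: db1 is front-loaded with relevant documents, db2 is uniformly good.
lists = {"db1": [0.9] * 10 + [0.2] * 90, "db2": [0.5] * 100, "db3": [0.3] * 100}
print(uum_hp_vl_select(lists, n_sdb=2, total_docs=60))
```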
Table 1: Testbed statistics.
Testbed   Size (GB)   Number of documents (Min / Avg / Max)   Size in MB (Min / Avg / Max)
Trec123   3.2         752 / 10782 / 39713                     28 / 32 / 42

Table 2: Query set statistics.
Name      TREC Topic Set   TREC Topic Field   Average Length (Words)
Trec123   51-150           Title              3.1
Trec123-2ldb-60col (representative): The databases in the
trec123-100col-bysource testbed were sorted in alphabetical order.
Two large databases were created by merging 20 small
databases with the round-robin method. Thus, the two large
databases have more relevant documents due to their large sizes,
even though the densities of relevant documents are roughly the
same as the small databases.
Trec123-AP-WSJ-60col (relevant): The 24 Associated Press
collections and the 16 Wall Street Journal collections in the
trec123-100col-bysource testbed were collapsed into two large
databases APall and WSJall. The other 60 collections were left
unchanged. The APall and WSJall databases have higher
densities of documents relevant to TREC queries than the small
databases. Thus, the two large databases have many more
relevant documents than the small databases.
Trec123-FR-DOE-81col (nonrelevant): The 13 Federal
Register collections and the 6 Department of Energy collections
in the trec123-100col-bysource testbed were collapsed into two
large databases FRall and DOEall. The other 80 collections were
left unchanged. The FRall and DOEall databases have lower
densities of documents relevant to TREC queries than the small
databases, even though they are much larger.
100 queries were created from the title fields of TREC topics
51-150. The queries 101-150 were used as training queries and
the queries 51-100 were used as test queries (details in Table 2).
4.2 Search Engines
In the uncooperative distributed information retrieval
environments of large organizations (companies) or
domain-specific hidden Web, different databases may use different types
of search engines. To simulate this multiple-engine-type
environment, three different types of search engines were used
in the experiments: INQUERY [2], a unigram statistical
language model with linear smoothing [12,20] and a TFIDF
retrieval algorithm with ltc weight [12,20]. All these
algorithms were implemented with the Lemur toolkit [12].
These three kinds of search engines were assigned to the
databases among the four testbeds in a round-robin manner.
5. RESULTS: RESOURCE SELECTION OF
DATABASE RECOMMENDATION
All four testbeds described in Section 4 were used in the
experiments to evaluate the resource selection effectiveness of
the database recommendation system.
The resource descriptions were created using query-based
sampling. About 80 queries were sent to each database to
download 300 unique documents. The database size statistics
were estimated by the sample-resample method [21]. Fifty
queries (101-150) were used as training queries to build the
relevant logistic model and to fit the exponential functions of the
centralized document score curves for large ratio databases
(details in Section 3.1). Another 50 queries (51-100) were used
as test data.
Resource selection algorithms of database recommendation
systems are typically compared using the recall metric $R_n$
[1,17,18,21]. Let B denote a baseline ranking, which is often the
RBR (relevance based ranking), and E a ranking provided by
a resource selection algorithm. Let $B_i$ and $E_i$ denote the
number of relevant documents in the $i$th ranked database of B or
E. Then $R_k$ is defined as follows:
$$R_k = \frac{\sum_{i=1}^{k} E_i}{\sum_{i=1}^{k} B_i} \qquad (17)$$
Usually the goal is to search only a few databases, so our figures
only show results for selecting up to 20 databases.
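For reference, the $R_k$ curve of Equation 17 can be computed with a few lines (illustrative only; the relevance counts in the example are made up):

```python
# Sketch of Equation 17: R_k compares the relevant documents accumulated by a
# selection ranking E against the relevance-based ranking B at each cutoff k.
def recall_curve(e_counts, b_counts):
    """e_counts[i], b_counts[i]: #relevant docs in the (i+1)-th ranked database."""
    r = []
    e_sum = b_sum = 0
    for e, b in zip(e_counts, b_counts):
        e_sum += e
        b_sum += b
        r.append(e_sum / b_sum)
    return r

# Made-up counts for an algorithm's ranking (E) and the RBR baseline (B).
print(recall_curve([30, 25, 10, 5], [40, 30, 20, 10]))
```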
The experiments summarized in Figure 3 compared the
effectiveness of the three resource selection algorithms, namely
the CORI, ReDDE and UUM/HR. The UUM/HR algorithm is
described in Section 3.3. It can be seen from Figure 3 that the
ReDDE and UUM/HR algorithms are more effective (on the
representative, relevant and nonrelevant testbeds) or as good as
(on the Trec123-100Col testbed) the CORI resource selection
algorithm. The UUM/HR algorithm is more effective than the
ReDDE algorithm on the representative and relevant testbeds
and is about the same as the ReDDE algorithm on the
Trec123-100Col and the nonrelevant testbeds. This suggests that the
UUM/HR algorithm is more robust than the ReDDE algorithm.
It can be noted that when selecting only a few databases on the
Trec123-100Col or the nonrelevant testbeds, the ReDDE
algorithm has a small advantage over the UUM/HR algorithm.
We attribute this to two causes: i) The ReDDE algorithm was
tuned on the Trec123-100Col testbed; and ii) Although the
difference is small, this may suggest that our logistic model of
estimating probabilities of relevance is not accurate enough.
More training data or a more sophisticated model may help to
solve this minor puzzle.
Figure 3. Resource selection experiments on the four testbeds
(Trec123-100Col, representative, relevant, and nonrelevant); each
plot shows recall against the number of collections selected.
6. RESULTS: DOCUMENT RETRIEVAL
EFFECTIVENESS
For document retrieval, the selected databases are searched and
the returned results are merged into a single final list. In all of
the experiments discussed in this section the results retrieved
from individual databases were combined by the
semi-supervised learning results merging algorithm. This version of
the SSL algorithm [22] is allowed to download a small number
of returned document texts on the fly to create additional
training data in the process of learning the linear models which
map database-specific document scores into estimated
centralized document scores. It has been shown to be very
effective in environments where only short result-lists are
retrieved from each selected database [22]. This is a common
scenario in operational environments and was the case for our
experiments.
Document retrieval effectiveness was measured by Precision at
the top part of the final document list. The experiments in this
section were conducted to study the document retrieval
effectiveness of five selection algorithms, namely the CORI,
ReDDE, UUM/HR, UUM/HP-FL and UUM/HP-VL algorithms.
The last three algorithms were proposed in Section 3. All the
first four algorithms selected 3 or 5 databases, and 50 documents
were retrieved from each selected database. The UUM/HP-VL
algorithm also selected 3 or 5 databases, but it was allowed to
adjust the number of documents to retrieve from each selected
database; the number retrieved was constrained to be from 10 to
100, and a multiple of 10.
The Trec123-100Col and representative testbeds were selected
for document retrieval as they represent two extreme cases of
resource selection effectiveness; in one case the CORI algorithm
is as good as the other algorithms and in the other case it is quite
Table 5. Precision on the representative testbed when 3 databases were selected. (The first baseline is CORI; the second baseline for
UUM/HP methods is UUM/HR.)
Precision at
Doc Rank
CORI ReDDE UUM/HR UUM/HP-FL UUM/HP-VL
5 docs 0.3720 0.4080 (+9.7%) 0.4640 (+24.7%) 0.4600 (+23.7%)(-0.9%) 0.5000 (+34.4%)(+7.8%)
10 docs 0.3400 0.4060 (+19.4%) 0.4600 (+35.3%) 0.4540 (+33.5%)(-1.3%) 0.4640 (+36.5%)(+0.9%)
15 docs 0.3120 0.3880 (+24.4%) 0.4320 (+38.5%) 0.4240 (+35.9%)(-1.9%) 0.4413 (+41.4%)(+2.2)
20 docs 0.3000 0.3750 (+25.0%) 0.4080 (+36.0%) 0.4040 (+34.7%)(-1.0%) 0.4240 (+41.3%)(+4.0%)
30 docs 0.2533 0.3440 (+35.8%) 0.3847 (+51.9%) 0.3747 (+47.9%)(-2.6%) 0.3887 (+53.5%)(+1.0%)
Table 6. Precision on the representative testbed when 5 databases were selected. (The first baseline is CORI; the second baseline for
UUM/HP methods is UUM/HR.)
Precision at
Doc Rank
CORI ReDDE UUM/HR UUM/HP-FL UUM/HP-VL
5 docs 0.3960 0.4080 (+3.0%) 0.4560 (+15.2%) 0.4280 (+8.1%)(-6.1%) 0.4520 (+14.1%)(-0.9%)
10 docs 0.3880 0.4060 (+4.6%) 0.4280 (+10.3%) 0.4460 (+15.0%)(+4.2%) 0.4560 (+17.5%)(+6.5%)
15 docs 0.3533 0.3987 (+12.9%) 0.4227 (+19.6%) 0.4440 (+25.7%)(+5.0%) 0.4453 (+26.0%)(+5.4%)
20 docs 0.3330 0.3960 (+18.9%) 0.4140 (+24.3%) 0.4290 (+28.8%)(+3.6%) 0.4350 (+30.6%)(+5.1%)
30 docs 0.2967 0.3740 (+26.1%) 0.4013 (+35.3%) 0.3987 (+34.4%)(-0.7%) 0.4060 (+36.8%)(+1.2%)
Table 3. Precision on the trec123-100col-bysource testbed when 3 databases were selected. (The first baseline is CORI; the second
baseline for UUM/HP methods is UUM/HR.)
Precision at
Doc Rank
CORI ReDDE UUM/HR UUM/HP-FL UUM/HP-VL
5 docs 0.3640 0.3480 (-4.4%) 0.3960 (+8.8%) 0.4680 (+28.6%)(+18.1%) 0.4640 (+27.5%)(+17.2%)
10 docs 0.3360 0.3200 (-4.8%) 0.3520 (+4.8%) 0.4240 (+26.2%)(+20.5%) 0.4220 (+25.6%)(+19.9%)
15 docs 0.3253 0.3187 (-2.0%) 0.3347 (+2.9%) 0.3973 (+22.2%)(+15.7%) 0.3920 (+20.5%)(+17.1%)
20 docs 0.3140 0.2980 (-5.1%) 0.3270 (+4.1%) 0.3720 (+18.5%)(+13.8%) 0.3700 (+17.8%)(+13.2%)
30 docs 0.2780 0.2660 (-4.3%) 0.2973 (+6.9%) 0.3413 (+22.8%)(+14.8%) 0.3400 (+22.3%)(+14.4%)
Table 4. Precision on the trec123-100col-bysource testbed when 5 databases were selected. (The first baseline is CORI; the second
baseline for UUM/HP methods is UUM/HR.)
Precision at
Doc Rank
CORI ReDDE UUM/HR UUM/HP-FL UUM/HP-VL
5 docs 0.4000 0.3920 (-2.0%) 0.4280 (+7.0%) 0.4680 (+17.0%)(+9.4%) 0.4600 (+15.0%)(+7.5%)
10 docs 0.3800 0.3760 (-1.1%) 0.3800 (+0.0%) 0.4180 (+10.0%)(+10.0%) 0.4320 (+13.7%)(+13.7%)
15 docs 0.3560 0.3560 (+0.0%) 0.3720 (+4.5%) 0.3920 (+10.1%)(+5.4%) 0.4080 (+14.6%)(+9.7%)
20 docs 0.3430 0.3390 (-1.2%) 0.3550 (+3.5%) 0.3710 (+8.2%)(+4.5%) 0.3830 (+11.7%)(+7.9%)
30 docs 0.3240 0.3140 (-3.1%) 0.3313 (+2.3%) 0.3500 (+8.0%)(+5.6%) 0.3487 (+7.6%)(+5.3%)
a lot worse than the other algorithms. Tables 3 and 4 show the
results on the Trec123-100Col testbed, and Tables 5 and 6 show
the results on the representative testbed.
On the Trec123-100Col testbed, the document retrieval
effectiveness of the CORI selection algorithm is roughly the
same as or a little better than that of the ReDDE algorithm, but both of
them are worse than the other three algorithms (Tables 3 and 4).
The UUM/HR algorithm has a small advantage over the CORI
and ReDDE algorithms. One main difference between the
UUM/HR algorithm and the ReDDE algorithm was pointed out
before: The UUM/HR uses training data and linear interpolation
to estimate the centralized document score curves, while the
ReDDE algorithm [21] uses a heuristic method, assumes the
centralized document score curves are step functions and makes
no distinction among the top part of the curves. This difference
makes UUM/HR better than the ReDDE algorithm at
distinguishing documents with high probabilities of relevance
from documents with low probabilities of relevance. Therefore, the UUM/HR
reflects the high-precision retrieval goal better than the ReDDE
algorithm and thus is more effective for document retrieval.
The UUM/HR algorithm does not explicitly optimize the
selection decision with respect to the high-precision goal as the
UUM/HP-FL and UUM/HP-VL algorithms are designed to do.
It can be seen that on this testbed, the UUM/HP-FL and
UUM/HP-VL algorithms are much more effective than all the
other algorithms. This indicates that their power comes from
explicitly optimizing the high-precision goal of document
retrieval in Equations 14 and 16.
On the representative testbed, CORI is much less effective than
other algorithms for distributed document retrieval (Tables 5 and
6). The document retrieval results of the ReDDE algorithm are
better than that of the CORI algorithm but still worse than the
results of the UUM/HR algorithm. On this testbed the three
UUM algorithms are about equally effective. Detailed analysis
shows that the overlap of the selected databases between the
UUM/HR, UUM/HP-FL and UUM/HP-VL algorithms is much
larger than in the experiments on the Trec123-100Col testbed,
since all of them tend to select the two large databases. This
explains why they are about equally effective for document
retrieval.
In real operational environments, databases may return no
document scores and report only ranked lists of results. As the
unified utility maximization model only utilizes retrieval scores
of sampled documents with a centralized retrieval algorithm to
calculate the probabilities of relevance, it makes database
selection decisions without referring to the document scores
from individual databases and can be easily generalized to this
case of ranked lists without document scores. The only adjustment
is that the SSL algorithm merges ranked lists without document
scores by assigning the documents pseudo-document scores
normalized by their ranks (in a ranked list of 50 documents, the
first one has a score of 1, the second has a score of 0.98, etc.),
which has been studied in [22]. The experiment results on
trec123-100Col-bysource testbed with 3 selected databases are
shown in Table 7. The experiment setting was the same as
before except that the document scores were eliminated
intentionally and the selected databases only return ranked lists
of document ids. It can be seen from the results that the
UUM/HP-FL and UUM/HP-VL work well with databases
returning no document scores and are still more effective than
other alternatives. Other experiments with databases that return
no document scores are not reported, but they show similar
results, confirming the effectiveness of the UUM/HP-FL and
UUM/HP-VL algorithms.
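The pseudo-score assignment described above can be written as a one-line mapping (a sketch, not the authors' code; the linear step of 1/n reproduces the 1, 0.98, ... pattern for a 50-document list):

```python
# Sketch: assign pseudo-document scores to a ranked list that has no native
# scores, so the SSL merging algorithm can still be applied
# (rank 1 -> 1.0, rank 2 -> 0.98, ... for a 50-document list).
def pseudo_scores(doc_ids):
    n = len(doc_ids)
    return {doc: 1.0 - rank / n for rank, doc in enumerate(doc_ids)}

ranked = [f"doc{i}" for i in range(1, 51)]   # hypothetical 50-document result list
scores = pseudo_scores(ranked)
print(scores["doc1"], scores["doc2"], scores["doc50"])
```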
The above experiments suggest that it is very important to
optimize the high-precision goal explicitly in document
retrieval. The new algorithms based on this principle achieve results
that are better than, or at least as good as, those of the prior
state-of-the-art algorithms in several environments.
7. CONCLUSION
Distributed information retrieval solves the problem of finding
information that is scattered among many text databases on local
area networks and the Internet. Most previous research uses
effective resource selection algorithms of database
recommendation systems for the distributed document retrieval
application. We argue that the high-recall resource selection
goal of database recommendation and high-precision goal of
document retrieval are related but not identical. This kind of
inconsistency has also been observed in previous work, but the
prior solutions either used heuristic methods or assumed
cooperation by individual databases (e.g., all the databases used
the same kind of search engines), which is frequently not true in
the uncooperative environment.
In this work we propose a unified utility maximization model to
integrate the resource selection of database recommendation and
document retrieval tasks into a single unified framework. In this
framework, the selection decisions are obtained by optimizing
different objective functions. As far as we know, this is the first
work that tries to view and theoretically model the distributed
information retrieval task in an integrated manner.
The new framework continues a recent research trend studying
the use of query-based sampling and a centralized sample
database. A single logistic model was trained on the centralized
Table 7. Precision on the trec123-100col-bysource testbed when 3 databases were selected (The first baseline is CORI; the second
baseline for UUM/HP methods is UUM/HR.) (Search engines do not return document scores)
Precision at
Doc Rank
CORI ReDDE UUM/HR UUM/HP-FL UUM/HP-VL
5 docs 0.3520 0.3240 (-8.0%) 0.3680 (+4.6%) 0.4520 (+28.4%)(+22.8%) 0.4520 (+28.4%)(+22.8)
10 docs 0.3320 0.3140 (-5.4%) 0.3340 (+0.6%) 0.4120 (+24.1%)(+23.4%) 0.4020 (+21.1%)(+20.4%)
15 docs 0.3227 0.2987 (-7.4%) 0.3280 (+1.6%) 0.3920 (+21.5%)(+19.5%) 0.3733 (+15.7%)(+13.8%)
20 docs 0.3030 0.2860 (-5.6%) 0.3130 (+3.3%) 0.3670 (+21.2%)(+17.3%) 0.3590 (+18.5%)(+14.7%)
30 docs 0.2727 0.2640 (-3.2%) 0.2900 (+6.3%) 0.3273 (+20.0%)(+12.9%) 0.3273 (+20.0%)(+12.9%)
sample database to estimate the probabilities of relevance of
documents by their centralized retrieval scores, while the
centralized sample database serves as a bridge to connect the
individual databases with the centralized logistic model.
Therefore, the probabilities of relevance for all the documents
across the databases can be estimated with a very small amount of
human relevance judgment, which is much more efficient than
previous methods that build a separate model for each database.
This framework is not only more theoretically solid but also
very effective. One algorithm for resource selection (UUM/HR)
and two algorithms for document retrieval (UUM/HP-FL and
UUM/HP-VL) are derived from this framework. Empirical
studies have been conducted on testbeds to simulate the
distributed search solutions of large organizations (companies)
or domain-specific hidden Web. Furthermore, the UUM/HP-FL
and UUM/HP-VL resource selection algorithms are extended
with a variant of SSL results merging algorithm to address the
distributed document retrieval task when selected databases do
not return document scores. Experiments have shown that these
algorithms achieve results that are at least as good as the prior
state-of-the-art, and sometimes considerably better. Detailed
analysis indicates that the advantage of these algorithms comes
from explicitly optimizing the goals of the specific tasks.
The unified utility maximization framework is open for different
extensions. When cost is associated with searching the online
databases, the utility framework can be adjusted to automatically
estimate the best number of databases to search so that a large
amount of relevant documents can be retrieved with relatively
small costs. Another extension of the framework is to consider
the retrieval effectiveness of the online databases, which is an
important issue in operational environments. All of these are
directions for future research.
ACKNOWLEDGEMENT
This research was supported by NSF grants EIA-9983253 and
IIS-0118767. Any opinions, findings, conclusions, or
recommendations expressed in this paper are the authors", and
do not necessarily reflect those of the sponsor.
REFERENCES
[1] J. Callan. (2000). Distributed information retrieval. In W.B.
Croft, editor, Advances in Information Retrieval. Kluwer
Academic Publishers. (pp. 127-150).
[2] J. Callan, W.B. Croft, and J. Broglio. (1995). TREC and
TIPSTER experiments with INQUERY. Information
Processing and Management, 31(3). (pp. 327-343).
[3] J. G. Conrad, X. S. Guo, P. Jackson and M. Meziou.
(2002). Database selection using actual physical and
acquired logical collection resources in a massive
domain-specific operational environment. In Proceedings
of the 28th International Conference on Very Large
Databases (VLDB).
[4] N. Craswell. (2000). Methods for distributed information
retrieval. Ph. D. thesis, The Australian Nation University.
[5] N. Craswell, D. Hawking, and P. Thistlewaite. (1999).
Merging results from isolated search engines. In
Proceedings of 10th Australasian Database Conference.
[6] D. D'Souza, J. Thom, and J. Zobel. (2000). A comparison
of techniques for selecting text collections. In Proceedings
of the 11th Australasian Database Conference.
[7] N. Fuhr. (1999). A Decision-Theoretic approach to
database selection in networked IR. ACM Transactions on
Information Systems, 17(3). (pp. 229-249).
[8] L. Gravano, C. Chang, H. Garcia-Molina, and A. Paepcke.
(1997). STARTS: Stanford proposal for internet
metasearching. In Proceedings of the 20th ACM-SIGMOD
International Conference on Management of Data.
[9] L. Gravano, P. Ipeirotis and M. Sahami. (2003). QProber:
A System for Automatic Classification of Hidden-Web
Databases. ACM Transactions on Information Systems,
21(1).
[10] P. Ipeirotis and L. Gravano. (2002). Distributed search over
the hidden web: Hierarchical database sampling and
selection. In Proceedings of the 28th International
Conference on Very Large Databases (VLDB).
[11] InvisibleWeb.com. http://www.invisibleweb.com
[12] The lemur toolkit. http://www.cs.cmu.edu/~lemur
[13] J. Lu and J. Callan. (2003). Content-based information
retrieval in peer-to-peer networks. In Proceedings of the
12th International Conference on Information and
Knowledge Management.
[14] W. Meng, C.T. Yu and K.L. Liu. (2002) Building efficient
and effective metasearch engines. ACM Comput. Surv.
34(1).
[15] H. Nottelmann and N. Fuhr. (2003). Evaluating different
method of estimating retrieval quality for resource
selection. In Proceedings of the 25th Annual International
ACM SIGIR Conference on Research and Development in
Information Retrieval.
[16] H. Nottelmann and N. Fuhr. (2003). The MIND
architecture for heterogeneous multimedia federated digital
libraries. ACM SIGIR 2003 Workshop on Distributed
Information Retrieval.
[17] A.L. Powell, J.C. French, J. Callan, M. Connell, and C.L.
Viles. (2000). The impact of database selection on
distributed searching. In Proceedings of the 23rd Annual
International ACM SIGIR Conference on Research and
Development in Information Retrieval.
[18] A.L. Powell and J.C. French. (2003). Comparing the
performance of database selection algorithms. ACM
Transactions on Information Systems, 21(4). (pp. 412-456).
[19] C. Sherman (2001). Search for the invisible web. Guardian
Unlimited.
[20] L. Si and J. Callan. (2002). Using sampled data and
regression to merge search engine results. In Proceedings
of the 25th Annual International ACM SIGIR Conference
on Research and Development in Information Retrieval.
[21] L. Si and J. Callan. (2003). Relevant document distribution
estimation method for resource selection. In Proceedings of
the 26th Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval.
[22] L. Si and J. Callan. (2003). A Semi-Supervised learning
method to merge search engine results. ACM Transactions
on Information Systems, 21(4). (pp. 457-491).
| distributed document retrieval;resource selection;retrieval and result merging;unified utility maximization model;distributed text information retrieval resource selection;semi-supervised learning;resource representation;logistic transformation model;distribute information retrieval;hidden web content;resource selection of distributed text information retrieval;federated search;database recommendation |
train_H-77 | Automatic Extraction of Titles from General Documents using Machine Learning | In this paper, we propose a machine learning approach to title extraction from general documents. By general documents, we mean documents that can belong to any one of a number of specific genres, including presentations, book chapters, technical papers, brochures, reports, and letters. Previously, methods have been proposed mainly for title extraction from research papers. It has not been clear whether it could be possible to conduct automatic title extraction from general documents. As a case study, we consider extraction from Office including Word and PowerPoint. In our approach, we annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data, train machine learning models, and perform title extraction using the trained models. Our method is unique in that we mainly utilize formatting information such as font size as features in the models. It turns out that the use of formatting information can lead to quite accurate extraction from general documents. Precision and recall for title extraction from Word is 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint is 0.875 and 0.895 respectively in an experiment on intranet data. Other important new findings in this work include that we can train models in one domain and apply them to another domain, and more surprisingly we can even train models in one language and apply them to another language. Moreover, we can significantly improve search ranking results in document retrieval by using the extracted titles. | 1. INTRODUCTION
Metadata of documents is useful for many kinds of document
processing such as search, browsing, and filtering. Ideally,
metadata is defined by the authors of documents and is then used
by various systems. However, people seldom define document
metadata by themselves, even when they have convenient
metadata definition tools [26]. Thus, how to automatically extract
metadata from the bodies of documents turns out to be an
important research issue.
Methods for performing the task have been proposed. However,
the focus was mainly on extraction from research papers. For
instance, Han et al. [10] proposed a machine learning based
method to conduct extraction from research papers. They
formalized the problem as that of classification and employed
Support Vector Machines as the classifier. They mainly used
linguistic features in the model.
In this paper, we consider metadata extraction from general
documents. By general documents, we mean documents that may
belong to any one of a number of specific genres. General
documents are more widely available in digital libraries, intranets
and the internet, and thus investigation on extraction from them is
sorely needed. Research papers usually have well-formed styles
and noticeable characteristics. In contrast, the styles of general
documents can vary greatly. It has not been clarified whether a
machine learning based approach can work well for this task.
There are many types of metadata: title, author, date of creation,
etc. As a case study, we consider title extraction in this paper.
General documents can be in many different file formats:
Microsoft Office, PDF (PS), etc. As a case study, we consider
extraction from Office including Word and PowerPoint.
We take a machine learning approach. We annotate titles in
sample documents (for Word and PowerPoint respectively) and
take them as training data to train several types of models, and
perform title extraction using any one type of the trained models.
In the models, we mainly utilize formatting information such as
font size as features. We employ the following models: Maximum
Entropy Model, Perceptron with Uneven Margins, Maximum
Entropy Markov Model, and Voted Perceptron.
In this paper, we also investigate the following three problems,
which did not seem to have been examined previously.
(1) Comparison between models: among the models above, which
model performs best for title extraction;
(2) Generality of model: whether it is possible to train a model on
one domain and apply it to another domain, and whether it is
possible to train a model in one language and apply it to another
language;
(3) Usefulness of extracted titles: whether extracted titles can
improve document processing such as search.
Experimental results indicate that our approach works well for
title extraction from general documents. Our method can
significantly outperform the baselines: one that always uses the
first lines as titles and the other that always uses the lines in the
largest font sizes as titles. Precision and recall for title extraction
from Word are 0.810 and 0.837 respectively, and precision and
recall for title extraction from PowerPoint are 0.875 and 0.895
respectively. It turns out that the use of format features is the key
to successful title extraction.
(1) We have observed that Perceptron based models perform
better in terms of extraction accuracies. (2) We have empirically
verified that the models trained with our approach are generic in
the sense that they can be trained on one domain and applied to
another, and they can be trained in one language and applied to
another. (3) We have found that using the extracted titles we can
significantly improve precision of document retrieval (by 10%).
We conclude that we can indeed conduct reliable title extraction
from general documents and use the extracted results to improve
real applications.
The rest of the paper is organized as follows. In section 2, we
introduce related work, and in section 3, we explain the
motivation and problem setting of our work. In section 4, we
describe our method of title extraction, and in section 5, we
describe our method of document retrieval using extracted titles.
Section 6 gives our experimental results. We make concluding
remarks in section 7.
2. RELATED WORK
2.1 Document Metadata Extraction
Methods have been proposed for performing automatic metadata
extraction from documents; however, the main focus was on
extraction from research papers.
The proposed methods fall into two categories: the rule based
approach and the machine learning based approach.
Giuffrida et al. [9], for instance, developed a rule-based system for
automatically extracting metadata from research papers in
Postscript. They used rules like titles are usually located on the
upper portions of the first pages and they are usually in the largest
font sizes. Liddy et al. [14] and Yilmazel el al. [23] performed
metadata extraction from educational materials using rule-based
natural language processing technologies. Mao et al. [16] also
conducted automatic metadata extraction from research papers
using rules on formatting information.
The rule-based approach can achieve high performance. However,
it also has disadvantages. It is less adaptive and robust when
compared with the machine learning approach.
Han et al. [10], for instance, conducted metadata extraction with
the machine learning approach. They viewed the problem as that
of classifying the lines in a document into the categories of
metadata and proposed using Support Vector Machines as the
classifier. They mainly used linguistic information as features.
They reported high extraction accuracy from research papers in
terms of precision and recall.
2.2 Information Extraction
Metadata extraction can be viewed as an application of
information extraction, in which given a sequence of instances, we
identify a subsequence that represents information in which we
are interested. Hidden Markov Model [6], Maximum Entropy
Model [1, 4], Maximum Entropy Markov Model [17], Support
Vector Machines [3], Conditional Random Field [12], and Voted
Perceptron [2] are widely used information extraction models.
Information extraction has been applied, for instance, to
part-ofspeech tagging [20], named entity recognition [25] and table
extraction [19].
2.3 Search Using Title Information
Title information is useful for document retrieval.
In the system Citeseer, for instance, Giles et al. managed to
extract titles from research papers and make use of the extracted
titles in metadata search of papers [8].
In web search, the title fields (i.e., file properties) and anchor texts
of web pages (HTML documents) can be viewed as ‘titles" of the
pages [5]. Many search engines seem to utilize them for web page
retrieval [7, 11, 18, 22]. Zhang et al., found that web pages with
well-defined metadata are more easily retrieved than those without
well-defined metadata [24].
To the best of our knowledge, no research has been conducted on
using extracted titles from general documents (e.g., Office
documents) for search of the documents.
3. MOTIVATION AND PROBLEM
SETTING
We consider the issue of automatically extracting titles from
general documents.
By general documents, we mean documents that belong to one of
any number of specific genres. The documents can be
presentations, books, book chapters, technical papers, brochures,
reports, memos, specifications, letters, announcements, or resumes.
General documents are more widely available in digital libraries,
intranets, and internet, and thus investigation on title extraction
from them is sorely needed.
Figure 1 shows an estimate on distributions of file formats on
intranet and internet [15]. Office and PDF are the main file
formats on the intranet. Even on the internet, the documents in the
formats are still not negligible, given its extremely large size. In
this paper, without loss of generality, we take Office documents as
an example.
Figure 1. Distributions of file formats in internet and intranet.
For Office documents, users can define titles as file properties
using a feature provided by Office. We found in an experiment,
however, that users seldom use the feature and thus titles in file
properties are usually very inaccurate. That is to say, titles in file
properties are usually inconsistent with the 'true' titles in the file
bodies that are created by the authors and are visible to readers.
We collected 6,000 Word and 6,000 PowerPoint documents from
an intranet and the internet and examined how many titles in the
file properties are correct. We found that surprisingly the accuracy
was only 0.265 (cf., Section 6.3 for details). A number of reasons
can be considered. For example, if one creates a new file by
copying an old file, then the file property of the new file will also
be copied from the old file.
In another experiment, we found that Google uses the titles in file
properties of Office documents in search and browsing, but the
titles are not very accurate. We created 50 queries to search Word
and PowerPoint documents and examined the top 15 results of
each query returned by Google. We found that nearly all the titles
presented in the search results were from the file properties of the
documents. However, only 0.272 of them were correct.
Actually, 'true' titles usually exist at the beginnings of the bodies
of documents. If we can accurately extract the titles from the
bodies of documents, then we can exploit reliable title information
in document processing. This is exactly the problem we address in
this paper.
More specifically, given a Word document, we are to extract the
title from the top region of the first page. Given a PowerPoint
document, we are to extract the title from the first slide. A title
sometimes consists of a main title and one or two subtitles. We
only consider extraction of the main title.
As baselines for title extraction, we use that of always using the
first lines as titles and that of always using the lines with largest
font sizes as titles.
Figure 2. Title extraction from Word document.
Figure 3. Title extraction from PowerPoint document.
Next, we define a 'specification' for human judgments in title data
annotation. The annotated data will be used in training and testing
of the title extraction methods.
Summary of the specification: The title of a document should be
identified on the basis of common sense, if there is no difficulty in
the identification. However, there are many cases in which the
identification is not easy. There are some rules defined in the
specification that guide identification for such cases. The rules
include: a title is usually in consecutive lines in the same format;
a document can have no title; titles in images are not considered;
a title should not contain words like 'draft', 'whitepaper', etc.;
if it is difficult to determine which is the title, select the one
in the largest font size; and if it is still difficult to determine
which is the title, select the first candidate. (The
specification covers all the cases we have encountered in data
annotation.)
Figures 2 and 3 show examples of Office documents from which
we conduct title extraction. In Figure 2, 'Differences in Win32
API Implementations among Windows Operating Systems' is the
title of the Word document. 'Microsoft Windows' on the top of
this page is a picture and thus is ignored. In Figure 3, 'Building
Competitive Advantages through an Agile Infrastructure' is the
title of the PowerPoint document.
We have developed a tool for annotation of titles by human
annotators. Figure 4 shows a snapshot of the tool.
Figure 4. Title annotation tool.
4. TITLE EXTRACTION METHOD
4.1 Outline
Title extraction based on machine learning consists of training and
extraction. The same pre-processing step occurs before training
and extraction.
During pre-processing, from the top region of the first page of a
Word document or the first slide of a PowerPoint document a
number of units for processing are extracted. If a line (lines are
separated by 'return' symbols) only has a single format, then the
line will become a unit. If a line has several parts and each of
them has its own format, then each part will become a unit. Each
unit will be treated as an instance in learning. A unit contains not
only content information (linguistic information) but also
formatting information. The input to pre-processing is a document
and the output of pre-processing is a sequence of units (instances).
Figure 5 shows the units obtained from the document in Figure 2.
Figure 5. Example of units.
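To make this pre-processing step concrete, the following is a minimal sketch (not the system used in the paper) of how lines could be split into units; the Unit class, its field names, and the input representation as (text, font size, bold, alignment) runs are assumptions made only for illustration.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Unit:
    text: str
    font_size: float
    bold: bool
    alignment: str  # "left", "center", "right", or "unknown"

def line_to_units(runs: List[Tuple[str, float, bool, str]]) -> List[Unit]:
    """A line with a single format becomes one unit; a line whose parts
    have different formats yields one unit per differently formatted part."""
    units: List[Unit] = []
    for text, size, bold, align in runs:
        if units and (size, bold, align) == (
            units[-1].font_size, units[-1].bold, units[-1].alignment
        ):
            # Same format as the previous run on this line: extend that unit.
            units[-1].text += text
        else:
            units.append(Unit(text, size, bold, align))
    return units

def top_region_to_units(lines: List[List[Tuple[str, float, bool, str]]]) -> List[Unit]:
    """Concatenate the units of all lines in the top region (or first slide)."""
    units: List[Unit] = []
    for runs in lines:
        units.extend(line_to_units(runs))
    return units
```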
In learning, the input is sequences of units where each sequence
corresponds to a document. We take labeled units (labeled as
title_begin, title_end, or other) in the sequences as training data
and construct models for identifying whether a unit is title_begin
title_end, or other. We employ four types of models: Perceptron,
Maximum Entropy (ME), Perceptron Markov Model (PMM), and
Maximum Entropy Markov Model (MEMM).
In extraction, the input is a sequence of units from one document.
We employ one type of model to identify whether a unit is
title_begin, title_end, or other. We then extract units from the unit
labeled with 'title_begin' to the unit labeled with 'title_end'. The
result is the extracted title of the document.
The unique characteristic of our approach is that we mainly utilize
formatting information for title extraction. Our assumption is that
although general documents vary in styles, their formats have
certain patterns and we can learn and utilize the patterns for title
extraction. This is in contrast to the work by Han et al., in which
only linguistic features are used for extraction from research
papers.
4.2 Models
The four models actually can be considered in the same metadata
extraction framework. That is why we apply them together to our
current problem.
Each input is a sequence of instances $x_1 x_2 \cdots x_k$ together with a
sequence of labels $y_1 y_2 \cdots y_k$. $x_i$ and $y_i$ represent an instance
and its label, respectively ($i = 1, 2, \ldots, k$). Recall that an instance
here represents a unit. A label represents title_begin, title_end, or
other. Here, $k$ is the number of units in a document.
In learning, we train a model which can be generally denoted as a
conditional probability distribution $P(Y_1 \cdots Y_k \mid X_1 \cdots X_k)$, where
$X_i$ and $Y_i$ denote random variables taking instance $x_i$ and label
$y_i$ as values, respectively ($i = 1, 2, \ldots, k$).
Figure 6. Metadata extraction model: labeled training sequences $x_{n1} x_{n2} \cdots x_{nk_n} \to y_{n1} y_{n2} \cdots y_{nk_n}$ are given to the learning tool, which outputs the conditional distribution $P(Y_1 \cdots Y_k \mid X_1 \cdots X_k)$; the extraction tool assigns to a new sequence $x_{m1} x_{m2} \cdots x_{mk_m}$ the labels $\arg\max P(y_{m1} \cdots y_{mk_m} \mid x_{m1} \cdots x_{mk_m})$.
We can make assumptions about the general model in order to
make it simple enough for training.
For example, we can assume that $Y_1, \ldots, Y_k$ are independent of
each other given $X_1, \ldots, X_k$. Thus, we have
$$P(Y_1 \cdots Y_k \mid X_1 \cdots X_k) = P(Y_1 \mid X_1) \cdots P(Y_k \mid X_k)$$
=
In this way, we decompose the model into a number of classifiers.
We train the classifiers locally using the labeled data. As the
classifier, we employ the Perceptron or Maximum Entropy model.
We can also assume that the first order Markov property holds for
kYY ,,1 L given kXX ,,1 L . Thus, we have
)|()|(
)|(
111
11
kkk
kk
XYYPXYP
XXYYP
−= L
LL
Again, we obtain a number of classifiers. However, the classifiers
are conditioned on the previous label. When we employ the
Percepton or Maximum Entropy model as a classifier, the models
become a Percepton Markov Model or Maximum Entropy Markov
Model, respectively. That is to say, the two models are more
precise.
In extraction, given a new sequence of instances, we resort to one
of the constructed models to assign a sequence of labels to the
sequence of instances, i.e., perform extraction.
For Perceptron and ME, we assign labels locally and combine the
results globally later using heuristics. Specifically, we first
identify the most likely title_begin. Then we find the most likely
title_end within three units after the title_begin. Finally, we
extract as a title the units between the title_begin and the title_end.
For PMM and MEMM, we employ the Viterbi algorithm to find
the globally optimal label sequence.
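As a sketch of the heuristic combination used with the local classifiers (Perceptron and ME), the following reuses the Unit objects from the earlier sketch; the per-unit score arrays are assumed to come from the trained classifiers, and the exact tie-breaking is an assumption.

```python
def extract_title(units, begin_scores, end_scores):
    """begin_scores[i] / end_scores[i]: classifier scores that unit i is
    title_begin / title_end. Pick the most likely title_begin, then the most
    likely title_end within the three units that follow it, and return the
    text of the units in between (inclusive)."""
    if not units:
        return ""
    b = max(range(len(units)), key=lambda i: begin_scores[i])
    window = range(b, min(b + 4, len(units)))  # title_end within three units after title_begin
    e = max(window, key=lambda i: end_scores[i])
    return " ".join(u.text for u in units[b:e + 1]).strip()
```

For PMM and MEMM, the same interface would instead be filled by the Viterbi-decoded label sequence.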
In this paper, for Perceptron, we actually employ an improved
variant of it, called Perceptron with Uneven Margin [13]. This
version of Perceptron can work well especially when the number
of positive instances and the number of negative instances differ
greatly, which is exactly the case in our problem.
We also employ an improved version of Perceptron Markov
Model in which the Perceptron model is the so-called Voted
Perceptron [2]. In addition, in training, the parameters of the
model are updated globally rather than locally.
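The following is a sketch of a Perceptron with Uneven Margins update for a single binary decision (e.g., title_begin vs. not), following the general form of the algorithm in [13]; the margin parameters, learning rate, and number of epochs are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def train_uneven_margin_perceptron(X, y, tau_pos=1.0, tau_neg=0.1,
                                   epochs=10, lr=1.0):
    """X: (n, d) feature matrix; y: labels in {+1, -1}. A larger margin is
    required on the rare positive class (title units) than on the abundant
    negative class, which is the point of the uneven-margin variant."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            tau = tau_pos if yi > 0 else tau_neg
            if yi * (w @ xi + b) <= tau:  # margin violation: update
                w += lr * yi * xi
                b += lr * yi
    return w, b
```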
4.3 Features
There are two types of features: format features and linguistic
features. We mainly use the former. The features are used for both
the title-begin and the title-end classifiers.
4.3.1 Format Features
Font Size: There are four binary features that represent the
normalized font size of the unit (recall that a unit has only one
type of font).
If the font size of the unit is the largest in the document, then the
first feature will be 1, otherwise 0. If the font size is the smallest
in the document, then the fourth feature will be 1, otherwise 0. If
the font size is above the average font size and not the largest in
the document, then the second feature will be 1, otherwise 0. If the
font size is below the average font size and not the smallest, the
third feature will be 1, otherwise 0.
It is necessary to conduct normalization on font sizes. For
example, in one document the largest font size might be '12pt',
while in another the smallest one might be '18pt'. (A sketch of how
the format features can be computed is given after this list of features.)
Boldface: This binary feature represents whether or not the
current unit is in boldface.
Alignment: There are four binary features that respectively
represent the location of the current unit: 'left', 'center', 'right',
and 'unknown alignment'.
The following format features with respect to 'context' play an
important role in title extraction.
Empty Neighboring Unit: There are two binary features that
represent, respectively, whether or not the previous unit and the
current unit are blank lines.
Font Size Change: There are two binary features that represent,
respectively, whether or not the font size of the previous unit and
the font size of the next unit differ from that of the current unit.
Alignment Change: There are two binary features that represent,
respectively, whether or not the alignment of the previous unit and
the alignment of the next unit differ from that of the current one.
Same Paragraph: There are two binary features that represent,
respectively, whether or not the previous unit and the next unit are
in the same paragraph as the current unit.
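As announced above, here is a sketch of how these format features could be computed for the i-th unit; it assumes the Unit fields from the earlier pre-processing sketch, the feature names are illustrative, and the 'Same Paragraph' features are omitted because they would need a paragraph identifier.

```python
def format_features(units, i, max_size, min_size, avg_size):
    """Binary format features for unit i, given the document-level
    maximum, minimum, and average font sizes (used for normalization)."""
    u = units[i]
    prev = units[i - 1] if i > 0 else None
    nxt = units[i + 1] if i + 1 < len(units) else None
    f = {
        # Normalized font size: four mutually exclusive indicators.
        "font_largest": int(u.font_size == max_size),
        "font_above_avg": int(avg_size < u.font_size < max_size),
        "font_below_avg": int(min_size < u.font_size < avg_size),
        "font_smallest": int(u.font_size == min_size),
        "bold": int(u.bold),
        # Empty neighboring unit.
        "prev_blank": int(prev is not None and not prev.text.strip()),
        "curr_blank": int(not u.text.strip()),
        # Font size and alignment change with respect to context.
        "font_change_prev": int(prev is not None and prev.font_size != u.font_size),
        "font_change_next": int(nxt is not None and nxt.font_size != u.font_size),
        "align_change_prev": int(prev is not None and prev.alignment != u.alignment),
        "align_change_next": int(nxt is not None and nxt.alignment != u.alignment),
    }
    for a in ("left", "center", "right", "unknown"):
        f["align_" + a] = int(u.alignment == a)
    return f
```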
4.3.2 Linguistic Features
The linguistic features are based on key words.
Positive Word: This binary feature represents whether or not the
current unit begins with one of the positive words. The positive
words include 'title:', 'subject:', and 'subject line:'. For example, in
some documents the lines of titles and authors have the same
formats. However, if lines begin with one of the positive words,
then it is likely that they are title lines.
Negative Word: This binary feature represents whether or not the
current unit begins with one of the negative words. The negative
words include 'To', 'By', 'created by', 'updated by', etc.
There are more negative words than positive words. The above
linguistic features are language dependent.
Word Count: A title should not be too long. We heuristically
create four intervals: [1, 2], [3, 6], [7, 9] and [9, ∞) and define one
feature for each interval. If the number of words in a title falls into
an interval, then the corresponding feature will be 1; otherwise 0.
Ending Character: This feature represents whether the unit ends
with ':', '-', or other special characters. A title usually does not
end with such a character.
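A sketch of these linguistic features follows; the word lists below contain only the examples mentioned in the text (the real lists, especially the negative one, are longer), and the bin boundaries follow the intervals given above.

```python
POSITIVE_WORDS = ("title:", "subject:", "subject line:")
NEGATIVE_WORDS = ("to", "by", "created by", "updated by")
WORD_COUNT_BINS = ((1, 2), (3, 6), (7, 9), (9, None))  # None = unbounded

def linguistic_features(unit_text):
    text = unit_text.strip()
    lower = text.lower()
    n_words = len(text.split())
    f = {
        "positive_word": int(lower.startswith(POSITIVE_WORDS)),
        "negative_word": int(lower.startswith(NEGATIVE_WORDS)),
        # A title usually does not end with ':' or '-'.
        "special_ending": int(text.endswith((":", "-"))),
    }
    for lo, hi in WORD_COUNT_BINS:
        name = "words_{}_{}".format(lo, hi if hi is not None else "inf")
        f[name] = int(n_words >= lo and (hi is None or n_words <= hi))
    return f
```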
5. DOCUMENT RETRIEVAL METHOD
We describe our method of document retrieval using extracted
titles.
Typically, in information retrieval a document is split into a
number of fields including body, title, and anchor text. A ranking
function in search can use different weights for different fields of
149
the document. Also, titles are typically assigned high weights,
indicating that they are important for document retrieval. As
explained previously, our experiment has shown that a significant
number of documents actually have incorrect titles in the file
properties, and thus in addition of using them we use the extracted
titles as one more field of the document. By doing this, we attempt
to improve the overall precision.
In this paper, we employ a modification of BM25 that allows field
weighting [21]. As fields, we make use of body, title, extracted
title and anchor. First, for each term in the query we count the
term frequency in each field of the document; each field
frequency is then weighted according to the corresponding weight
parameter:
$$wtf_t = \sum_f w_f \, tf_{t,f}$$
Similarly, we compute the document length as a weighted sum of the
lengths of each field; the average document length in the corpus
becomes the average of all weighted document lengths:
$$wdl = \sum_f w_f \, dl_f$$
In our experiments we used $k_1 = 1.8$ and $b = 0.75$. The weight for content
was 1.0, title was 10.0, anchor was 10.0, and extracted title was 5.0.
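A sketch of this field-weighted scoring, using the weights and parameters quoted above, might look as follows; the input representation (per-field term frequencies and field lengths) is an assumption for illustration, not the paper's implementation.

```python
import math

FIELD_WEIGHTS = {"body": 1.0, "title": 10.0, "anchor": 10.0, "extracted_title": 5.0}
K1, B = 1.8, 0.75

def bm25f_score(query_terms, field_tf, field_len, avg_wdl, N, df):
    """field_tf[f][t]: frequency of term t in field f of the document;
    field_len[f]: length of field f; df[t]: document frequency of t;
    N: corpus size; avg_wdl: average weighted document length."""
    wdl = sum(w * field_len.get(f, 0) for f, w in FIELD_WEIGHTS.items())
    score = 0.0
    for t in query_terms:
        wtf = sum(w * field_tf.get(f, {}).get(t, 0) for f, w in FIELD_WEIGHTS.items())
        if wtf == 0 or t not in df:
            continue
        denom = K1 * ((1 - B) + B * wdl / avg_wdl) + wtf
        score += wtf * (K1 + 1) / denom * math.log(N / df[t])
    return score
```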
6. EXPERIMENTAL RESULTS
6.1 Data Sets and Evaluation Measures
We used two data sets in our experiments.
First, we downloaded and randomly selected 5,000 Word
documents and 5,000 PowerPoint documents from an intranet of
Microsoft. We call it MS hereafter.
Second, we downloaded and randomly selected 500 Word and 500
PowerPoint documents from the DotGov and DotCom domains on
the internet, respectively.
Figure 7 shows the distributions of the genres of the documents.
We see that the documents are indeed 'general documents' as we
define them.
Figure 7. Distributions of document genres.
Third, a data set in Chinese was also downloaded from the internet.
It includes 500 Word documents and 500 PowerPoint documents
in Chinese.
We manually labeled the titles of all the documents, on the basis
of our specification.
Not all the documents in the two data sets have titles. Table 1
shows the percentages of the documents having titles. We see that
DotCom and DotGov have more PowerPoint documents with titles
than MS. This might be because PowerPoint documents published
on the internet are more formal than those on the intranet.
Table 1. The portion of documents with titles
Domain
Type
MS DotCom DotGov
Word 75.7% 77.8% 75.6%
PowerPoint 82.1% 93.4% 96.4%
In our experiments, we conducted evaluations on title extraction in
terms of precision, recall, and F-measure. The evaluation
measures are defined as follows:
Precision: P = A / ( A + B )
Recall: R = A / ( A + C )
F-measure: F1 = 2PR / ( P + R )
Here, A, B, C, and D are numbers of documents as those defined
in Table 2.
Table 2. Contingence table with regard to title extraction
Is title Is not title
Extracted A B
Not extracted C D
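In code, with the counts A, B, and C of Table 2, the measures above are simply:

```python
def evaluation_measures(A, B, C):
    precision = A / (A + B)  # extracted titles that are correct
    recall = A / (A + C)     # true titles that were extracted
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```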
6.2 Baselines
We test the accuracies of the two baselines described in section
4.2. They are denoted as 'largest font size' and 'first line',
respectively.
6.3 Accuracy of Titles in File Properties
We investigate how many titles in the file properties of the
documents are reliable. We view the titles annotated by humans as
true titles and test how many titles in the file properties can
approximately match with the true titles. We use Edit Distance to
conduct the approximate match. (Approximate match is only used
in this evaluation). This is because sometimes human annotated
titles can be slightly different from the titles in file properties on
the surface, e.g., contain extra spaces).
Given string A and string B:
if ( (D == 0) or ( D / ( La + Lb ) < θ ) ) then string A = string B
D: Edit Distance between string A and string B
La: length of string A
Lb: length of string B
θ: 0.1
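A sketch of this approximate-match test; the edit distance is assumed to be the standard Levenshtein distance, which is the usual reading of "Edit Distance" here.

```python
def edit_distance(a, b):
    """Standard dynamic-programming Levenshtein distance."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def approx_match(a, b, theta=0.1):
    dist = edit_distance(a, b)
    return dist == 0 or dist / (len(a) + len(b)) < theta
```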
(The complete form of the field-weighted BM25 ranking function described in Section 5 is
$$BM25F = \sum_t \frac{wtf_t \,(k_1 + 1)}{k_1\left((1 - b) + b\,\frac{wdl}{avwdl}\right) + wtf_t} \times \log\frac{N}{n_t}$$
where $N$ is the number of documents in the corpus and $n_t$ is the number of documents containing term $t$.)
Table 3. Accuracies of titles in file properties
File Type Domain Precision Recall F1
Word MS 0.299 0.311 0.305
Word DotCom 0.210 0.214 0.212
Word DotGov 0.182 0.177 0.180
PowerPoint MS 0.229 0.245 0.237
PowerPoint DotCom 0.185 0.186 0.186
PowerPoint DotGov 0.180 0.182 0.181
6.4 Comparison with Baselines
We conducted title extraction from the first data set (Word and
PowerPoint in MS). As the model, we used Perceptron.
We conduct 4-fold cross validation. Thus, all the results reported
here are those averaged over 4 trials. Tables 4 and 5 show the
results. We see that Perceptron significantly outperforms the
baselines. In the evaluation, we use exact matching between the
true titles annotated by humans and the extracted titles.
Table 4. Accuracies of title extraction with Word
Precision Recall F1
Model Perceptron 0.810 0.837 0.823
Baselines Largest font size 0.700 0.758 0.727
Baselines First line 0.707 0.767 0.736
Table 5. Accuracies of title extraction with PowerPoint
Precision Recall F1
Model Perceptron 0.875 0.895 0.885
Baselines Largest font size 0.844 0.887 0.865
Baselines First line 0.639 0.671 0.655
We see that the machine learning approach can achieve good
performance in title extraction. For Word documents both
precision and recall of the approach are 8 percent higher than
those of the baselines. For PowerPoint both precision and recall of
the approach are 2 percent higher than those of the baselines.
We conduct significance tests. The results are shown in Table 6.
Here, 'Largest' denotes the baseline of using the largest font size,
and 'First' denotes the baseline of using the first line. The results
indicate that the improvements of machine learning over baselines
are statistically significant (in the sense p-value < 0.05)
Table 6. Sign test results
Documents Type Sign test between p-value
Perceptron vs. Largest 3.59e-26
Word
Perceptron vs. First 7.12e-10
Perceptron vs. Largest 0.010
PowerPoint
Perceptron vs. First 5.13e-40
We see, from the results, that the two baselines can work well for
title extraction, suggesting that font size and position information
are most useful features for title extraction. However, it is also
obvious that using only these two features is not enough. There
are cases in which all the lines have the same font size (i.e., the
largest font size), or cases in which the lines with the largest font
size only contain general descriptions like 'Confidential', 'White
paper', etc. For those cases, the 'largest font size' method cannot
work well. For similar reasons, the 'first line' method alone
cannot work well, either. With the combination of different
features (evidence in title judgment), Perceptron can outperform
Largest and First.
We investigate the performance of solely using linguistic features.
We found that it does not work well. It seems that the format
features play important roles and the linguistic features are
supplements.
Figure 8. An example Word document.
Figure 9. An example PowerPoint document.
We conducted an error analysis on the results of Perceptron. We
found that the errors fell into three categories. (1) About one third
of the errors were related to ‘hard cases". In these documents, the
layouts of the first pages were difficult to understand, even for
humans. Figures 8 and 9 show examples. (2) Nearly one fourth of
the errors were from the documents which do not have true titles
but only contain bullets. Since we conduct extraction from the top
regions, it is difficult to get rid of these errors with the current
approach. (3) Confusions between main titles and subtitles were
another type of error. Since we only labeled the main titles as
titles, the extractions of both titles were considered incorrect. This
type of error does little harm to document processing like search,
however.
6.5 Comparison between Models
To compare the performance of different machine learning models,
we conducted another experiment. Again, we perform 4-fold cross
validation on the first data set (MS). Tables 7 and 8 show the results
of all the four models.
It turns out that Perceptron and PMM perform the best, followed
by MEMM, and ME performs the worst. In general, the
Markovian models perform better than or as well as their classifier
counterparts. This seems to be because the Markovian models are
trained globally, while the classifiers are trained locally. The
Perceptron based models perform better than the ME based
counterparts. This seems to be because the Perceptron based
models are created to make better classifications, while ME
models are constructed for better prediction.
Table 7. Comparison between different learning models for
title extraction with Word
Model Precision Recall F1
Perceptron 0.810 0.837 0.823
MEMM 0.797 0.824 0.810
PMM 0.827 0.823 0.825
ME 0.801 0.621 0.699
Table 8. Comparison between different learning models for
title extraction with PowerPoint
Model Precision Recall F1
Perceptron 0.875 0.895 0.885
MEMM 0.841 0.861 0.851
PMM 0.873 0.896 0.885
ME 0.753 0.766 0.759
6.6 Domain Adaptation
We apply the model trained with the first data set (MS) to the
second data set (DotCom and DotGov). Tables 9-12 show the
results.
Table 9. Accuracies of title extraction with Word in DotGov
Precision Recall F1
Model Perceptron 0.716 0.759 0.737
Baselines Largest font size 0.549 0.619 0.582
Baselines First line 0.462 0.521 0.490
Table 10. Accuracies of title extraction with PowerPoint in
DotGov
Precision Recall F1
Model Perceptron 0.900 0.906 0.903
Baselines Largest font size 0.871 0.888 0.879
Baselines First line 0.554 0.564 0.559
Table 11. Accuracies of title extraction with Word in DotCom
Precision Recall F1
Model Perceptron 0.832 0.880 0.855
Baselines Largest font size 0.676 0.753 0.712
Baselines First line 0.577 0.643 0.608
Table 12. Performance of PowerPoint document title
extraction in DotCom
Precision Recall F1
Model Perceptron 0.910 0.903 0.907
Baselines Largest font size 0.864 0.886 0.875
Baselines First line 0.570 0.585 0.577
From the results, we see that the models can be adapted to
different domains well. There is almost no drop in accuracy. The
results indicate that the patterns of title formats exist across
different domains, and it is possible to construct a domain
independent model by mainly using formatting information.
6.7 Language Adaptation
We apply the model trained with the data in English (MS) to the
data set in Chinese.
Tables 13-14 show the results.
Table 13. Accuracies of title extraction with Word in Chinese
Precision Recall F1
Model Perceptron 0.817 0.805 0.811
Baselines Largest font size 0.722 0.755 0.738
Baselines First line 0.743 0.777 0.760
Table 14. Accuracies of title extraction with PowerPoint in
Chinese
Precision Recall F1
Model Perceptron 0.766 0.812 0.789
Baselines Largest font size 0.753 0.813 0.782
Baselines First line 0.627 0.676 0.650
We see that the models can be adapted to a different language.
There are only small drops in accuracy. Obviously, the linguistic
features do not work for Chinese, but the effect of not using them
is negligible. The results indicate that the patterns of title formats
exist across different languages.
From the domain adaptation and language adaptation results, we
conclude that the use of formatting information is the key to a
successful extraction from general documents.
6.8 Search with Extracted Titles
We performed experiments on using title extraction for document
retrieval. As a baseline, we employed BM25 without using
extracted titles. The ranking mechanism was as described in
Section 5. The weights were heuristically set. We did not conduct
optimization on the weights.
The evaluation was conducted on a corpus of 1.3 M documents
crawled from the intranet of Microsoft using 100 evaluation
queries obtained from this intranet's search engine query logs. 50
queries were from the most popular set, while the other 50 were
chosen randomly. Users were asked to provide judgments of
the degree of document relevance on a scale of 1 to 5 (1
meaning detrimental, 2 - bad, 3 - fair, 4 - good, and 5 - excellent).
Figure 10 shows the results. In the chart two sets of precision
results were obtained by either considering good or excellent
documents as relevant (left 3 bars with relevance threshold 0.5), or
by considering only excellent documents as relevant (right 3 bars
with relevance threshold 1.0).
[Chart: precision@10, precision@5, and reciprocal rank at relevance thresholds 0.5 and 1.0, for BM25 over the fields anchor, title, and body versus BM25 over anchor, title, body, and extracted title.]
Figure 10. Search ranking results.
Figure 10 shows different document retrieval results with different
ranking functions in terms of precision @10, precision @5 and
reciprocal rank:
• Blue bar - BM25 including the fields body, title (file
property), and anchor text.
• Purple bar - BM25 including the fields body, title (file
property), anchor text, and extracted title.
With the additional field of extracted title included in BM25 the
precision @10 increased from 0.132 to 0.145, or by ~10%. Thus,
it is safe to say that the use of extracted title can indeed improve
the precision of document retrieval.
7. CONCLUSION
In this paper, we have investigated the problem of automatically
extracting titles from general documents. We have tried using a
machine learning approach to address the problem.
Previous work showed that the machine learning approach can
work well for metadata extraction from research papers. In this
paper, we showed that the approach can work for extraction from
general documents as well. Our experimental results indicated that
the machine learning approach can work significantly better than
the baselines in title extraction from Office documents. Previous
work on metadata extraction mainly used linguistic features in
documents, while we mainly used formatting information. It
appeared that using formatting information is a key for
successfully conducting title extraction from general documents.
We tried different machine learning models including Perceptron,
Maximum Entropy, Maximum Entropy Markov Model, and Voted
Perceptron. We found that the performance of the Perceptron
models was the best. We applied models constructed in one
domain to another domain and applied models trained in one
language to another language. We found that the accuracies did
not drop substantially across different domains and across
different languages, indicating that the models were generic. We
also attempted to use the extracted titles in document retrieval. We
observed a significant improvement in document ranking
performance for search when using extracted title information. All
the above investigations were not conducted in previous work, and
through our investigations we verified the generality and the
significance of the title extraction approach.
8. ACKNOWLEDGEMENTS
We thank Chunyu Wei and Bojuan Zhao for their work on data
annotation. We acknowledge Jinzhu Li for his assistance in
conducting the experiments. We thank Ming Zhou, John Chen,
Jun Xu, and the anonymous reviewers of JCDL'05 for their
valuable comments on this paper.
9. REFERENCES
[1] Berger, A. L., Della Pietra, S. A., and Della Pietra, V. J. A
maximum entropy approach to natural language processing.
Computational Linguistics, 22:39-71, 1996.
[2] Collins, M. Discriminative training methods for hidden
markov models: theory and experiments with perceptron
algorithms. In Proceedings of Conference on Empirical
Methods in Natural Language Processing, 1-8, 2002.
[3] Cortes, C. and Vapnik, V. Support-vector networks. Machine
Learning, 20:273-297, 1995.
[4] Chieu, H. L. and Ng, H. T. A maximum entropy approach to
information extraction from semi-structured and free text. In
Proceedings of the Eighteenth National Conference on
Artificial Intelligence, 768-791, 2002.
[5] Evans, D. K., Klavans, J. L., and McKeown, K. R. Columbia
newsblaster: multilingual news summarization on the Web.
In Proceedings of Human Language Technology conference /
North American chapter of the Association for
Computational Linguistics annual meeting, 1-4, 2004.
[6] Ghahramani, Z. and Jordan, M. I. Factorial hidden markov
models. Machine Learning, 29:245-273, 1997.
[7] Gheel, J. and Anderson, T. Data and metadata for finding and
reminding, In Proceedings of the 1999 International
Conference on Information Visualization, 446-451,1999.
[8] Giles, C. L., Petinot, Y., Teregowda P. B., Han, H.,
Lawrence, S., Rangaswamy, A., and Pal, N. eBizSearch: a
niche search engine for e-Business. In Proceedings of the
26th Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval,
413-414, 2003.
[9] Giuffrida, G., Shek, E. C., and Yang, J. Knowledge-based
metadata extraction from PostScript files. In Proceedings of
the Fifth ACM Conference on Digital Libraries, 77-84, 2000.
[10] Han, H., Giles, C. L., Manavoglu, E., Zha, H., Zhang, Z., and
Fox, E. A. Automatic document metadata extraction using
support vector machines. In Proceedings of the Third
ACM/IEEE-CS Joint Conference on Digital Libraries, 37-48,
2003.
[11] Kobayashi, M., and Takeda, K. Information retrieval on the
Web. ACM Computing Surveys, 32:144-173, 2000.
[12] Lafferty, J., McCallum, A., and Pereira, F. Conditional
random fields: probabilistic models for segmenting and
labeling sequence data. In Proceedings of the Eighteenth
International Conference on Machine Learning, 282-289,
2001.
[13] Li, Y., Zaragoza, H., Herbrich, R., Shawe-Taylor J., and
Kandola, J. S. The perceptron algorithm with uneven margins.
In Proceedings of the Nineteenth International Conference
on Machine Learning, 379-386, 2002.
[14] Liddy, E. D., Sutton, S., Allen, E., Harwell, S., Corieri, S.,
Yilmazel, O., Ozgencil, N. E., Diekema, A., McCracken, N.,
and Silverstein, J. Automatic Metadata generation &
evaluation. In Proceedings of the 25th Annual International
ACM SIGIR Conference on Research and Development in
Information Retrieval, 401-402, 2002.
[15] Littlefield, A. Effective enterprise information retrieval
across new content formats. In Proceedings of the Seventh
Search Engine Conference,
http://www.infonortics.com/searchengines/sh02/02prog.html,
2002.
[16] Mao, S., Kim, J. W., and Thoma, G. R. A dynamic feature
generation system for automated metadata extraction in
preservation of digital materials. In Proceedings of the First
International Workshop on Document Image Analysis for
Libraries, 225-232, 2004.
[17] McCallum, A., Freitag, D., and Pereira, F. Maximum entropy
markov models for information extraction and segmentation.
In Proceedings of the Seventeenth International Conference
on Machine Learning, 591-598, 2000.
[18] Murphy, L. D. Digital document metadata in organizations:
roles, analytical approaches, and future research directions.
In Proceedings of the Thirty-First Annual Hawaii
International Conference on System Sciences, 267-276, 1998.
[19] Pinto, D., McCallum, A., Wei, X., and Croft, W. B. Table
extraction using conditional random fields. In Proceedings of
the 26th Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval,
235-242, 2003.
[20] Ratnaparkhi, A. Unsupervised statistical models for
prepositional phrase attachment. In Proceedings of the
Seventeenth International Conference on Computational
Linguistics. 1079-1085, 1998.
[21] Robertson, S., Zaragoza, H., and Taylor, M. Simple BM25
extension to multiple weighted fields, In Proceedings of
ACM Thirteenth Conference on Information and Knowledge
Management, 42-49, 2004.
[22] Yi, J. and Sundaresan, N. Metadata based Web mining for
relevance, In Proceedings of the 2000 International
Symposium on Database Engineering & Applications,
113-121, 2000.
[23] Yilmazel, O., Finneran, C. M., and Liddy, E. D. MetaExtract:
An NLP system to automatically assign metadata. In
Proceedings of the 2004 Joint ACM/IEEE Conference on
Digital Libraries, 241-242, 2004.
[24] Zhang, J. and Dimitroff, A. Internet search engines' response
to metadata Dublin Core implementation. Journal of
Information Science, 30:310-320, 2004.
[25] Zhang, L., Pan, Y., and Zhang, T. Recognising and using
named entities: focused named entity recognition using
machine learning. In Proceedings of the 27th Annual
International ACM SIGIR Conference on Research and
Development in Information Retrieval, 281-288, 2004.
[26] http://dublincore.org/groups/corporate/Seattle/
| machine learn;formatting information;metada of document;title extraction;language independence;linguistic feature;usefulness of extracted title;extracted title usefulness;genre;classifier;automatic title extraction;information extraction;model generality;metada extraction;search;document retrieval;comparison between model;generality of model |
train_H-79 | Beyond PageRank: Machine Learning for Static Ranking | Since the publication of Brin and Page"s paper on PageRank, many in the Web community have depended on PageRank for the static (query-independent) ordering of Web pages. We show that we can significantly outperform PageRank using features that are independent of the link structure of the Web. We gain a further boost in accuracy by using data on the frequency at which users visit Web pages. We use RankNet, a ranking machine learning algorithm, to combine these and other static features based on anchor text and domain characteristics. The resulting model achieves a static ranking pairwise accuracy of 67.3% (vs. 56.7% for PageRank or 50% for random). | 1. INTRODUCTION
Over the past decade, the Web has grown exponentially in size.
Unfortunately, this growth has not been isolated to good-quality
pages. The number of incorrect, spamming, and malicious (e.g.,
phishing) sites has also grown rapidly. The sheer number of both
good and bad pages on the Web has led to an increasing reliance
on search engines for the discovery of useful information. Users
rely on search engines not only to return pages related to their
search query, but also to separate the good from the bad, and
order results so that the best pages are suggested first.
To date, most work on Web page ranking has focused on
improving the ordering of the results returned to the user
(querydependent ranking, or dynamic ranking). However, having a good
query-independent ranking (static ranking) is also crucially
important for a search engine. A good static ranking algorithm
provides numerous benefits:
• Relevance: The static rank of a page provides a general
indicator to the overall quality of the page. This is a
useful input to the dynamic ranking algorithm.
• Efficiency: Typically, the search engine's index is
ordered by static rank. By traversing the index from
high-quality to low-quality pages, the dynamic ranker may
abort the search when it determines that no later page
will have as high of a dynamic rank as those already
found. The more accurate the static rank, the better this
early-stopping ability, and hence the quicker the search
engine may respond to queries.
• Crawl Priority: The Web grows and changes as quickly
as search engines can crawl it. Search engines need a way
to prioritize their crawl: to determine which pages to
recrawl, how frequently, and how often to seek out new
pages. Among other factors, the static rank of a page is
used to determine this prioritization. A better static rank
thus provides the engine with a higher quality, more
up-to-date index.
Google is often regarded as the first commercially successful
search engine. Their ranking was originally based on the
PageRank algorithm [5][27]. Due to this (and possibly due to
Google"s promotion of PageRank to the public), PageRank is
widely regarded as the best method for the static ranking of Web
pages.
Though PageRank has historically been thought to perform quite
well, there has yet been little academic evidence to support this
claim. Even worse, there has recently been work showing that
PageRank may not perform any better than other simple measures
on certain tasks. Upstill et al. have found that for the task of
finding home pages, the number of pages linking to a page and the
type of URL were as, or more, effective than PageRank [32]. They
found similar results for the task of finding high quality
companies [31]. PageRank has also been used in systems for
TREC's very large collection and Web track competitions,
but with much less success than had been expected [17]. Finally,
Amento et al. [1] found that simple features, such as the number
of pages on a site, performed as well as PageRank.
Despite these, the general belief remains among many, both
academic and in the public, that PageRank is an essential factor
for a good static rank. Failing this, it is still assumed that using the
link structure is crucial, in the form of the number of inlinks or the
amount of anchor text.
In this paper, we show there are a number of simple URL- or
page-based features that significantly outperform PageRank (for the
purposes of statically ranking Web pages) despite ignoring the
structure of the Web. We combine these and other static features
using machine learning to achieve a ranking system that is
significantly better than PageRank (in pairwise agreement with
human labels).
A machine learning approach for static ranking has other
advantages besides the quality of the ranking. Because the
measure consists of many features, it is harder for malicious users
to manipulate it (i.e., to raise their page's static rank to an
undeserved level through questionable techniques, also known as
Web spamming). This is particularly true if the feature set is not
known. In contrast, a single measure like PageRank can be easier
to manipulate because spammers need only concentrate on one
goal: how to cause more pages to point to their page. With an
algorithm that learns, a feature that becomes unusable due to
spammer manipulation will simply be reduced or removed from
the final computation of rank. This flexibility allows a ranking
system to rapidly react to new spamming techniques.
A machine learning approach to static ranking is also able to take
advantage of any advances in the machine learning field. For
example, recent work on adversarial classification [12] suggests
that it may be possible to explicitly model the Web page
spammer"s (the adversary) actions, adjusting the ranking model in
advance of the spammer"s attempts to circumvent it. Another
example is the elimination of outliers in constructing the model,
which helps reduce the effect that unique sites may have on the
overall quality of the static rank. By moving static ranking to a
machine learning framework, we not only gain in accuracy, but
also gain in the ability to react to spammers' actions, to rapidly
add new features to the ranking algorithm, and to leverage
advances in the rapidly growing field of machine learning.
Finally, we believe there will be significant advantages to using
this technique for other domains, such as searching a local hard
drive or a corporation's intranet. These are domains where the
link structure is particularly weak (or non-existent), but there are
other domain-specific features that could be just as powerful. For
example, the author of an intranet page and his/her position in the
organization (e.g., CEO, manager, or developer) could provide
significant clues as to the importance of that page. A machine
learning approach thus allows rapid development of a good static
algorithm in new domains.
This paper's contribution is a systematic study of static features,
including PageRank, for the purposes of (statically) ranking Web
pages. Previous studies on PageRank typically used subsets of the
Web that are significantly smaller (e.g., the TREC VLC2 corpus,
used by many, contains only 19 million pages). Also, the
performance of PageRank and other static features has typically
been evaluated in the context of a complete system for dynamic
ranking, or for other tasks such as question answering. In contrast,
we explore the use of PageRank and other features for the direct
task of statically ranking Web pages.
We first briefly describe the PageRank algorithm. In Section 3 we
introduce RankNet, the machine learning technique used to
combine static features into a final ranking. Section 4 describes
the static features. The heart of the paper is in Section 5, which
presents our experiments and results. We conclude with a
discussion of related and future work.
2. PAGERANK
The basic idea behind PageRank is simple: a link from a Web
page to another can be seen as an endorsement of that page. In
general, links are made by people. As such, they are indicative of
the quality of the pages to which they point - when creating a
page, an author presumably chooses to link to pages deemed to be
of good quality. We can take advantage of this linkage
information to order Web pages according to their perceived
quality.
Imagine a Web surfer who jumps from Web page to Web page,
choosing with uniform probability which link to follow at each
step. In order to reduce the effect of dead-ends or endless cycles
the surfer will occasionally jump to a random page with some
small probability α, or when on a page with no out-links. If
averaged over a sufficient number of steps, the probability the
surfer is on page j at some point in time is given by the formula:
$$P(j) = \frac{(1-\alpha)}{N} + \alpha \sum_{i \in B_j} \frac{P(i)}{|F_i|} \qquad (1)$$
where $F_i$ is the set of pages that page $i$ links to, and $B_j$ is the set of
pages that link to page $j$. The PageRank score for node $j$ is defined
as this probability: $PR(j) = P(j)$. Because equation (1) is recursive,
it must be iteratively evaluated until P(j) converges (typically, the
initial distribution for P(j) is uniform). The intuition is, because a
random surfer would end up at the page more frequently, it is
likely a better page. An alternative view for equation (1) is that
each page is assigned a quality, P(j). A page gives an equal
share of its quality to each page it points to.
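A minimal power-iteration sketch of equation (1) as reconstructed above; alpha is set to the standard damping value quoted later in the paper, dangling pages are assumed to jump uniformly at random, and a fixed iteration count stands in for a convergence test.

```python
def pagerank(outlinks, alpha=0.85, iterations=50):
    """outlinks: dict mapping each page to the list of pages it links to.
    Returns P(j), the stationary probability that the random surfer is on
    page j, iterated a fixed number of times for simplicity."""
    pages = set(outlinks)
    for links in outlinks.values():
        pages.update(links)
    N = len(pages)
    p = {page: 1.0 / N for page in pages}
    for _ in range(iterations):
        nxt = {page: (1 - alpha) / N for page in pages}
        for i in pages:
            links = outlinks.get(i, [])
            if links:
                share = alpha * p[i] / len(links)
                for j in links:
                    nxt[j] += share
            else:
                # Page with no out-links: the surfer jumps to a random page.
                for j in pages:
                    nxt[j] += alpha * p[i] / N
        p = nxt
    return p
```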
PageRank is computationally expensive. Our collection of 5
billion pages contains approximately 370 billion links. Computing
PageRank requires iterating over these billions of links multiple
times (until convergence). It requires large amounts of memory
(or very smart caching schemes that slow the computation down
even further), and if spread across multiple machines, requires
significant communication between them. Though much work has
been done on optimizing the PageRank computation (see e.g.,
[25] and [6]), it remains a relatively slow, computationally
expensive property to compute.
3. RANKNET
Much work in machine learning has been done on the problems of
classification and regression. Let X={xi} be a collection of feature
vectors (typically, a feature is any real valued number), and
Y={yi} be a collection of associated classes, where yi is the class
of the object described by feature vector xi. The classification
problem is to learn a function f that maps yi=f(xi), for all i. When
yi is real-valued as well, this is called regression.
Static ranking can be seen as a regression problem. If we let xi
represent features of page i, and yi be a value (say, the rank) for
each page, we could learn a regression function that mapped each
page"s features to their rank. However, this over-constrains the
problem we wish to solve. All we really care about is the order of
the pages, not the actual value assigned to them.
Recent work on this ranking problem [7][13][18] directly
attempts to optimize the ordering of the objects, rather than the
value assigned to them. For these, let Z={<i,j>} be a collection of
pairs of items, where item i should be assigned a higher value than
item j. The goal of the ranking problem, then, is to learn a
function f such that,
$$\forall \langle i, j \rangle \in Z, \; f(x_i) > f(x_j)$$
Note that, as with learning a regression function, the result of this
process is a function (f) that maps feature vectors to real values.
This function can still be applied anywhere that a
regressionlearned function could be applied. The only difference is the
technique used to learn the function. By directly optimizing the
ordering of objects, these methods are able to learn a function that
does a better job of ranking than do regression techniques.
We used RankNet [7], one of the aforementioned techniques for
learning ranking functions, to learn our static rank function.
RankNet is a straightforward modification to the standard neural
network back-prop algorithm. As with back-prop, RankNet
attempts to minimize the value of a cost function by adjusting
each weight in the network according to the gradient of the cost
function with respect to that weight. The difference is that, while a
typical neural network cost function is based on the difference
between the network output and the desired output, the RankNet
cost function is based on the difference between a pair of network
outputs. That is, for each pair of feature vectors <i,j> in the
training set, RankNet computes the network outputs oi and oj.
Since vector i is supposed to be ranked higher than vector j, the
larger is oj-oi, the larger the cost.
RankNet also allows the pairs in Z to be weighted with a
confidence (posed as the probability that the pair satisfies the
ordering induced by the ranking function). In this paper, we used
a probability of one for all pairs. In the next section, we will
discuss the features used in our feature vectors, xi.
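As a sketch of the pairwise cost just described (with target probability one for every pair, as used here), the quantity that back-propagation pushes through the network for a training pair <i, j> is shown below; the full RankNet algorithm [7] then propagates this gradient through a neural network, which is not reproduced here, and the expression is numerically naive (no overflow guard).

```python
import math

def ranknet_pair_cost(o_i, o_j):
    """Cross-entropy cost for a pair where item i should outrank item j.
    With target probability 1 this reduces to log(1 + exp(-(o_i - o_j))):
    the more o_j exceeds o_i, the larger the cost."""
    diff = o_i - o_j
    cost = math.log1p(math.exp(-diff))
    grad_wrt_diff = -1.0 / (1.0 + math.exp(diff))  # d cost / d (o_i - o_j)
    return cost, grad_wrt_diff
```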
4. FEATURES
To apply RankNet (or other machine learning techniques) to the
ranking problem, we needed to extract a set of features from each
page. We divided our feature set into four, mutually exclusive,
categories: page-level (Page), domain-level (Domain), anchor text
and inlinks (Anchor), and popularity (Popularity). We also
optionally used the PageRank of a page as a feature. Below, we
describe each of these feature categories in more detail.
PageRank
We computed PageRank on a Web graph of 5 billion crawled
pages (and 20 billion known URLs linked to by these pages).
This represents a significant portion of the Web, and is
approximately the same number of pages as are used by
Google, Yahoo, and MSN for their search engines.
Because PageRank is a graph-based algorithm, it is important
that it be run on as large a subset of the Web as possible. Most
previous studies on PageRank used subsets of the Web that are
significantly smaller (e.g. the TREC VLC2 corpus, used by
many, contains only 19 million pages).
We computed PageRank using the standard value of 0.85 for α.
Popularity
Another feature we used is the actual popularity of a Web page,
measured as the number of times that it has been visited by
users over some period of time. We have access to such data
from users who have installed the MSN toolbar and have opted
to provide it to MSN. The data is aggregated into a count, for
each Web page, of the number of users who viewed that page.
Though popularity data is generally unavailable, there are two
other sources for it. The first is from proxy logs. For example, a
university that requires its students to use a proxy has a record
of all the pages they have visited while on campus.
Unfortunately, proxy data is quite biased and relatively small.
Another source, internal to search engines, are records of which
results their users clicked on. Such data was used by the search
engine Direct Hit, and has recently been explored for
dynamic ranking purposes [20]. An advantage of the toolbar
data over this is that it contains information about URL visits
that are not just the result of a search.
The raw popularity is processed into a number of features such
as the number of times a page was viewed and the number of
times any page in the domain was viewed. More details are
provided in section 5.5.
Anchor text and inlinks
These features are based on the information associated with
links to the page in question. It includes features such as the
total amount of text in links pointing to the page (anchor
text), the number of unique words in that text, etc.
Page
This category consists of features which may be determined by
looking at the page (and its URL) alone. We used only eight
simple features such as the number of words in the body, the
frequency of the most common term, etc.
Domain
This category contains features that are computed as averages
across all pages in the domain. For example, the average
number of outlinks on any page and the average PageRank.
Many of these features have been used by others for ranking Web
pages, particularly the anchor and page features. As mentioned,
the evaluation is typically for dynamic ranking, and we wish to
evaluate the use of them for static ranking. Also, to our
knowledge, this is the first study on the use of actual page
visitation popularity for static ranking. The closest similar work is
on using click-through behavior (that is, which search engine
results the users click on) to affect dynamic ranking (see e.g.,
[20]).
Because we use a wide variety of features to come up with a static
ranking, we refer to this as fRank (for feature-based ranking).
fRank uses RankNet and the set of features described in this
section to learn a ranking function for Web pages. Unless
otherwise specified, fRank was trained with all of the features.
5. EXPERIMENTS
In this section, we will demonstrate that we can outperform
PageRank by applying machine learning to a straightforward set
of features. Before the results, we first discuss the data, the
performance metric, and the training method.
5.1 Data
In order to evaluate the quality of a static ranking, we needed a
gold standard defining the correct ordering for a set of pages.
For this, we employed a dataset which contains human judgments
for 28000 queries. For each query, a number of results are
manually assigned a rating, from 0 to 4, by human judges. The
rating is meant to be a measure of how relevant the result is for
the query, where 0 means poor and 4 means excellent. There
are approximately 500k judgments in all, or an average of 18
ratings per query.
The queries are selected by randomly choosing queries from
among those issued to the MSN search engine. The probability
that a query is selected is proportional to its frequency among all
of the queries. As a result, common queries are more likely to be
judged than uncommon queries. As an example of how diverse
the queries are, the first four queries in the training set are chef
schools, chicagoland speedway, eagles fan club, and
Turkish culture. The documents selected for judging are those
that we expected would, on average, be reasonably relevant (for
example, the top ten documents returned by MSN"s search
engine). This provides significantly more information than
randomly selecting documents on the Web, the vast majority of
which would be irrelevant to a given query.
Because of this process, the judged pages tend to be of higher
quality than the average page on the Web, and tend to be pages
that will be returned for common search queries. This bias is good
when evaluating the quality of static ranking for the purposes of
index ordering and returning relevant documents. This is because
the most important portion of the index to be well-ordered and
relevant is the portion that is frequently returned for search
queries. Because of this bias, however, the results in this paper are
not applicable to crawl prioritization. In order to obtain
experimental results on crawl prioritization, we would need
ratings on a random sample of Web pages.
To convert the data from query-dependent to query-independent,
we simply removed the query, taking the maximum over
judgments for a URL that appears in more than one query. The
reasoning behind this is that a page that is relevant for some query
and irrelevant for another is probably a decent page and should
have a high static rank. Because we evaluated the pages on
queries that occur frequently, our data indicates the correct index
ordering, and assigns high value to pages that are likely to be
relevant to a common query.
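A minimal sketch of this aggregation step, assuming the judgments are available as (query, url, rating) triples (the field layout and the function name are illustrative, not the actual data format used here):

    def query_independent_labels(judgments):
        # judgments: iterable of (query, url, rating) triples.
        # The query is dropped; each URL keeps its maximum rating over all
        # queries in which it was judged.
        labels = {}
        for _query, url, rating in judgments:
            labels[url] = max(rating, labels.get(url, rating))
        return labels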
We randomly assigned queries to a training, validation, or test set,
such that they contained 84%, 8%, and 8% of the queries,
respectively. Each set contains all of the ratings for a given query,
and no query appears in more than one set. The training set was
used to train fRank. The validation set was used to select the
model that had the highest performance. The test set was used for
the final results.
This data gives us a query-independent ordering of pages. The
goal for a static ranking algorithm will be to reproduce this
ordering as closely as possible. In the next section, we describe
the measure we used to evaluate this.
5.2 Measure
We chose to use pairwise accuracy to evaluate the quality of a
static ranking. The pairwise accuracy is the fraction of time that
the ranking algorithm and human judges agree on the ordering of
a pair of Web pages.
If S(x) is the static ranking assigned to page x, and H(x) is the
human judgment of relevance for x, then consider the following
sets:
Hp = {<x,y> : H(x) > H(y)}  and  Sp = {<x,y> : S(x) > S(y)}
The pairwise accuracy is the portion of Hp that is also contained
in Sp:
pairwise accuracy = |Hp ∩ Sp| / |Hp|
This measure was chosen for two reasons. First, the discrete
human judgments provide only a partial ordering over Web pages,
making it difficult to apply a measure such as the Spearman rank
order correlation coefficient (in the pairwise accuracy measure, a
pair of documents with the same human judgment does not affect
the score). Second, the pairwise accuracy has an intuitive
meaning: it is the fraction of pairs of documents that, when the
humans claim one is better than the other, the static rank
algorithm orders them correctly.
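A straightforward O(n^2) sketch of this measure; static_rank and human_rating are assumed to map pages to their scores (names are placeholders), and pairs with equal human judgments are skipped, as stated above:

    def pairwise_accuracy(static_rank, human_rating):
        pages = list(human_rating)
        agree, total = 0, 0
        for i, x in enumerate(pages):
            for y in pages[i + 1:]:
                if human_rating[x] == human_rating[y]:
                    continue  # ties in the human judgment do not affect the score
                # hi is the page the human judges prefer in this pair
                hi, lo = (x, y) if human_rating[x] > human_rating[y] else (y, x)
                total += 1
                if static_rank[hi] > static_rank[lo]:
                    agree += 1
        return agree / total if total else 0.0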
5.3 Method
We trained fRank (a RankNet based neural network) using the
following parameters. We used a fully connected two-layer network.
The hidden layer had 10 hidden nodes. The input weights to this
layer were all initialized to be zero. The output layer (just a
single node) weights were initialized using a uniform random
distribution in the range [-0.1, 0.1]. We used tanh as the transfer
function from the inputs to the hidden layer, and a linear function
from the hidden layer to the output. The cost function is the
pairwise cross entropy cost function as discussed in section 3.
The features in the training set were normalized to have zero mean
and unit standard deviation. The same linear transformation was
then applied to the features in the validation and test sets.
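For illustration, the forward pass of the network just described (ten tanh hidden units, one linear output node, zero-initialised input weights and uniformly initialised output weights) could look as follows; the feature count and all variable names are placeholders, not the code actually used:

    import numpy as np

    rng = np.random.default_rng(0)
    n_features, n_hidden = 20, 10          # n_features depends on the feature sets used
    W1 = np.zeros((n_hidden, n_features))  # input-to-hidden weights, initialised to zero
    b1 = np.zeros(n_hidden)
    W2 = rng.uniform(-0.1, 0.1, n_hidden)  # hidden-to-output weights in [-0.1, 0.1]
    b2 = 0.0

    def score(x):
        # x: feature vector normalised to zero mean and unit standard deviation
        h = np.tanh(W1 @ x + b1)            # hidden layer (tanh transfer function)
        return float(W2 @ h + b2)           # single linear output node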
For training, we presented the network with 5 million pairings of
pages, where one page had a higher rating than the other. The
pairings were chosen uniformly at random (with replacement)
from all possible pairings. When forming the pairs, we ignored the
magnitude of the difference between the ratings (the rating spread)
for the two URLs. Hence, the weight for each pair was constant
(one), and the probability of a pair being selected was
independent of its rating spread.
We trained the network for 30 epochs. On each epoch, the
training pairs were randomly shuffled. The initial training rate was
0.001. At each epoch, we checked the error on the training set. If
the error had increased, then we decreased the training rate, under
the hypothesis that the network had probably overshot. The
training rate at each epoch was thus set to:
Training rate = κ / (1 + ε)
Where κ is the initial rate (0.001), and ε is the number of times
the training set error has increased. After each epoch, we
measured the performance of the neural network on the validation
set, using 1 million pairs (chosen randomly with replacement).
The network with the highest pairwise accuracy on the validation
set was selected, and then tested on the test set. We report the
pairwise accuracy on the test set, calculated using all possible
pairs.
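The training-rate schedule described above amounts to the following loop; run_epoch and epoch_train_error are hypothetical helpers standing in for one back-prop pass over the shuffled pairs and the subsequent error check:

    kappa = 0.001          # initial training rate
    increases = 0          # epsilon: number of times the training error went up
    prev_error = float("inf")

    for epoch in range(30):
        rate = kappa / (1 + increases)
        run_epoch(rate)                    # shuffle the pairs, back-prop at this rate
        error = epoch_train_error()
        if error > prev_error:
            increases += 1                 # the network probably overshot; decay the rate
        prev_error = error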
These parameters were determined and fixed before the static rank
experiments in this paper. In particular, the choice of initial
training rate, number of epochs, and training rate decay function
were taken directly from Burges et al [7].
Though we had the option of preprocessing any of the features
before they were input to the neural network, we refrained from
doing so on most of them. The only exception was the popularity
features. As with most Web phenomena, we found that the
distribution of site popularity is Zipfian. To reduce the dynamic
range, and hopefully make the feature more useful, we presented
the network with both the unpreprocessed values and the
logarithm of the popularity features (as with the others, the
logarithmic feature values were also normalized to have zero
mean and unit standard deviation).
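As a rough sketch of that preprocessing (assuming the counts are held in a NumPy array; the added 1 inside the logarithm, used to avoid log(0), is our assumption, not stated in the paper):

    import numpy as np

    def popularity_columns(raw_counts):
        # raw_counts: per-page visit counts (Zipfian), shape (n_pages,)
        logged = np.log(1.0 + raw_counts)        # compress the dynamic range
        features = np.stack([raw_counts, logged], axis=1)
        mean, std = features.mean(axis=0), features.std(axis=0)
        std[std == 0] = 1.0                      # guard against constant columns
        return (features - mean) / std           # zero mean, unit standard deviation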
Applying fRank to a document is computationally efficient, taking
time that is only linear in the number of input features; it is thus
within a constant factor of other simple machine learning methods
such as naïve Bayes. In our experiments, computing the fRank for
all five billion Web pages was approximately 100 times faster
than computing the PageRank for the same set.
5.4 Results
As Table 1 shows, fRank significantly outperforms PageRank for
the purposes of static ranking. With a pairwise accuracy of 67.4%,
fRank more than doubles the accuracy of PageRank (relative to
the baseline of 50%, which is the accuracy that would be achieved
by a random ordering of Web pages). Note that one of fRank's
input features is the PageRank of the page, so we would expect it
to perform no worse than PageRank. The significant increase in
accuracy implies that the other features (anchor, popularity, etc.)
do in fact contain useful information regarding the overall quality
of a page.
Table 1: Basic Results
Technique Accuracy (%)
None (Baseline) 50.00
PageRank 56.70
fRank 67.43
There are a number of decisions that go into the computation of
PageRank, such as how to deal with pages that have no outlinks,
the choice of α, numeric precision, convergence threshold, etc.
We were able to obtain a computation of PageRank from a
completely independent implementation (provided by Marc
Najork) that varied somewhat in these parameters. It achieved a
pairwise accuracy of 56.52%, nearly identical to that obtained by
our implementation. We thus concluded that the quality of the
PageRank is not sensitive to these minor variations in algorithm,
nor was PageRank"s low accuracy due to problems with our
implementation of it.
We also wanted to find how well each feature set performed. To
answer this, for each feature set, we trained and tested fRank
using only that set of features. The results are shown in Table 2.
As can be seen, every single feature set individually outperformed
PageRank on this test. Perhaps the most interesting result is that
the Page-level features had the highest performance out of all the
feature sets. This is surprising because these are features that do
not depend on the overall graph structure of the Web, nor even on
what pages point to a given page. This is contrary to the common
belief that the Web graph structure is the key to finding a good
static ranking of Web pages.
Table 2: Results for individual feature sets.
Feature Set Accuracy (%)
PageRank 56.70
Popularity 60.82
Anchor 59.09
Page 63.93
Domain 59.03
All Features 67.43
Because we are using a two-layer neural network, the features in
the learned network can interact with each other in interesting,
nonlinear ways. This means that a particular feature that appears
to have little value in isolation could actually be very important
when used in combination with other features. To measure the
final contribution of a feature set, in the context of all the other
features, we performed an ablation study. That is, for each set of
features, we trained a network to contain all of the features except
that set. We then compared the performance of the resulting
network to the performance of the network with all of the features.
Table 3 shows the results of this experiment, where the decrease
in accuracy is the difference in pairwise accuracy between the
network trained with all of the features, and the network missing
the given feature set.
Table 3: Ablation study. Shown is the decrease in accuracy
when we train a network that has all but the given set of
features. The last line shows the effect of removing the
anchor, PageRank, and domain features, hence a model
containing no network or link-based information whatsoever.
Feature Set Decrease in
Accuracy
PageRank 0.18
Popularity 0.78
Anchor 0.47
Page 5.42
Domain 0.10
Anchor, PageRank & Domain 0.60
The results of the ablation study are consistent with the individual
feature set study. Both show that the most important feature set is
the Page-level feature set, and the second most important is the
popularity feature set.
Finally, we wished to see how the performance of fRank
improved as we added features; we wanted to find at what point
adding more feature sets became relatively useless. Beginning
with no features, we greedily added the feature set that improved
performance the most. The results are shown in Table 4. For
example, the fourth line of the table shows that fRank using the
page, popularity, and anchor features outperformed any network
that used the page, popularity, and some other feature set, and that
the performance of this network was 67.25%.
Table 4: fRank performance as feature sets are added. At each
row, the feature set that gave the greatest increase in accuracy
was added to the list of features (i.e., we conducted a greedy
search over feature sets).
Feature Set Accuracy (%)
None 50.00
+Page 63.93
+Popularity 66.83
+Anchor 67.25
+PageRank 67.31
+Domain 67.43
Finally, we present a qualitative comparison of PageRank vs.
fRank. In Table 5 are the top ten URLs returned for PageRank and
for fRank. PageRank"s results are heavily weighted towards
technology sites. It contains two QuickTime URLs (Apple"s video
playback software), as well as Internet Explorer and FireFox
URLs (both of which are Web browsers). fRank, on the other
hand, contains more consumer-oriented sites such as American
Express, Target, Dell, etc. PageRank's bias toward technology can
be explained through two processes. First, there are many pages
with buttons at the bottom suggesting that the site is optimized
for Internet Explorer, or that the visitor needs QuickTime. These
generally link back to, in these examples, the Internet Explorer
and QuickTime download sites. Consequently, PageRank ranks
those pages highly. Though these pages are important, they are
not as important as it may seem by looking at the link structure
alone. One fix for this is to add information about the link to the
PageRank computation, such as the size of the text, whether it was
at the bottom of the page, etc.
The other bias comes from the fact that the population of Web site
authors is different than the population of Web users. Web
authors tend to be technologically-oriented, and thus their linking
behavior reflects those interests. fRank, by knowing the actual
visitation popularity of a site (the popularity feature set), is able to
eliminate some of that bias. It has the ability to depend more on
where actual Web users visit rather than where the Web site
authors have linked.
The results confirm that fRank outperforms PageRank in pairwise
accuracy. The two most important feature sets are the page and
popularity features. This is surprising, as the page features
consisted only of a few (8) simple features. Further experiments
found that, of the page features, those based on the text of the
page (as opposed to the URL) performed the best. In the next
section, we explore the popularity feature in more detail.
5.5 Popularity Data
As mentioned in section 4, our popularity data came from MSN
toolbar users. For privacy reasons, we had access only to an
aggregate count of, for each URL, how many times it was visited
by any toolbar user. This limited the possible features we could
derive from this data. For possible extensions, see section 6.3,
future work.
For each URL in our train and test sets, we provided a feature to
fRank which was how many times it had been visited by a toolbar
user. However, this feature was quite noisy and sparse,
particularly for URLs with query parameters (e.g.,
http://search.msn.com/results.aspx?q=machine+learning&form=QBHP). One
solution was to provide an additional feature which was the
number of times any URL at the given domain was visited by a
toolbar user. Adding this feature dramatically improved the
performance of fRank.
We took this one step further and used the built-in hierarchical
structure of URLs to construct many levels of backoff between the
full URL and the domain. We did this by using the set of features
shown in Table 6.
Table 6: URL functions used to compute the Popularity
feature set.
Function Example
Exact URL cnn.com/2005/tech/wikipedia.html?v=mobile
No Params cnn.com/2005/tech/wikipedia.html
Page wikipedia.html
URL-1 cnn.com/2005/tech
URL-2 cnn.com/2005
…
Domain cnn.com
Domain+1 cnn.com/2005
…
Each URL was assigned one feature for each function shown in
the table. The value of the feature was the count of the number of
times a toolbar user visited a URL, where the function applied to
that URL matches the function applied to the URL in question.
For example, a user"s visit to cnn.com/2005/sports.html would
increment the Domain and Domain+1 features for the URL
cnn.com/2005/tech/wikipedia.html.
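A rough sketch of how such backoff keys could be derived from a URL; the exact parsing rules behind Table 6 are not given here, so the helper below is only an approximation:

    from urllib.parse import urlsplit

    def backoff_keys(url):
        parts = urlsplit(url if "://" in url else "//" + url)
        domain, path = parts.netloc, parts.path.strip("/")
        segments = path.split("/") if path else []
        keys = {
            "exact": url,
            "no_params": domain + ("/" + path if path else ""),
            "page": segments[-1] if segments else domain,
            "domain": domain,
        }
        # URL-1, URL-2, ...: strip trailing path segments; Domain+1, ...: add leading ones
        for i in range(1, len(segments)):
            keys["url-%d" % i] = domain + "/" + "/".join(segments[:-i])
            keys["domain+%d" % i] = domain + "/" + "/".join(segments[:i])
        return keys

A toolbar visit would then increment the popularity count stored under every key produced for the visited URL, and each popularity feature of a candidate URL is the count accumulated under the corresponding key.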
As seen in Table 7, adding the domain counts significantly
improved the quality of the popularity feature, and adding the
numerous backoff functions listed in Table 6 improved the
accuracy even further.
Table 7: Effect of adding backoff to the popularity feature set
Features Accuracy (%)
URL count 58.15
URL and Domain counts 59.31
All backoff functions (Table 6) 60.82
Table 5: Top ten URLs for PageRank vs. fRank
PageRank fRank
google.com google.com
apple.com/quicktime/download yahoo.com
amazon.com americanexpress.com
yahoo.com hp.com
microsoft.com/windows/ie target.com
apple.com/quicktime bestbuy.com
mapquest.com dell.com
ebay.com autotrader.com
mozilla.org/products/firefox dogpile.com
ftc.gov bankofamerica.com
Backing off to subsets of the URL is one technique for dealing
with the sparsity of data. It is also informative to see how the
performance of fRank depends on the amount of popularity data
that we have collected. In Figure 1 we show the performance of
fRank trained with only the popularity feature set vs. the amount
of data we have for the popularity feature set. Each day, we
receive additional popularity data, and as can be seen in the plot,
this increases the performance of fRank. The relation is
logarithmic: doubling the amount of popularity data provides a
constant improvement in pairwise accuracy.
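Taking the trend line fitted in Figure 1 at face value, y ≈ 0.577 ln(x) + 58.283, doubling the number of days of toolbar data adds about 0.577 · ln 2 ≈ 0.4 points of pairwise accuracy for the popularity-only model, regardless of how much data has already been collected.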
In summary, we have found that the popularity features provide a
useful boost to the overall fRank accuracy. Gathering more
popularity data, as well as employing simple backoff strategies,
improve this boost even further.
5.6 Summary of Results
The experiments provide a number of conclusions. First, fRank
performs significantly better than PageRank, even without any
information about the Web graph. Second, the page level and
popularity features were the most significant contributors to
pairwise accuracy. Third, by collecting more popularity data, we
can continue to improve fRank"s performance.
The popularity data provides two benefits to fRank. First, we see
that qualitatively, fRank"s ordering of Web pages has a more
favorable bias than PageRank"s. fRank"s ordering seems to
correspond to what Web users, rather than Web page authors,
prefer. Second, the popularity data is more timely than
PageRank"s link information. The toolbar provides information
about which Web pages people find interesting right now,
whereas links are added to pages more slowly, as authors find the
time and interest.
6. RELATED AND FUTURE WORK
6.1 Improvements to PageRank
Since the original PageRank paper, there has been work on
improving it. Much of that work centers on speeding up and
parallelizing the computation [15][25].
One recognized problem with PageRank is that of topic drift: A
page about dogs will have high PageRank if it is linked to by
many pages that themselves have high rank, regardless of their
topic. In contrast, a search engine user looking for good pages
about dogs would likely prefer to find pages that are pointed to by
many pages that are themselves about dogs. Hence, a link that is
on topic should have higher weight than a link that is not.
Richardson and Domingos"s Query Dependent PageRank [29]
and Haveliwala"s Topic-Sensitive PageRank [16] are two
approaches that tackle this problem.
Other variations to PageRank include differently weighting links
for inter- vs. intra-domain links, adding a backwards step to the
random surfer to simulate the back button on most browsers
[24] and modifying the jump probability (α) [3]. See Langville
and Meyer [23] for a good survey of these, and other
modifications to PageRank.
6.2 Other related work
PageRank is not the only link analysis algorithm used for ranking
Web pages. The most well-known other is HITS [22], which is
used by the Teoma search engine [30]. HITS produces a list of
hubs and authorities, where hubs are pages that point to many
authority pages, and authorities are pages that are pointed to by
many hubs. Previous work has shown HITS to perform
comparably to PageRank [1].
One field of interest is that of static index pruning (see e.g.,
Carmel et al. [8]). Static index pruning methods reduce the size of
the search engine"s index by removing documents that are
unlikely to be returned by a search query. The pruning is typically
done based on the frequency of query terms. Similarly, Pandey
and Olston [28] suggest crawling pages frequently if they are
likely to incorrectly appear (or not appear) as a result of a search.
Similar methods could be incorporated into the static rank (e.g.,
how many frequent queries contain words found on this page).
Others have investigated the effect that PageRank has on the Web
at large [9]. They argue that pages with high PageRank are more
likely to be found by Web users, thus more likely to be linked to,
and thus more likely to maintain a higher PageRank than other
pages. The same may occur for the popularity data. If we increase
the ranking for popular pages, they are more likely to be clicked
on, thus further increasing their popularity. Cho et al. [10] argue
that a more appropriate measure of Web page quality would
depend on not only the current link structure of the Web, but also
on the change in that link structure. The same technique may be
applicable to popularity data: the change in popularity of a page
may be more informative than the absolute popularity.
One interesting related work is that of Ivory and Hearst [19].
Their goal was to build a model of Web sites that are considered
high quality from the perspective of content, structure and
navigation, visual design, functionality, interactivity, and overall
experience. They used over 100 page level features, as well as
features encompassing the performance and structure of the site.
This let them qualitatively describe the qualities of a page that
make it appear attractive (e.g., rare use of italics, at least 9 point
font, …), and (in later work) to build a system that assists novice
Web page authors in creating quality pages by evaluating them
according to these features. The primary differences between this
work and ours are the goal (discovering what constitutes a good
Web page vs. ordering Web pages for the purposes of Web
search), the size of the study (they used a dataset of less than 6000
pages vs. our set of 468,000), and our comparison with PageRank.
Figure 1: Relation between the amount of popularity data and
the performance of the popularity feature set. Note the x-axis
is a logarithmic scale. (Axes: Days of Toolbar Data vs. Pairwise
Accuracy; fitted trend: y = 0.577 Ln(x) + 58.283, R^2 = 0.9822.)
Nevertheless, their work provides insights to additional useful
static features that we could incorporate into fRank in the future.
Recent work on incorporating novel features into dynamic ranking
includes that by Joachims et al. [21], who investigate the use of
implicit feedback from users, in the form of which search engine
results are clicked on. Craswell et al. [11] present a method for
determining the best transformation to apply to query independent
features (such as those used in this paper) for the purposes of
improving dynamic ranking. Other work, such as Boyan et al. [4]
and Bartell et al. [2] apply machine learning for the purposes of
improving the overall relevance of a search engine (i.e., the
dynamic ranking). They do not apply their techniques to the
problem of static ranking.
6.3 Future work
There are many ways in which we would like to extend this work.
First, fRank uses only a small number of features. We believe we
could achieve even more significant results with more features. In
particular the existence, or lack thereof, of certain words could
prove very significant (for instance, under construction
probably signifies a low quality page). Other features could
include the number of images on a page, size of those images,
number of layout elements (tables, divs, and spans), use of style
sheets, conforming to W3C standards (like XHTML 1.0 Strict),
background color of a page, etc.
Many pages are generated dynamically, the contents of which may
depend on parameters in the URL, the time of day, the user
visiting the site, or other variables. For such pages, it may be
useful to apply the techniques found in [26] to form a static
approximation for the purposes of extracting features. The
resulting grammar describing the page could itself be a source of
additional features describing the complexity of the page, such as
how many non-terminal nodes it has, the depth of the grammar
tree, etc.
fRank allows one to specify a confidence in each pairing of
documents. In the future, we will experiment with probabilities
that depend on the difference in human judgments between the
two items in the pair. For example, a pair of documents where one
was rated 4 and the other 0 should have a higher confidence than
a pair of documents rated 3 and 2.
The experiments in this paper are biased toward pages that have
higher than average quality. Also, fRank with all of the features
can only be applied to pages that have already been crawled.
Thus, fRank is primarily useful for index ordering and improving
relevance, not for directing the crawl. We would like to
investigate a machine learning approach for crawl prioritization as
well. It may be that a combination of methods is best: for
example, using PageRank to select the best 5 billion of the 20
billion pages on the Web, then using fRank to order the index and
affect search relevancy.
Another interesting direction for exploration is to incorporate
fRank and page-level features directly into the PageRank
computation itself. Work on biasing the PageRank jump vector
[16], and transition matrix [29], have demonstrated the feasibility
and advantages of such an approach. There is reason to believe
that a direct application of [29], using the fRank of a page for its
relevance, could lead to an improved overall static rank.
Finally, the popularity data can be used in other interesting ways.
The general surfing and searching habits of Web users vary by
time of day. Activity in the morning, daytime, and evening are
often quite different (e.g., reading the news, solving problems,
and accessing entertainment, respectively). We can gain insight
into these differences by using the popularity data, divided into
segments of the day. When a query is issued, we would then use
the popularity data matching the time of query in order to do the
ranking of Web pages. We also plan to explore popularity features
that use more than just the counts of how often a page was visited.
For example, how long users tended to dwell on a page, did they
leave the page by clicking a link or by hitting the back button, etc.
Fox et al. did a study that showed that features such as this can be
valuable for the purposes of dynamic ranking [14]. Finally, the
popularity data could be used as the label rather than as a feature.
Using fRank in this way to predict the popularity of a page may be
useful for the tasks of relevance, efficiency, and crawl priority.
There is also significantly more popularity data than human
labeled data, potentially enabling more complex machine learning
methods, and significantly more features.
7. CONCLUSIONS
A good static ranking is an important component for today"s
search engines and information retrieval systems. We have
demonstrated that PageRank does not provide a very good static
ranking; there are many simple features that individually
outperform PageRank. By combining many static features, fRank
achieves a ranking that has a significantly higher pairwise
accuracy than PageRank alone. A qualitative evaluation of the top
documents shows that fRank is less technology-biased than
PageRank; by using popularity data, it is biased toward pages that
Web users, rather than Web authors, visit. The machine learning
component of fRank gives it the additional benefit of being more
robust against spammers, and allows it to leverage further
developments in the machine learning community in areas such as
adversarial classification. We have only begun to explore the
options, and believe that significant strides can be made in the
area of static ranking by further experimentation with additional
features, other machine learning techniques, and additional
sources of data.
8. ACKNOWLEDGMENTS
Thank you to Marc Najork for providing us with additional
PageRank computations and to Timo Burkard for assistance with
the popularity data. Many thanks to Chris Burges for providing
code and significant support in using and training RankNets. Also, we
thank Susan Dumais and Nick Craswell for their edits and
suggestions.
9. REFERENCES
[1] B. Amento, L. Terveen, and W. Hill. Does authority mean
quality? Predicting expert quality ratings of Web documents.
In Proceedings of the 23rd
Annual International ACM SIGIR
Conference on Research and Development in Information
Retrieval, 2000.
[2] B. Bartell, G. Cottrell, and R. Belew. Automatic combination
of multiple ranked retrieval systems. In Proceedings of the
17th Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval, 1994.
[3] P. Boldi, M. Santini, and S. Vigna. PageRank as a function
of the damping factor. In Proceedings of the International
World Wide Web Conference, May 2005.
[4] J. Boyan, D. Freitag, and T. Joachims. A machine learning
architecture for optimizing web search engines. In AAAI
Workshop on Internet Based Information Systems, August
1996.
[5] S. Brin and L. Page. The anatomy of a large-scale
hypertextual web search engine. In Proceedings of the
Seventh International World Wide Web Conference, Brisbane,
Australia, 1998. Elsevier.
[6] A. Broder, R. Lempel, F. Maghoul, and J. Pederson.
Efficient PageRank approximation via graph aggregation. In
Proceedings of the International World Wide Web
Conference, May 2004.
[7] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N.
Hamilton, G. Hullender. Learning to rank using gradient
descent. In Proceedings of the 22nd
International Conference
on Machine Learning, Bonn, Germany, 2005.
[8] D. Carmel, D. Cohen, R. Fagin, E. Farchi, M. Herscovici, Y.
S. Maarek, and A. Soffer. Static index pruning for
information retrieval systems. In Proceedings of the 24th
Annual International ACM SIGIR Conference on Research
and Development in Information Retrieval, pages 43-50,
New Orleans, Louisiana, USA, September 2001.
[9] J. Cho and S. Roy. Impact of search engines on page
popularity. In Proceedings of the International World Wide
Web Conference, May 2004.
[10]J. Cho, S. Roy, R. Adams. Page Quality: In search of an
unbiased web ranking. In Proceedings of the ACM SIGMOD
2005 Conference. Baltimore, Maryland. June 2005.
[11]N. Craswell, S. Robertson, H. Zaragoza, and M. Taylor.
Relevance weighting for query independent evidence. In
Proceedings of the 28th
Annual Conference on Research and
Development in Information Retrieval (SIGIR), August,
2005.
[12]N. Dalvi, P. Domingos, Mausam, S. Sanghai, D. Verma.
Adversarial Classification. In Proceedings of the Tenth
International Conference on Knowledge Discovery and Data
Mining (pp. 99-108), Seattle, WA, 2004.
[13]O. Dekel, C. Manning, and Y. Singer. Log-linear models for
label-ranking. In Advances in Neural Information Processing
Systems 16. Cambridge, MA: MIT Press, 2003.
[14] S. Fox, K. Karnawat, M. Mydland, S. T. Dumais
and T. White (2005). Evaluating implicit measures to
improve the search experiences. In the ACM Transactions on
Information Systems, 23(2), pp. 147-168. April 2005.
[15]T. Haveliwala. Efficient computation of PageRank. Stanford
University Technical Report, 1999.
[16]T. Haveliwala. Topic-sensitive PageRank. In Proceedings of
the International World Wide Web Conference, May 2002.
[17]D. Hawking and N. Craswell. Very large scale retrieval and
Web search. In D. Harman and E. Voorhees (eds), The
TREC Book. MIT Press.
[18]R. Herbrich, T. Graepel, and K. Obermayer. Support vector
learning for ordinal regression. In Proceedings of the Ninth
International Conference on Artificial Neural Networks, pp.
97-102. 1999.
[19]M. Ivory and M. Hearst. Statistical profiles of highly-rated
Web sites. In Proceedings of the ACM SIGCHI Conference
on Human Factors in Computing Systems, 2002.
[20]T. Joachims. Optimizing search engines using clickthrough
data. In Proceedings of the ACM Conference on Knowledge
Discovery and Data Mining (KDD), 2002.
[21]T. Joachims, L. Granka, B. Pang, H. Hembrooke, and G.
Gay. Accurately Interpreting Clickthrough Data as Implicit
Feedback. In Proceedings of the Conference on Research and
Development in Information Retrieval (SIGIR), 2005.
[22]J. Kleinberg. Authoritative sources in a hyperlinked
environment. Journal of the ACM 46:5, pp. 604-32. 1999.
[23]A. Langville and C. Meyer. Deeper inside PageRank.
Internet Mathematics 1(3):335-380, 2004.
[24]F. Matthieu and M. Bouklit. The effect of the back button in
a random walk: application for PageRank. In Alternate track
papers and posters of the Thirteenth International World
Wide Web Conference, 2004.
[25]F. McSherry. A uniform approach to accelerated PageRank
computation. In Proceedings of the International World
Wide Web Conference, May 2005.
[26]Y. Minamide. Static approximation of dynamically generated
Web pages. In Proceedings of the International World Wide
Web Conference, May 2005.
[27]L. Page, S. Brin, R. Motwani, and T. Winograd. The
PageRank citation ranking: Bringing order to the web.
Technical report, Stanford University, Stanford, CA, 1998.
[28]S. Pandey and C. Olston. User-centric Web crawling. In
Proceedings of the International World Wide Web
Conference, May 2005.
[29]M. Richardson and P. Domingos. The intelligent surfer:
probabilistic combination of link and content information in
PageRank. In Advances in Neural Information Processing
Systems 14, pp. 1441-1448. Cambridge, MA: MIT Press,
2002.
[30]C. Sherman. Teoma vs. Google, Round 2. Available from
World Wide Web (http://dc.internet.com/news/article.php/
1002061), 2002.
[31]T. Upstill, N. Craswell, and D. Hawking. Predicting fame
and fortune: PageRank or indegree? In the Eighth
Australasian Document Computing Symposium. 2003.
[32]T. Upstill, N. Craswell, and D. Hawking. Query-independent
evidence in home page finding. In ACM Transactions on
Information Systems. 2003.
| ranknet;relevance;static rank;visitation popularity;search engine;pagerank;regression;adversarial classification;feature-based ranking;information retrieval;machine learning;static ranking
train_H-81 | Distance Measures for MPEG-7-based Retrieval | In visual information retrieval the careful choice of suitable proximity measures is a crucial success factor. The evaluation presented in this paper aims at showing that the distance measures suggested by the MPEG-7 group for the visual descriptors can be beaten by general-purpose measures. Eight visual MPEG-7 descriptors were selected and 38 distance measures implemented. Three media collections were created and assessed, performance indicators developed and more than 22500 tests performed. Additionally, a quantisation model was developed to be able to use predicate-based distance measures on continuous data as well. The evaluation shows that the distance measures recommended in the MPEG-7-standard are among the best but that other measures perform even better. | 1. INTRODUCTION
The MPEG-7 standard defines - among others - a set of
descriptors for visual media. Each descriptor consists of a feature
extraction mechanism, a description (in binary and XML format)
and guidelines that define how to apply the descriptor on different
kinds of media (e.g. on temporal media). The MPEG-7 descriptors
have been carefully designed to meet - partially
complementary - requirements of different application domains: archival, browsing,
retrieval, etc. [9]. In the following, we will exclusively deal with
the visual MPEG-7 descriptors in the context of media retrieval.
The visual MPEG-7 descriptors fall in five groups: colour,
texture, shape, motion and others (e.g. face description) and sum
up to 16 basic descriptors. For retrieval applications, a rule for
each descriptor is mandatory that defines how to measure the
similarity of two descriptions. Common rules are distance
functions, like the Euclidean distance and the Mahalanobis
distance. Unfortunately, the MPEG-7 standard does not include
distance measures in the normative part, because it was not
designed to be (and should not exclusively understood to be)
retrieval-specific. However, the MPEG-7 authors give
recommendations, which distance measure to use on a particular
descriptor. These recommendations are based on accurate
knowledge of the descriptors' behaviour and the description
structures.
In the present study a large number of successful distance
measures from different areas (statistics, psychology, medicine,
social and economic sciences, etc.) were implemented and applied
on MPEG-7 data vectors to verify whether or not the
recommended MPEG-7 distance measures are really the best for
any reasonable class of media objects. From the MPEG-7 tests
and the recommendations it does not become clear how many and
which distance measures have been tested on the visual
descriptors and the MPEG-7 test datasets. The hypothesis is that
analytically derived distance measures may be good in general but
only a quantitative analysis is capable to identify the best distance
measure for a specific feature extraction method.
The paper is organised as follows. Section 2 gives a minimum of
background information on the MPEG-7 descriptors and distance
measurement in visual information retrieval (VIR, see [3], [16]).
Section 3 gives an overview over the implemented distance
measures. Section 4 describes the test setup, including the test
data and the implemented evaluation methods. Finally, Section 5
presents the results per descriptor and over all descriptors.
2. BACKGROUND
2.1 MPEG-7: visual descriptors
The visual part of the MPEG-7 standard defines several
descriptors. Not all of them are really descriptors in the sense that
they extract properties from visual media. Some of them are just
structures for descriptor aggregation or localisation. The basic
descriptors are Color Layout, Color Structure, Dominant Color,
Scalable Color, Edge Histogram, Homogeneous Texture, Texture
Browsing, Region-based Shape, Contour-based Shape, Camera
Motion, Parametric Motion and Motion Activity.
Other descriptors are based on low-level descriptors or semantic
information: Group-of-Frames/Group-of-Pictures Color (based on
Scalable Color), Shape 3D (based on 3D mesh information),
Motion Trajectory (based on object segmentation) and Face
Recognition (based on face extraction).
Descriptors for spatiotemporal aggregation and localisation are:
Spatial 2D Coordinates, Grid Layout, Region Locator (spatial),
Time Series, Temporal Interpolation (temporal) and
SpatioTemporal Locator (combined). Finally, other structures
exist for colour spaces, colour quantisation and multiple 2D views
of 3D objects.
These additional structures allow combining the basic descriptors
in multiple ways and on different levels. But they do not change
the characteristics of the extracted information. Consequently,
structures for aggregation and localisation were not considered in
the work described in this paper.
2.2 Similarity measurement on visual data
Generally, similarity measurement on visual information aims at
imitating human visual similarity perception. Unfortunately,
human perception is much more complex than any of the existing
similarity models (it includes perception, recognition and
subjectivity).
The common approach in visual information retrieval is
measuring dis-similarity as distance. Both, query object and
candidate object are represented by their corresponding feature
vectors. The distance between these objects is measured by
computing the distance between the two vectors. Consequently,
the process is independent of the employed querying paradigm
(e.g. query by example). The query object may be natural (e.g. a
real object) or artificial (e.g. properties of a group of objects).
Goal of the measurement process is to express a relationship
between the two objects by their distance. Iteration for multiple
candidates allows then to define a partial order over the
candidates and to address those in a (to be defined)
neighbourhood being similar to the query object. At this point, it
has to be mentioned that in a multi-descriptor
environment - especially in MPEG-7 - we are only halfway towards a statement
on similarity. If multiple descriptors are used (e.g. a descriptor
scheme), a rule has to be defined how to combine all distances to
a global value for each object. Still, distance measurement is the
most important first step in similarity measurement.
Obviously, the main task of good distance measures is to
reorganise descriptor space in a way that media objects with the
highest similarity are nearest to the query object. If distance is
defined minimal, the query object is always in the origin of
distance space and similar candidates should form clusters around
the origin that are as large as possible. Consequently, many well
known distance measures are based on geometric assumptions of
descriptor space (e.g. Euclidean distance is based on the metric
axioms). Unfortunately, these measures do not fit ideally with
human similarity perception (e.g. due to human subjectivity). To
overcome this shortcoming, researchers from different areas have
developed alternative models that are mostly predicate-based
(descriptors are assumed to contain just binary elements, e.g.
Tversky's Feature Contrast Model [17]) and fit better with human
perception. In the following distance measures of both groups of
approaches will be considered.
3. DISTANCE MEASURES
The distance measures used in this work have been collected from
various areas (Subsection 3.1). Because they work on differently
quantised data, Subsection 3.2 sketches a model for unification on
the basis of quantitative descriptions. Finally, Subsection 3.3
introduces the distance measures as well as their origin and the
idea they implement.
3.1 Sources
Distance measurement is used in many research areas such as
psychology, sociology (e.g. comparing test results), medicine (e.g.
comparing parameters of test persons), economics (e.g. comparing
balance sheet ratios), etc. Naturally, the character of data available
in these areas differs significantly. Essentially, there are two
extreme cases of data vectors (and distance measures):
predicate-based (all vector elements are binary, e.g. {0, 1}) and quantitative
(all vector elements are continuous, e.g. [0, 1]).
Predicates express the existence of properties and represent
high-level information while quantitative values can be used to measure
and mostly represent low-level information. Predicates are often
employed in psychology, sociology and other human-related
sciences and most predicate-based distance measures were
therefore developed in these areas. Descriptions in visual
information retrieval are nearly ever (if they do not integrate
semantic information) quantitative. Consequently, mostly
quantitative distance measures are used in visual information
retrieval.
The goal of this work is to compare the MPEG-7 distance
measures with the most powerful distance measures developed in
other areas. Since MPEG-7 descriptions are purely quantitative
but some of the most sophisticated distance measures are defined
exclusively on predicates, a model is mandatory that allows the
application of predicate-based distance measures on quantitative
data. The model developed for this purpose is presented in the
next section.
3.2 Quantisation model
The goal of the quantisation model is to redefine the set operators
that are usually used in predicate-based distance measures on
continuous data. The first in visual information retrieval to follow
this approach were Santini and Jain, who tried to apply Tversky's
Feature Contrast Model [17] to content-based image retrieval
[12], [13]. They interpreted continuous data as fuzzy predicates
and used fuzzy set operators. Unfortunately, their model suffered
from several shortcomings they described in [12], [13] (for
example, the quantitative model worked only for one specific
version of the original predicate-based measure).
The main idea of the presented quantisation model is that set
operators are replaced by statistical functions. In [5] the authors
could show that this interpretation of set operators is reasonable.
The model offers a solution for the descriptors considered in the
evaluation. It is not specific to one distance measure, but can be
applied to any predicate-based measure. Below, it will be shown
that the model does not only work for predicate data but for
quantitative data as well. Each measure implementing the model
can be used as a substitute for the original predicate-based measure.
Generally, binary properties of two objects (e.g. media objects)
can exist in both objects (denoted as a), in just one (b, c) or in
none of them (d). The operator needed for these relationships are
UNION, MINUS and NOT. In the quantisation model they are
replaced as follows (see [5] for further details).
a = Xi ∩ Xj = Σk sk, with sk = (xik + xjk)/2 if M - (xik + xjk)/2 ≤ ε1, else sk = 0

b = Xi - Xj = Σk sk, with sk = xik - xjk if M - (xik - xjk) ≤ ε2, else sk = 0

c = Xj - Xi = Σk sk, with sk = xjk - xik if M - (xjk - xik) ≤ ε2, else sk = 0

d = ¬Xi ∩ ¬Xj = Σk sk, with sk = M - (xik + xjk)/2 if (xik + xjk)/2 ≤ ε1, else sk = 0

with:

Xi = (xik), xik ∈ [xmin, xmax], M = xmax - xmin

ε1 = M - µ/p if p ≥ 1/µ, else ε1 = 0, where µ = (Σi Σk xik) / (i·k)

ε2 = M - σ/p if p ≥ 1/σ, else ε2 = 0, where σ = sqrt((Σi Σk (xik - µ)^2) / (i·k))

p ∈ R+ \ {0}
a selects properties that are present in both data vectors (Xi, Xj
representing media objects), b and c select properties that are
present in just one of them and d selects properties that are present
in neither of the two data vectors. Every property is selected by
the extent to which it is present (a and d: mean, b and c:
difference) and only if the amount to which it is present exceeds a
certain threshold (depending on the mean and standard deviation
over all elements of descriptor space).
The implementation of these operators is based on one assumption.
It is assumed that vector elements measure on interval scale. That
means, each element expresses that the measured property is
"more or less" present ("0": not at all, "M": fully present). This is
true for most visual descriptors and all MPEG-7 descriptors. A
natural origin as it is assumed here ("0") is not needed.
Introducing p (called discriminance-defining parameter) for the
thresholds ε1, ε2 has the positive consequence that a, b, c, d can
then be controlled through a single parameter. p is an additional
criterion for the behaviour of a distance measure and determines
the thresholds used in the operators. It expresses how accurate
data items are present (quantisation) and consequently, how
accurate they should be investigated. p can be set by the user or
automatically. Interesting are the limits:
1. p → ∞ ⇒ ε1, ε2 → M
In this case, all elements (=properties) are assumed to be
continuous (high quantisation). In consequence, all properties of a
descriptor are used by the operators. Then, the distance measure is
not discriminant for properties.
2. p → 0 ⇒ ε1, ε2 → 0
In this case, all properties are assumed to be predicates. In
consequence, only binary elements (=predicates) are used by the
operators (1-bit quantisation). The distance measure is then highly
discriminant for properties.
Between these limits, a distance measure that uses the
quantisation model is - depending on p - more or less
discriminant for properties. This means, it selects a subset of all
available description vector elements for distance measurement.
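As an illustrative sketch (following the operator definitions reconstructed above, with the thresholds supplied by the caller; this is not the implementation used in the evaluation):

    import numpy as np

    def quantisation_operators(x_i, x_j, eps1, eps2, M=1.0):
        # x_i, x_j: description vectors with elements in [0, M];
        # eps1, eps2: thresholds derived from p and from the mean and standard
        # deviation of descriptor space, as defined in Subsection 3.2.
        x_i, x_j = np.asarray(x_i, float), np.asarray(x_j, float)
        mean = (x_i + x_j) / 2.0
        a = np.where(M - mean <= eps1, mean, 0.0).sum()              # present in both
        b = np.where(M - (x_i - x_j) <= eps2, x_i - x_j, 0.0).sum()  # only in X_i
        c = np.where(M - (x_j - x_i) <= eps2, x_j - x_i, 0.0).sum()  # only in X_j
        d = np.where(mean <= eps1, M - mean, 0.0).sum()              # present in neither
        return a, b, c, d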
For both predicate data and quantitative data it can be shown that
the quantisation model is reasonable. If description vectors consist
of binary elements only, p should be used as follows (for example,
p can easily be set automatically):
p → 0 ⇒ ε1, ε2 = 0 (e.g. p = min(µ, σ))
In this case, a, b, c, d measure like the set operators they replace.
For example, Table 1 shows their behaviour for two
one-dimensional feature vectors Xi and Xj. As can be seen, the
statistical measures work like set operators. Actually, the
quantisation model works accurately on predicate data for any p≠∞.
To show that the model is reasonable for quantitative data the
following fact is used. It is easy to show that for predicate data
some quantitative distance measures degenerate to
predicate-based measures. For example, the L1 metric (Manhattan
metric) degenerates to the Hamming distance (from [9], without
weights):

L1 = Σk |xik - xjk| ≡ b + c = Hamming distance
If it can be shown that the quantisation model is able to
reconstruct the quantitative measure from the degenerated
predicate-based measure, the model is obviously able to extend
predicate-based measures to the quantitative domain. This is easy
to illustrate. For purely quantitative feature vectors, p should be
used as follows (again, p can easily be set automatically):
p → ∞ ⇒ ε1, ε2 = M
Then, a and d become continuous functions:
M - (xik + xjk)/2 ≤ M ≡ true ⇒ a = Σk sk, where sk = (xik + xjk)/2

(xik + xjk)/2 ≤ M ≡ true ⇒ d = Σk sk, where sk = M - (xik + xjk)/2
b and c can be made continuous for the following expressions:
M - (xik - xjk) ≤ M ≡ xik - xjk ≥ 0 ⇒ b = Σk sk, where sk = xik - xjk if xik - xjk ≥ 0, else sk = 0

M - (xjk - xik) ≤ M ≡ xjk - xik ≥ 0 ⇒ c = Σk sk, where sk = xjk - xik if xjk - xik ≥ 0, else sk = 0

⇒ b + c = Σk sk, where sk = |xik - xjk|
Table 1. Quantisation model on predicate vectors.
Xi Xj a b c d
(1) (1) 1 0 0 0
(1) (0) 0 1 0 0
(0) (1) 0 0 1 0
(0) (0) 0 0 0 1
b - c = Σk sk, where sk = xik - xjk

c - b = Σk sk, where sk = xjk - xik
This means, for sufficiently high p every predicate-based distance
measure that is either not using b and c or just as b+c, b-c or c-b,
can be transformed into a continuous quantitative distance
measure. For example, the Hamming distance (again, without
weights):
b + c = Σk sk, where sk = |xik - xjk| ⇒ Σk |xik - xjk| = L1
The quantisation model successfully reconstructs the L1
metric
and no distance measure-specific modification has to be made to
the model. This demonstrates that the model is reasonable. In the
following it will be used to extend successful predicate-based
distance measures on the quantitative domain.
The major advantages of the quantisation model are: (1) it is
application domain independent, (2) the implementation is
straightforward, (3) the model is easy to use and finally, (4) the
new parameter p allows to control the similarity measurement
process in a new way (discriminance on property level).
3.3 Implemented measures
For the evaluation described in this work next to predicate-based
(based on the quantisation model) and quantitative measures, the
distance measures recommended in the MPEG-7 standard were
implemented (altogether 38 different distance measures).
Table 2 summarises those predicate-based measures that
performed best in the evaluation (in sum 20 predicate-based
measures were investigated). For these measures, K is the number
of predicates in the data vectors Xi and Xj. In P1, the sum is used
for Tversky's f() (as Tversky himself does in [17]) and α, β are
weights for element b and c. In [5] the author's investigated
Tversky's Feature Contrast Model and found α=1, β=0 to be the
optimum parameters.
Some of the predicate-based measures are very simple (e.g. P2,
P4) but have been heavily exploited in psychological research.
Pattern difference (P6) - a very powerful measure - is used in the
statistics package SPSS for cluster analysis. P7 is a correlation
coefficient for predicates developed by Pearson.
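Given the operator values a, b, c, d (for instance from the quantisation model sketched above) and the vector length K, the measures of Table 2 reduce to a few lines each; three of them are sketched below for illustration, keeping in mind that the similarity-type measures still have to be inverted to distances for the tests:

    import math

    def feature_contrast(a, b, c, alpha=1.0, beta=0.0):
        # P1: Tversky's Feature Contrast Model (a similarity measure)
        return a - alpha * b - beta * c

    def pattern_difference(b, c, K):
        # P6: pattern difference, b*c / K^2 (a distance measure)
        return b * c / float(K * K)

    def pearson_predicate(a, b, c, d):
        # P7: Pearson's correlation coefficient for predicates (a similarity measure)
        denom = math.sqrt((a + b) * (a + c) * (b + d) * (c + d))
        return (a * d - b * c) / denom if denom else 0.0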
Table 3 shows the best quantitative distance measures that were
used. Q1 and Q2 are metric-based and were implemented as
representatives for the entire group of Minkowski distances. The
wi are weights. In Q5, ii σµ , are mean and standard deviation
for the elements of descriptor Xi. In Q6, m is
2
M
(=0.5). Q3, the
Canberra metric, is a normalised form of Q1. Similarly, Q4,
Clark's divergence coefficient is a normalised version of Q2. Q6 is
a further-developed correlation coefficient that is invariant against
sign changes. This measure is used even though its particular
properties are of minor importance for this application domain.
Finally, Q8 is a measure that takes the differences between
adjacent vector elements into account. This makes it structurally
different from all other measures.
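For illustration, Q3 (the Canberra metric) and Q4 (Clark's divergence coefficient) can be sketched as follows, under their usual textbook definitions; zero-denominator terms are treated as contributing nothing, which is an assumption of this sketch:

    import numpy as np

    def canberra(x_i, x_j):
        # Q3: sum of |xik - xjk| / (xik + xjk)
        x_i, x_j = np.asarray(x_i, float), np.asarray(x_j, float)
        denom = x_i + x_j
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = np.where(denom != 0, np.abs(x_i - x_j) / denom, 0.0)
        return float(terms.sum())

    def clark_divergence(x_i, x_j):
        # Q4: square root of the mean squared relative difference
        x_i, x_j = np.asarray(x_i, float), np.asarray(x_j, float)
        denom = x_i + x_j
        with np.errstate(divide="ignore", invalid="ignore"):
            ratios = np.where(denom != 0, (x_i - x_j) / denom, 0.0)
        return float(np.sqrt((ratios ** 2).mean()))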
Obviously, one important distance measure is missing. The
Mahalanobis distance was not considered, because different
descriptors would require different covariance matrices and for
some descriptors it is simply impossible to define a covariance
matrix. If the identity matrix was used in this case, the
Mahalanobis distance would degenerate to a Minkowski distance.
Additionally, the recommended MPEG-7 distances were
implemented with the following parameters: In the distance
measure of the Color Layout descriptor all weights were set to "1"
(as in all other implemented measures). In the distance measure of
the Dominant Color descriptor the following parameters were
used: w1 = 0.7, w2 = 0.3, α = 1, Td = 20 (as recommended). In the
Homogeneous Texture descriptor's distance all α(k) were set to
"1" and matching was done rotation- and scale-invariant.
Important! Some of the measures presented in this section are
distance measures while others are similarity measures. For the
tests, it is important to notice, that all similarity measures were
inverted to distance measures.
4. TEST SETUP
Subsection 4.1 describes the descriptors (including parameters)
and the collections (including ground truth information) that were
used in the evaluation. Subsection 4.2 discusses the evaluation
method that was implemented and Subsection 4.3 sketches the test
environment used for the evaluation process.
4.1 Test data
For the evaluation eight MPEG-7 descriptors were used. All
colour descriptors: Color Layout, Color Structure, Dominant
Color, Scalable Color, all texture descriptors: Edge Histogram,
Homogeneous Texture, Texture Browsing and one shape
descriptor: Region-based Shape. Texture Browsing was used even
though the MPEG-7 standard suggests that it is not suitable for
retrieval. The other basic shape descriptor, Contour-based Shape,
was not used, because it produces structurally different
descriptions that cannot be transformed to data vectors with
elements measuring on interval scales. The motion descriptors
were not used, because they integrate the temporal dimension of
visual media and would only be comparable if the basic colour,
texture and shape descriptors were aggregated over time. This
was not done. Finally, no high-level descriptors were used
(Localisation, Face Recognition, etc., see Subsection 2.1),
because - in the author's opinion - the behaviour of the basic
descriptors on elementary media objects should be evaluated
before conclusions on aggregated structures can be drawn.
Table 2. Predicate-based distance measures.
No. | Measure | Comment
P1 | a − α·b − β·c | Feature Contrast Model, Tversky 1977 [17]
P2 | a | No. of co-occurrences
P3 | b + c | Hamming distance
P4 | a / K | Russel 1940 [14]
P5 | a / (b + c) | Kulczynski 1927 [14]
P6 | b·c / K² | Pattern difference [14]
P7 | (a·d − b·c) / ((a+b)(a+c)(b+d)(c+d)) | Pearson 1926 [11]
The Texture Browsing descriptions had to be transformed from
five bins to an eight bin representation in order that all elements
of the descriptor measure on an interval scale. A Manhattan metric
was used to measure proximity (see [6] for details).
Descriptor extraction was performed using the MPEG-7 reference
implementation. In the extraction process each descriptor was
applied on the entire content of each media object and the
following extraction parameters were used. Colour in Color
Structure was quantised to 32 bins. For Dominant Color colour
space was set to YCrCb, 5-bit default quantisation was used and
the default value for spatial coherency was used. Homogeneous
Texture was quantised to 32 components. Scalable Color values
were quantised to sizeof(int)-3 bits and 64 bins were used. Finally,
Texture Browsing was used with five components.
These descriptors were applied on three media collections with
image content: the Brodatz dataset (112 images, 512x512 pixel), a
subset of the Corel dataset (260 images, 460x300 pixel, portrait
and landscape) and a dataset with coats-of-arms images (426
images, 200x200 pixel). Figure 1 shows examples from the three
collections.
Designing appropriate test sets for a visual evaluation is a highly
difficult task (for example, see the TREC video 2002 report [15]).
Of course, to identify the best distance measure for a descriptor, it
would ideally be tested on an unlimited number of media objects. But this
is not the aim of this study; it is only evaluated whether - for typical
image collections - better proximity measures than those suggested by the
MPEG-7 group can be found. Collections of this relatively small size were
used in the evaluation because the applied evaluation methods are, above a
certain minimum size, invariant against collection size, and because for
smaller collections it is easier to define a high-quality ground truth.
Still, the average ratio of ground truth size to collection size is at
least 1:7. In particular, no collection from the MPEG-7 dataset was used
in the evaluation, because the evaluations should show how well the
descriptors and the recommended distance measures perform on "unknown"
material.
When the descriptor extraction was finished, the resulting XML
descriptions were transformed into a data matrix with 798 lines
(media objects) and 314 columns (descriptor elements). To be
usable with distance measures that do not integrate domain
knowledge, the elements of this data matrix were normalised to
[0, 1].
For the distance evaluation, human similarity judgement is needed next to
the normalised data matrix. In this work, the ground truth consists of
twelve groups of similar images (four for each dataset). Group membership
was rated by humans based on semantic criteria. Table 4 summarises the
twelve groups and the underlying descriptions. It has to be noted that
some of these groups (especially 5, 7 and 10) are much harder to find with
low-level descriptors than others.
4.2 Evaluation method
Usually, retrieval evaluation is performed based on a ground truth
with recall and precision (see, for example, [3], [16]). In
multi-descriptor environments this leads to a problem, because the
resulting recall and precision values are strongly influenced by the
method used to merge the distance values for one media object.
Even though it is nearly impossible to say how large the influence
of a single distance measure is on the resulting recall and
precision values, this problem has been largely ignored so far.
In Subsection 2.2 it was stated that the major task of a distance
measure is to bring the relevant media objects as close to the
origin (where the query object lies) as possible. Even in a
multidescriptor environment it is then simple to identify the similar
objects in a large distance space. Consequently, it was decided to
Table 3. Quantitative distance measures.
No. | Measure | Comment
Q1 | Σ_k w_i·|x_ik − x_jk| | City block distance (L1)
Q2 | ( Σ_k w_i·(x_ik − x_jk)² )^(1/2) | Euclidean distance (L2)
Q3 | Σ_k |x_ik − x_jk| / (x_ik + x_jk) | Canberra metric, Lance, Williams 1967 [8]
Q4 | ( (1/K)·Σ_k ((x_ik − x_jk) / (x_ik + x_jk))² )^(1/2) | Divergence coefficient, Clark 1952 [1]
Q5 | Σ_k (x_ik − µ_i)(x_jk − µ_j) / ( Σ_k (x_ik − µ_i)²·Σ_k (x_jk − µ_j)² )^(1/2) | Correlation coefficient
Q6 | ( Σ_k x_ik·x_jk − m·Σ_k x_ik − m·Σ_k x_jk + K·m² ) / ( (Σ_k x_ik² − 2m·Σ_k x_ik + K·m²)·(Σ_k x_jk² − 2m·Σ_k x_jk + K·m²) )^(1/2) | Cohen 1969 [2]
Q7 | Σ_k x_ik·x_jk / ( Σ_k x_ik²·Σ_k x_jk² )^(1/2) | Angular distance, Gower 1967 [7]
Q8 | Σ_{k=1..K−1} ((x_ik − x_i,k+1) − (x_jk − x_j,k+1))² | Meehl Index [10]
Table 4. Ground truth information.
Coll. | No. | Images | Description
Brodatz | 1 | 19 | Regular, chequered patterns
Brodatz | 2 | 38 | Dark white noise
Brodatz | 3 | 33 | Moon-like surfaces
Brodatz | 4 | 35 | Water-like surfaces
Corel | 5 | 73 | Humans in nature (difficult)
Corel | 6 | 17 | Images with snow (mountains, skiing)
Corel | 7 | 76 | Animals in nature (difficult)
Corel | 8 | 27 | Large coloured flowers
Arms | 9 | 12 | Bavarian communal arms
Arms | 10 | 10 | All Bavarian arms (difficult)
Arms | 11 | 18 | Dark objects / light unsegmented shield
Arms | 12 | 14 | Major charges on blue or red shield
use indicators measuring the distribution in distance space of
candidates similar to the query object for this evaluation instead
of recall and precision. Identifying clusters of similar objects
(based on the given ground truth) is relatively easy, because the
resulting distance space for one descriptor and any distance
measure is always one-dimensional. Clusters are found by
searching from the origin of distance space to the first similar
object, grouping all following similar objects in the cluster,
breaking off the cluster with the first un-similar object and so
forth.
For the evaluation two indicators were defined. The first measures
the average distance of all cluster means to the origin:
µ_d = ( (1 / no_clusters) · Σ_i ( Σ_j distance_ij / cluster_size_i ) ) / avg_distance
where distance_ij is the distance value of the j-th element in the i-th
cluster, avg_distance = Σ_i Σ_j distance_ij / Σ_i cluster_size_i
, no_clusters is the
number of found clusters and cluster_sizei is the size of the i-th
cluster. The resulting indicator is normalised by the distribution
characteristics of the distance measure (avg_distance).
Additionally, the standard deviation is used. In the evaluation
process this measure turned out to produce valuable results and to
be relatively robust against parameter p of the quantisation model.
In Subsection 3.2 we noted that p affects the discriminance of a
predicate-based distance measure: the smaller p is set, the larger
the resulting clusters become, because the quantisation model is then
more discriminant against properties and fewer elements of the data
matrix are used.
second indicator: more and more un-similar objects come out with
exactly the same distance value as similar objects (a problem that
does not exist for large p's) and become indiscernible from similar
objects. Consequently, they are (false) cluster members. This
phenomenon (conceptually similar to the "false negatives"
indicator) was named "cluster pollution" and the indicator
measures the average cluster pollution over all clusters:
cp = ( Σ_i Σ_j no_doubles_ij ) / no_clusters
where no_doublesij is the number of indiscernible un-similar
objects associated with the j-th element of cluster i.
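The two indicators can be sketched compactly in Python under the clustering rule described above (scan the one-dimensional distance space from the origin, open a cluster at the first similar object and close it at the first un-similar one). The data structures and the tie handling (cluster pollution approximated by counting un-similar objects whose distance value exactly equals that of some cluster member, each counted once) are simplifying assumptions, not the original evaluation code.

def evaluate(distances, relevant):
    # distances: {object_id: distance to the query object}
    # relevant:  set of object ids that belong to the ground-truth group
    ranked = sorted(distances, key=distances.get)
    clusters, current = [], []
    for oid in ranked:
        if oid in relevant:
            current.append(oid)
        elif current:                       # first un-similar object closes the cluster
            clusters.append(current)
            current = []
    if current:
        clusters.append(current)
    members = [oid for c in clusters for oid in c]
    if not members:
        return None, None
    avg_distance = sum(distances[oid] for oid in members) / len(members)
    # first indicator: normalised average distance of the cluster means to the origin
    mu_d = (sum(sum(distances[oid] for oid in c) / len(c) for c in clusters)
            / len(clusters)) / avg_distance
    # second indicator: cluster pollution (un-similar objects indiscernible from members)
    doubles = sum(1 for oid in ranked
                  if oid not in relevant
                  and any(distances[oid] == distances[m] for m in members))
    cp = doubles / len(clusters)
    return mu_d, cp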
Remark: even though there is a certain influence, it was shown in [5]
that no significant correlation exists between parameter p of the
quantisation model and cluster pollution.
4.3 Test environment
As pointed out above, to generate the descriptors, the MPEG-7
reference implementation in version 5.6 was used (provided by
TU Munich). Image processing was done with Adobe Photoshop
and normalisation and all evaluations were done with Perl. The
querying process was performed in the following steps: (1)
random selection of a ground truth group, (2) random selection of
a query object from this group, (3) distance comparison for all
other objects in the dataset, (4) clustering of the resulting distance
space based on the ground truth and finally, (5) evaluation.
For each combination of dataset and distance measure 250 queries
were issued and evaluations were aggregated over all datasets and
descriptors. The next section shows the partially surprising results.
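The querying process itself can be sketched as a small driver around the indicator function above; the names data, groups and measure are hypothetical stand-ins for the normalised data matrix, the ground-truth groups of Table 4 and a distance function.

import random

def run_queries(data, groups, measure, queries=250):
    # data:   {object_id: normalised feature vector}
    # groups: list of sets of object ids judged similar by humans
    results = []
    for _ in range(queries):
        group = random.choice(groups)                    # (1) random ground-truth group
        query = random.choice(sorted(group))             # (2) random query object
        distances = {oid: measure(data[query], data[oid])
                     for oid in data if oid != query}    # (3) distance comparison
        mu_d, cp = evaluate(distances, group - {query})  # (4) clustering + (5) evaluation
        if mu_d is not None:
            results.append((mu_d, cp))
    return results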
5. RESULTS
In the results presented below the first indicator from Subsection
4.2 was used to evaluate distance measures. In a first step
parameter p had to be set in a way that all measures are equally
discriminant. Distance measurement is fair if the following
condition holds true for any predicate-based measure dP and any
continuous measure dC:
cp(d_P, p) ≈ cp(d_C)
Then, it is guaranteed that predicate-based measures do not create
larger clusters (with a higher number of similar objects) for the
price of higher cluster pollution. In more than 1000 test queries
the optimum value was found to be p=1.
Figure 1. Test datasets. Left: Brodatz dataset, middle: Corel dataset, right: coats-of-arms dataset.
Results are organised as follows: Subsection 5.1 summarises the best
distance measures per descriptor, Subsection 5.2 shows the best overall
distance measures and Subsection 5.3 points out other interesting results
(for example, distance measures that work particularly well on specific
ground truth groups).
5.1 Best measure per descriptor
Figure 2 shows the evaluation results for the first indicator. For
each descriptor the best measure and the performance of the
MPEG-7 recommendation are shown. The results are aggregated
over the tested datasets.
At first sight, it becomes clear that the MPEG-7
recommendations are mostly relatively good but never the best.
For Color Layout the difference between MP7 and the best
measure, the Meehl index (Q8), is just 4% and the MPEG-7
measure has a smaller standard deviation. The reason why the
Meehl index is better may be that this descriptor generates
descriptions with elements that have very similar variance.
Statistical analysis confirmed this (see [6]).
For Color Structure, Edge Histogram, Homogeneous Texture,
Region-based Shape and Scalable Color by far the best measure is
pattern difference (P6). Psychological research on human visual
perception has revealed that in many situations differences between
the query object and a candidate weigh much more strongly than
common properties. The pattern difference measure implements
this insight in the most rigorous way. In the author's opinion, this is
the reason why pattern difference performs so extremely well on
many descriptors. Additional advantages of pattern difference are that
it usually has a very low variance and - because it is a predicate-based
measure - its discriminance (and cluster structure) can be tuned with
parameter p.
The best measure for Dominant Color turned out to be Clark's
Divergence coefficient (Q4). This is a similar measure to pattern
difference on the continuous domain. The Texture Browsing
descriptor is a special problem. In the MPEG-7 standard it is
recommended to use it exclusively for browsing. After testing it
for retrieval on various distance measures the author supports this
opinion. It is very difficult to find a good distance measure for
Texture Browsing. The proposed Manhattan metric, for example,
performs very badly. The best measure is predicate-based (P7). It
works on common properties (a, d) but produces clusters with
very high cluster pollution. For this descriptor the second
indicator is up to eight times higher than for predicate-based
measures on other descriptors.
5.2 Best overall measures
Figure 3 summarises the results over all descriptors and media
collections. The diagram should give an indication on the general
potential of the investigated distance measures for visual
information retrieval.
It can be seen that the best overall measure is a predicate-based
one. The top performance of pattern difference (P6) proves that
the quantisation model is a reasonable method to extend
predicate-based distance measures on the continuous domain. The
second best group of measures are the MPEG-7
recommendations, which have a slightly higher mean but a lower
standard deviation than pattern difference. The third best measure
is the Meehl index (Q8), a measure developed for psychological
applications but, because of its characteristic properties,
tailor-made for certain (homogeneous) descriptors.
Minkowski metrics are also among the best measures: the average
mean and variance of the Manhattan metric (Q1) and the
Euclidean metric (Q2) are in the range of Q8. Of course, these
measures do not perform particularly well for any of the
descriptors. Remarkably for a predicate-based measure, Tversky's
Feature Contrast Model (P1) is also in the group of very good
measures (even though it is not among the best) that ends with
Q5, the correlation coefficient. The other measures either have a
significantly higher mean or a very large standard deviation.
5.3 Other interesting results
Distance measures that perform on average worse than others may
in certain situations (e.g. on specific content) still perform better.
For Color Layout, for example, Q7 is a very good measure for
colour photos. It performs as well as Q8 and has a lower standard
deviation. For artificial images the pattern difference and the
Hamming distance produce comparable results as well.
If colour information is available in media objects, pattern
difference performs well on Dominant Color (just 20% worse than Q4)
and in the case of difficult ground truth (groups 5, 7 and 10) the Meehl
index is as strong as P6.
(Figure 2 compares, per descriptor, the best measure with the MPEG-7 recommendation: Color Layout: Q8/MP7, Color Structure: P6/MP7, Dominant Color: Q4/MP7, Edge Histogram: P6/MP7, Homogeneous Texture: P6/MP7, Region Shape: P6/MP7, Scalable Color: P6/MP7, Texture Browsing: P7/Q2.)
Figure 2. Results per measure and descriptor. The horizontal axis shows the best measure and the performance of the MPEG-7
recommendation for each descriptor. The vertical axis shows the values for the first indicator (smaller value = better cluster structure).
Shades have the following meaning: black=µ-σ (good cases), black + dark grey=µ (average) and black + dark grey + light grey=µ+σ (bad).
6. CONCLUSION
The evaluation presented in this paper aims at testing the
recommended distance measures and finding better ones for the
basic visual MPEG-7 descriptors. Eight descriptors were selected,
38 distance measures were implemented, media collections were
created and assessed, performance indicators were defined and
more than 22500 tests were performed. To be able to use
predicate-based distance measures next to quantitative measures a
quantisation model was defined that allows the application of
predicate-based measures on continuous data.
In the evaluation the best overall distance measures for visual
content - as extracted by the visual MPEG-7 descriptors - turned
out to be the pattern difference measure and the Meehl index (for
homogeneous descriptions). Since these two measures perform
significantly better than the MPEG-7 recommendations they
should be further tested on large collections of image and video
content (e.g. from [15]).
The choice of the right distance function for similarity
measurement depends on the descriptor, the queried media
collection and the semantic level of the user's idea of similarity.
This work offers suitable distance measures for various situations.
In consequence, the distance measures identified as the best will
be implemented in the open MPEG-7 based visual information
retrieval framework VizIR [4].
ACKNOWLEDGEMENTS
The author would like to thank Christian Breiteneder for his
valuable comments and suggestions for improvement. The work
presented in this paper is part of the VizIR project funded by the
Austrian Scientific Research Fund FWF under grant no. P16111.
REFERENCES
[1] Clark, P.S. An extension of the coefficient of divergence for
use with multiple characters. Copeia, 2 (1952), 61-64.
[2] Cohen, J. A profile similarity coefficient invariant over
variable reflection. Psychological Bulletin, 71 (1969),
281-284.
[3] Del Bimbo, A. Visual information retrieval. Morgan
Kaufmann Publishers, San Francisco CA, 1999.
[4] Eidenberger, H., and Breiteneder, C. A framework for visual
information retrieval. In Proceedings Visual Information
Systems Conference (HSinChu Taiwan, March 2002), LNCS
2314, Springer Verlag, 105-116.
[5] Eidenberger, H., and Breiteneder, C. Visual similarity
measurement with the Feature Contrast Model. In
Proceedings SPIE Storage and Retrieval for Media Databases
Conference (Santa Clara CA, January 2003), SPIE Vol.
5021, 64-76.
[6] Eidenberger, H., How good are the visual MPEG-7 features?
In Proceedings SPIE Visual Communications and Image
Processing Conference (Lugano Switzerland, July 2003),
SPIE Vol. 5150, 476-488.
[7] Gower, J.G. Multivariate analysis and multidimensional
geometry. The Statistician, 17 (1967),13-25.
[8] Lance, G.N., and Williams, W.T. Mixed data classificatory
programs. Agglomerative Systems Australian Comp. Journal,
9 (1967), 373-380.
[9] Manjunath, B.S., Ohm, J.R., Vasudevan, V.V., and Yamada,
A. Color and texture descriptors. In Special Issue on
MPEG-7. IEEE Transactions on Circuits and Systems for Video
Technology, 11/6 (June 2001), 703-715.
[10] Meehl, P. E. The problem is epistemology, not statistics:
Replace significance tests by confidence intervals and
quantify accuracy of risky numerical predictions. In Harlow,
L.L., Mulaik, S.A., and Steiger, J.H. (Eds.). What if there
were no significance tests? Erlbaum, Mahwah NJ, 1997, 393-425.
[11] Pearson, K. On the coefficients of racial likeness. Biometrica,
18 (1926), 105-117.
[12] Santini, S., and Jain, R. Similarity is a geometer. Multimedia
Tools and Application, 5/3 (1997), 277-306.
[13] Santini, S., and Jain, R. Similarity measures. IEEE
Transactions on Pattern Analysis and Machine Intelligence,
21/9 (September 1999), 871-883.
[14] Sint, P.P. Similarity structures and similarity measures.
Austrian Academy of Sciences Press, Vienna Austria, 1975
(in German).
[15] Smeaton, A.F., and Over, P. The TREC-2002 video track
report. NIST Special Publication SP 500-251 (March 2003),
available from: http://trec.nist.gov/pubs/trec11/papers/
VIDEO.OVER.pdf (last visited: 2003-07-29)
[16] Smeulders, A.W.M., Worring, M., Santini, S., Gupta, A., and
Jain, R. Content-based image retrieval at the end of the early
years. IEEE Transactions on Pattern Analysis and Machine
Intelligence, 22/12 (December 2000), 1349-1380.
[17] Tversky, A. Features of similarity. Psychological Review,
84/4 (July 1977), 327-351.
(Figure 3 orders the measures by the first indicator, from best to worst: P6, MP7, Q8, Q1, Q4, Q2, P2, P4, Q6, Q3, Q7, P1, Q5, P3, P5, P7.)
Figure 3. Overall results (ordered by the first indicator). The vertical axis shows the values for the first indicator (smaller value = better
cluster structure). Shades have the following meaning: black=µ-σ, black + dark grey=µ and black + dark grey + light grey=µ+σ.
137 | similarity measurement;performance indicator;visual media;content-base video retrieval;media collection;distance measurement;distance measure;mpeg-7;visual descriptor;mpeg-7-based retrieval;meehl index;visual information retrieval;similarity perception;human similarity perception;content-base image retrieval;predicate-based model |
train_H-82 | Downloading Textual Hidden Web Content Through Keyword Queries | An ever-increasing amount of information on the Web today is available only through search interfaces: the users have to type in a set of keywords in a search form in order to access the pages from certain Web sites. These pages are often referred to as the Hidden Web or the Deep Web. Since there are no static links to the Hidden Web pages, search engines cannot discover and index such pages and thus do not return them in the results. However, according to recent studies, the content provided by many Hidden Web sites is often of very high quality and can be extremely valuable to many users. In this paper, we study how we can build an effective Hidden Web crawler that can autonomously discover and download pages from the Hidden Web. Since the only entry point to a Hidden Web site is a query interface, the main challenge that a Hidden Web crawler has to face is how to automatically generate meaningful queries to issue to the site. Here, we provide a theoretical framework to investigate the query generation problem for the Hidden Web and we propose effective policies for generating queries automatically. Our policies proceed iteratively, issuing a different query in every iteration. We experimentally evaluate the effectiveness of these policies on 4 real Hidden Web sites and our results are very promising. For instance, in one experiment, one of our policies downloaded more than 90% of a Hidden Web site (that contains 14 million documents) after issuing fewer than 100 queries. | 1. INTRODUCTION
Recent studies show that a significant fraction of Web content
cannot be reached by following links [7, 12]. In particular, a large
part of the Web is hidden behind search forms and is reachable
only when users type in a set of keywords, or queries, to the forms.
These pages are often referred to as the Hidden Web [17] or the
Deep Web [7], because search engines typically cannot index the
pages and do not return them in their results (thus, the pages are
essentially hidden from a typical Web user).
According to many studies, the size of the Hidden Web increases
rapidly as more organizations put their valuable content online
through an easy-to-use Web interface [7]. In [12], Chang et al.
estimate that well over 100,000 Hidden-Web sites currently exist
on the Web. Moreover, the content provided by many Hidden-Web
sites is often of very high quality and can be extremely valuable
to many users [7]. For example, PubMed hosts many high-quality
papers on medical research that were selected from careful
peerreview processes, while the site of the US Patent and Trademarks
Office1
makes existing patent documents available, helping
potential inventors examine prior art.
In this paper, we study how we can build a Hidden-Web crawler2
that can automatically download pages from the Hidden Web, so
that search engines can index them. Conventional crawlers rely
on the hyperlinks on the Web to discover pages, so current search
engines cannot index the Hidden-Web pages (due to the lack of
links). We believe that an effective Hidden-Web crawler can have
a tremendous impact on how users search information on the Web:
• Tapping into unexplored information: The Hidden-Web
crawler will allow an average Web user to easily explore the
vast amount of information that is mostly hidden at present.
Since a majority of Web users rely on search engines to discover
pages, when pages are not indexed by search engines, they are
unlikely to be viewed by many Web users. Unless users go
directly to Hidden-Web sites and issue queries there, they cannot
access the pages at the sites.
• Improving user experience: Even if a user is aware of a
number of Hidden-Web sites, the user still has to waste a significant
amount of time and effort, visiting all of the potentially relevant
sites, querying each of them and exploring the result. By making
the Hidden-Web pages searchable at a central location, we can
significantly reduce the user's wasted time and effort in
searching the Hidden Web.
• Reducing potential bias: Due to the heavy reliance of many Web
users on search engines for locating information, search engines
influence how the users perceive the Web [28]. Users do not
necessarily perceive what actually exists on the Web, but what
is indexed by search engines [28]. According to a recent
article [5], several organizations have recognized the importance of
bringing information of their Hidden Web sites onto the surface,
and committed considerable resources towards this effort. Our
1
US Patent Office: http://www.uspto.gov
2
Crawlers are the programs that traverse the Web automatically and
download pages for search engines.
Figure 1: A single-attribute search interface
Hidden-Web crawler attempts to automate this process for
Hidden Web sites with textual content, thus minimizing the
associated costs and effort required.
Given that the only entry to Hidden Web pages is through
querying a search form, there are two core challenges to
implementing an effective Hidden Web crawler: (a) The crawler has to
be able to understand and model a query interface, and (b) The
crawler has to come up with meaningful queries to issue to the
query interface. The first challenge was addressed by Raghavan
and Garcia-Molina in [29], where a method for learning search
interfaces was presented. Here, we present a solution to the second
challenge, i.e. how a crawler can automatically generate queries so
that it can discover and download the Hidden Web pages.
Clearly, when the search forms list all possible values for a query
(e.g., through a drop-down list), the solution is straightforward. We
exhaustively issue all possible queries, one query at a time. When
the query forms have a free text input, however, an infinite
number of queries are possible, so we cannot exhaustively issue all
possible queries. In this case, what queries should we pick? Can the
crawler automatically come up with meaningful queries without
understanding the semantics of the search form?
In this paper, we provide a theoretical framework to investigate
the Hidden-Web crawling problem and propose effective ways of
generating queries automatically. We also evaluate our proposed
solutions through experiments conducted on real Hidden-Web sites.
In summary, this paper makes the following contributions:
• We present a formal framework to study the problem of
HiddenWeb crawling. (Section 2).
• We investigate a number of crawling policies for the Hidden
Web, including the optimal policy that can potentially download
the maximum number of pages through the minimum number of
interactions. Unfortunately, we show that the optimal policy is
NP-hard and cannot be implemented in practice (Section 2.2).
• We propose a new adaptive policy that approximates the optimal
policy. Our adaptive policy examines the pages returned from
previous queries and adapts its query-selection policy
automatically based on them (Section 3).
• We evaluate various crawling policies through experiments on
real Web sites. Our experiments will show the relative
advantages of various crawling policies and demonstrate their
potential. The results from our experiments are very promising. In
one experiment, for example, our adaptive policy downloaded
more than 90% of the pages within PubMed (that contains 14
million documents) after it issued fewer than 100 queries.
2. FRAMEWORK
In this section, we present a formal framework for the study of
the Hidden-Web crawling problem. In Section 2.1, we describe our
assumptions on Hidden-Web sites and explain how users interact
with the sites. Based on this interaction model, we present a
highlevel algorithm for a Hidden-Web crawler in Section 2.2. Finally in
Section 2.3, we formalize the Hidden-Web crawling problem.
2.1 Hidden-Web database model
There exists a variety of Hidden Web sources that provide
information on a multitude of topics. Depending on the type of
information, we may categorize a Hidden-Web site either as a textual
database or a structured database. A textual database is a site that
Figure 2: A multi-attribute search interface
mainly contains plain-text documents, such as PubMed and
LexisNexis (an online database of legal documents [1]). Since
plaintext documents do not usually have well-defined structure, most
textual databases provide a simple search interface where users
type a list of keywords in a single search box (Figure 1). In
contrast, a structured database often contains multi-attribute relational
data (e.g., a book on the Amazon Web site may have the fields
title='Harry Potter', author='J.K. Rowling' and
isbn='0590353403') and supports multi-attribute search
interfaces (Figure 2). In this paper, we will mainly focus on
textual databases that support single-attribute keyword queries. We
discuss how we can extend our ideas for the textual databases to
multi-attribute structured databases in Section 6.1.
Typically, the users need to take the following steps in order to
access pages in a Hidden-Web database:
1. Step 1. First, the user issues a query, say liver, through the
search interface provided by the Web site (such as the one shown
in Figure 1).
2. Step 2. Shortly after the user issues the query, she is presented
with a result index page. That is, the Web site returns a list of
links to potentially relevant Web pages, as shown in Figure 3(a).
3. Step 3. From the list in the result index page, the user identifies
the pages that look interesting and follows the links. Clicking
on a link leads the user to the actual Web page, such as the one
shown in Figure 3(b), that the user wants to look at.
2.2 A generic Hidden Web crawling algorithm
Given that the only entry to the pages in a Hidden-Web site
is its search form, a Hidden-Web crawler should follow the three
steps described in the previous section. That is, the crawler has
to generate a query, issue it to the Web site, download the result
index page, and follow the links to download the actual pages. In
most cases, a crawler has limited time and network resources, so
the crawler repeats these steps until it uses up its resources.
In Figure 4 we show the generic algorithm for a Hidden-Web
crawler. For simplicity, we assume that the Hidden-Web crawler
issues single-term queries only.3
The crawler first decides which
query term it is going to use (Step (2)), issues the query, and
retrieves the result index page (Step (3)). Finally, based on the links
found on the result index page, it downloads the Hidden Web pages
from the site (Step (4)). This same process is repeated until all the
available resources are used up (Step (1)).
Given this algorithm, we can see that the most critical decision
that a crawler has to make is what query to issue next. If the
crawler can issue successful queries that will return many matching
pages, the crawler can finish its crawling early on using minimum
resources. In contrast, if the crawler issues completely irrelevant
queries that do not return any matching pages, it may waste all
of its resources simply issuing queries without ever retrieving
actual pages. Therefore, how the crawler selects the next query can
greatly affect its effectiveness. In the next section, we formalize
this query selection problem.
3
For most Web sites that assume AND for multi-keyword
queries, single-term queries return the maximum number of results.
Extending our work to multi-keyword queries is straightforward.
(a) List of matching pages for query liver. (b) The first matching page for liver.
Figure 3: Pages from the PubMed Web site.
ALGORITHM 2.1. Crawling a Hidden Web site
Procedure
(1) while ( there are available resources ) do
// select a term to send to the site
(2) qi = SelectTerm()
// send query and acquire result index page
(3) R(qi) = QueryWebSite( qi )
// download the pages of interest
(4) Download( R(qi) )
(5) done
Figure 4: Algorithm for crawling a Hidden Web site.
Figure 5: A set-formalization of the optimal query selection
problem.
2.3 Problem formalization
Theoretically, the problem of query selection can be formalized
as follows: We assume that the crawler downloads pages from a
Web site that has a set of pages S (the rectangle in Figure 5). We
represent each Web page in S as a point (dots in Figure 5). Every
potential query qi that we may issue can be viewed as a subset of S,
containing all the points (pages) that are returned when we issue qi
to the site. Each subset is associated with a weight that represents
the cost of issuing the query. Under this formalization, our goal is to
find which subsets (queries) cover the maximum number of points
(Web pages) with the minimum total weight (cost). This problem
is equivalent to the set-covering problem in graph theory [16].
There are two main difficulties that we need to address in this
formalization. First, in a practical situation, the crawler does not
know which Web pages will be returned by which queries, so the
subsets of S are not known in advance. Without knowing these
subsets the crawler cannot decide which queries to pick to
maximize the coverage. Second, the set-covering problem is known to
be NP-Hard [16], so an efficient algorithm to solve this problem
optimally in polynomial time has yet to be found.
In this paper, we will present an approximation algorithm that
can find a near-optimal solution at a reasonable computational cost.
Our algorithm leverages the observation that although we do not
know which pages will be returned by each query qi that we issue,
we can predict how many pages will be returned. Based on this
information our query selection algorithm can then select the best
queries that cover the content of the Web site. We present our
prediction method and our query selection algorithm in Section 3.
2.3.1 Performance Metric
Before we present our ideas for the query selection problem, we
briefly discuss some of our notation and the cost/performance
metrics.
Given a query qi, we use P(qi) to denote the fraction of pages
that we will get back if we issue query qi to the site. For example, if
a Web site has 10,000 pages in total, and if 3,000 pages are returned
for the query qi = medicine, then P(qi) = 0.3. We use P(q1 ∧
q2) to represent the fraction of pages that are returned from both
q1 and q2 (i.e., the intersection of P(q1) and P(q2)). Similarly, we
use P(q1 ∨ q2) to represent the fraction of pages that are returned
from either q1 or q2 (i.e., the union of P(q1) and P(q2)).
We also use Cost(qi) to represent the cost of issuing the query
qi. Depending on the scenario, the cost can be measured either in
time, network bandwidth, the number of interactions with the site,
or it can be a function of all of these. As we will see later, our
proposed algorithms are independent of the exact cost function.
In the most common case, the query cost consists of a number
of factors, including the cost for submitting the query to the site,
retrieving the result index page (Figure 3(a)) and downloading the
actual pages (Figure 3(b)). We assume that submitting a query
incurs a fixed cost of cq. The cost for downloading the result index
page is proportional, with a factor cr, to the number of matching
documents to the query, while the cost cd for downloading a matching
document is also fixed. Then the overall cost of query qi is
Cost(qi) = cq + crP(qi) + cdP(qi). (1)
In certain cases, some of the documents from qi may have already
been downloaded from previous queries. In this case, the crawler
may skip downloading these documents and the cost of qi can be
Cost(qi) = cq + crP(qi) + cdPnew(qi). (2)
Here, we use Pnew(qi) to represent the fraction of the new
documents from qi that have not been retrieved from previous queries.
Later in Section 3.1 we will study how we can estimate P(qi) and
Pnew(qi) to estimate the cost of qi.
Since our algorithms are independent of the exact cost function,
we will assume a generic cost function Cost(qi) in this paper. When
we need a concrete cost function, however, we will use Equation 2.
Given the notation, we can formalize the goal of a Hidden-Web
crawler as follows:
PROBLEM 1. Find the set of queries q1, . . . , qn that maximizes
P(q1 ∨ · · · ∨ qn)
under the constraint
Σ_{i=1}^{n} Cost(qi) ≤ t.
Here, t is the maximum download resource that the crawler has.
3. KEYWORD SELECTION
How should a crawler select the queries to issue? Given that the
goal is to download the maximum number of unique documents
from a textual database, we may consider one of the following
options:
• Random: We select random keywords from, say, an English
dictionary and issue them to the database. The hope is that a random
query will return a reasonable number of matching documents.
• Generic-frequency: We analyze a generic document corpus
collected elsewhere (say, from the Web) and obtain the generic
frequency distribution of each keyword. Based on this generic
distribution, we start with the most frequent keyword, issue it to the
Hidden-Web database and retrieve the result. We then continue
to the second-most frequent keyword and repeat this process
until we exhaust all download resources. The hope is that the
frequent keywords in a generic corpus will also be frequent in the
Hidden-Web database, returning many matching documents.
• Adaptive: We analyze the documents returned from the previous
queries issued to the Hidden-Web database and estimate which
keyword is most likely to return the most documents. Based on
this analysis, we issue the most promising query, and repeat
the process.
Among these three general policies, we may consider the
random policy as the base comparison point since it is expected to
perform the worst. Between the generic-frequency and the
adaptive policies, both policies may show similar performance if the
crawled database has a generic document collection without a
specialized topic. The adaptive policy, however, may perform
significantly better than the generic-frequency policy if the database has a
very specialized collection that is different from the generic corpus.
We will experimentally compare these three policies in Section 4.
While the first two policies (random and generic-frequency
policies) are easy to implement, we need to understand how we can
analyze the downloaded pages to identify the most promising query
in order to implement the adaptive policy. We address this issue in
the rest of this section.
3.1 Estimating the number of matching pages
In order to identify the most promising query, we need to
estimate how many new documents we will download if we issue the
query qi as the next query. That is, assuming that we have issued
queries q1, . . . , qi−1 we need to estimate P(q1∨· · ·∨qi−1∨qi), for
every potential next query qi and compare this value. In estimating
this number, we note that we can rewrite P(q1 ∨ · · · ∨ qi−1 ∨ qi)
as:
P((q1 ∨ · · · ∨ qi−1) ∨ qi)
= P(q1 ∨ · · · ∨ qi−1) + P(qi) − P((q1 ∨ · · · ∨ qi−1) ∧ qi)
= P(q1 ∨ · · · ∨ qi−1) + P(qi)
− P(q1 ∨ · · · ∨ qi−1)P(qi|q1 ∨ · · · ∨ qi−1) (3)
In the above formula, note that we can precisely measure P(q1 ∨
· · · ∨ qi−1) and P(qi | q1 ∨ · · · ∨ qi−1) by analyzing
previouslydownloaded pages: We know P(q1 ∨ · · · ∨ qi−1), the union of
all pages downloaded from q1, . . . , qi−1, since we have already
issued q1, . . . , qi−1 and downloaded the matching pages.4
We can
also measure P(qi | q1 ∨ · · · ∨ qi−1), the probability that qi
appears in the pages from q1, . . . , qi−1, by counting how many times
qi appears in the pages from q1, . . . , qi−1. Therefore, we only need
to estimate P(qi) to evaluate P(q1 ∨ · · · ∨ qi). We may consider a
number of different ways to estimate P(qi), including the
following:
1. Independence estimator: We assume that the appearance of the
term qi is independent of the terms q1, . . . , qi−1. That is, we
assume that P(qi) = P(qi|q1 ∨ · · · ∨ qi−1).
2. Zipf estimator: In [19], Ipeirotis et al. proposed a method to
estimate how many times a particular term occurs in the entire
corpus based on a subset of documents from the corpus. Their
method exploits the fact that the frequency of terms inside text
collections follows a power law distribution [30, 25]. That is,
if we rank all terms based on their occurrence frequency (with
the most frequent term having a rank of 1, second most frequent
a rank of 2 etc.), then the frequency f of a term inside the text
collection is given by:
f = α(r + β)^(−γ)    (4)
where r is the rank of the term and α, β, and γ are constants that
depend on the text collection.
Their main idea is (1) to estimate the three parameters, α, β and
γ, based on the subset of documents that we have downloaded
from previous queries, and (2) use the estimated parameters to
predict f given the ranking r of a term within the subset. For
a more detailed description on how we can use this method to
estimate P(qi), we refer the reader to the extended version of
this paper [27].
After we estimate P(qi) and P(qi|q1 ∨ · · · ∨ qi−1) values, we
can calculate P(q1 ∨ · · · ∨ qi). In Section 3.3, we explain how
we can efficiently compute P(qi|q1 ∨ · · · ∨ qi−1) by maintaining a
succinct summary table. In the next section, we first examine how
we can use this value to decide which query we should issue next
to the Hidden Web site.
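As an illustration, the following Python sketch computes the estimate of Equation 3 from the documents downloaded so far, using the independence estimator P(qi) ≈ P(qi|q1 ∨ · · · ∨ qi−1); representing each downloaded page as a set of terms is a simplifying assumption of the sketch, not a requirement of the method.

def estimate_new_coverage(term, downloaded_docs, coverage_so_far):
    # Estimate P(q1 v ... v q_{i-1} v qi) for a candidate term (Equation 3).
    #   downloaded_docs : list of term sets, one per page retrieved by q1, ..., q_{i-1}
    #   coverage_so_far : current estimate of P(q1 v ... v q_{i-1})
    if not downloaded_docs:
        return coverage_so_far
    # P(qi | q1 v ... v q_{i-1}): fraction of downloaded pages that contain the term
    p_cond = sum(1 for doc in downloaded_docs if term in doc) / len(downloaded_docs)
    p_qi = p_cond                        # independence estimator
    return coverage_so_far + p_qi - coverage_so_far * p_cond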
3.2 Query selection algorithm
The goal of the Hidden-Web crawler is to download the
maximum number of unique documents from a database using its
limited download resources. Given this goal, the Hidden-Web crawler
has to take two factors into account. (1) the number of new
documents that can be obtained from the query qi and (2) the cost of
issuing the query qi. For example, if two queries, qi and qj, incur
the same cost, but qi returns more new pages than qj, qi is more
desirable than qj. Similarly, if qi and qj return the same number
of new documents, but qi incurs less cost than qj, then qi is more
desirable. Based on this observation, the Hidden-Web crawler may
use the following efficiency metric to quantify the desirability of
the query qi:
Efficiency(qi) = Pnew(qi) / Cost(qi)
Here, Pnew(qi) represents the amount of new documents returned
for qi (the pages that have not been returned for previous queries).
Cost(qi) represents the cost of issuing the query qi.
Intuitively, the efficiency of qi measures how many new
documents are retrieved per unit cost, and can be used as an indicator of
4
For exact estimation, we need to know the total number of pages in
the site. However, in order to compare only relative values among
queries, this information is not actually needed.
ALGORITHM 3.1. Greedy SelectTerm()
Parameters:
T: The list of potential query keywords
Procedure
(1) Foreach tk in T do
(2) Estimate Efficiency(tk) = Pnew(tk) / Cost(tk)
(3) done
(4) return tk with maximum Efficiency(tk)
Figure 6: Algorithm for selecting the next query term.
how well our resources are spent when issuing qi. Thus, the
Hidden Web crawler can estimate the efficiency of every candidate qi,
and select the one with the highest value. By using its resources
more efficiently, the crawler may eventually download the
maximum number of unique documents. In Figure 6, we show the query
selection function that uses the concept of efficiency. In principle,
this algorithm takes a greedy approach and tries to maximize the
potential gain in every step.
We can estimate the efficiency of every query using the
estimation method described in Section 3.1. That is, the size of the new
documents from the query qi, Pnew(qi), is
Pnew(qi)
= P(q1 ∨ · · · ∨ qi−1 ∨ qi) − P(q1 ∨ · · · ∨ qi−1)
= P(qi) − P(q1 ∨ · · · ∨ qi−1)P(qi|q1 ∨ · · · ∨ qi−1)
from Equation 3, where P(qi) can be estimated using one of the
methods described in section 3. We can also estimate Cost(qi)
similarly. For example, if Cost(qi) is
Cost(qi) = cq + crP(qi) + cdPnew(qi)
(Equation 2), we can estimate Cost(qi) by estimating P(qi) and
Pnew(qi).
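Putting these pieces together, a minimal rendering of the greedy SelectTerm() of Figure 6 could look as follows; it reuses the hypothetical helpers estimate_new_coverage() and query_cost() sketched above and simply returns the candidate term with the highest estimated efficiency. It is a sketch under those assumptions, not the crawler implementation itself.

def select_term(candidates, downloaded_docs, coverage_so_far):
    best_term, best_eff = None, -1.0
    for term in candidates:
        new_cov = estimate_new_coverage(term, downloaded_docs, coverage_so_far)
        p_new = new_cov - coverage_so_far            # estimated fraction of new pages
        p_cond = (sum(1 for d in downloaded_docs if term in d) / len(downloaded_docs)
                  if downloaded_docs else 0.0)
        p_qi = p_cond                                # independence estimator
        eff = p_new / query_cost(p_qi, p_new)        # Efficiency(qi) = Pnew(qi) / Cost(qi)
        if eff > best_eff:
            best_term, best_eff = term, eff
    return best_term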
3.3 Efficient calculation of query statistics
In estimating the efficiency of queries, we found that we need to
measure P(qi|q1∨· · ·∨qi−1) for every potential query qi. This
calculation can be very time-consuming if we repeat it from scratch for
every query qi in every iteration of our algorithm. In this section,
we explain how we can compute P(qi|q1 ∨ · · · ∨ qi−1) efficiently
by maintaining a small table that we call a query statistics table.
The main idea for the query statistics table is that P(qi|q1 ∨· · ·∨
qi−1) can be measured by counting how many times the keyword
qi appears within the documents downloaded from q1, . . . , qi−1.
We record these counts in a table, as shown in Figure 7(a). The
left column of the table contains all potential query terms and the
right column contains the number of previously-downloaded
documents containing the respective term. For example, the table in
Figure 7(a) shows that we have downloaded 50 documents so far, and
the term model appears in 10 of these documents. Given this
number, we can compute that P(model|q1 ∨ · · · ∨ qi−1) = 10/50 = 0.2.
We note that the query statistics table needs to be updated
whenever we issue a new query qi and download more documents. This
update can be done efficiently as we illustrate in the following
example.
EXAMPLE 1. After examining the query statistics table of
Figure 7(a), we have decided to use the term computer as our next
query qi. From the new query qi = computer, we downloaded
20 more new pages. Out of these, 12 contain the keyword model
(a) After q1, . . . , qi−1 (total pages: 50)
Term tk | N(tk)
model | 10
computer | 38
digital | 50

(b) New pages from qi = computer (new pages: 20)
Term tk | N(tk)
model | 12
computer | 20
disk | 18

(c) After q1, . . . , qi (total pages: 50 + 20 = 70)
Term tk | N(tk)
model | 10+12 = 22
computer | 38+20 = 58
disk | 0+18 = 18
digital | 50+0 = 50
Figure 7: Updating the query statistics table.
Figure 8: A Web site that does not return all the results.
and 18 the keyword disk. The table in Figure 7(b) shows the
frequency of each term in the newly-downloaded pages.
We can update the old table (Figure 7(a)) to include this new
information by simply adding corresponding entries in Figures 7(a)
and (b). The result is shown on Figure 7(c). For example, keyword
model exists in 10 + 12 = 22 pages within the pages retrieved
from q1, . . . , qi. According to this new table, P(model|q1∨· · ·∨qi)
is now 22/70 ≈ 0.3.
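The query statistics table is essentially a per-term document-frequency counter over the downloaded pages. A minimal Python sketch of the update step of Example 1 is shown below; the use of collections.Counter and the particular composition of the 20 new pages are illustrative choices, arranged so that the counts of Figure 7 are reproduced.

from collections import Counter

def update_stats(stats, total_pages, new_docs):
    # stats: term -> number of downloaded documents containing the term
    for doc_terms in new_docs:
        stats.update(set(doc_terms))     # count each term at most once per document
    return stats, total_pages + len(new_docs)

def p_cond(term, stats, total_pages):
    # P(term | q1 v ... v qi): fraction of downloaded pages containing the term
    return stats[term] / total_pages if total_pages else 0.0

# Reproducing Figure 7: 50 pages downloaded so far, then 20 new pages from "computer".
stats, total = Counter({"model": 10, "computer": 38, "digital": 50}), 50
new_docs = ([{"model", "computer", "disk"}] * 10
            + [{"model", "computer"}] * 2
            + [{"computer", "disk"}] * 8)
stats, total = update_stats(stats, total, new_docs)
print(stats["model"], total, round(p_cond("model", stats, total), 2))   # 22 70 0.31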
3.4 Crawling sites that limit the number of
results
In certain cases, when a query matches a large number of pages,
the Hidden Web site returns only a portion of those pages. For
example, the Open Directory Project [2] allows the users to see only
up to 10,000 results after they issue a query. Obviously, this kind
of limitation has an immediate effect on our Hidden Web crawler.
First, since we can only retrieve up to a specific number of pages
per query, our crawler will need to issue more queries (and
potentially will use up more resources) in order to download all the
pages. Second, the query selection method that we presented in
Section 3.2 assumes that for every potential query qi, we can find
P(qi|q1 ∨ · · · ∨ qi−1). That is, for every query qi we can find the
fraction of documents in the whole text database that contains qi
with at least one of q1, . . . , qi−1. However, if the text database
returned only a portion of the results for any of the q1, . . . , qi−1 then
the value P(qi|q1 ∨ · · · ∨ qi−1) is not accurate and may affect our
decision for the next query qi, and potentially the performance of
our crawler. Since we cannot retrieve more results per query than
the maximum number the Web site allows, our crawler has no other
choice besides submitting more queries. However, there is a way
to estimate the correct value for P(qi|q1 ∨ · · · ∨ qi−1) in the case
where the Web site returns only a portion of the results.
Again, assume that the Hidden Web site we are currently
crawling is represented as the rectangle on Figure 8 and its pages as
points in the figure. Assume that we have already issued queries
q1, . . . , qi−1 which returned a number of results less than the
maximum number than the site allows, and therefore we have
downloaded all the pages for these queries (big circle in Figure 8). That
is, at this point, our estimation for P(qi|q1 ∨· · ·∨qi−1) is accurate.
Now assume that we submit query qi to the Web site, but due to a
limitation in the number of results that we get back, we retrieve the
set qi' (small circle in Figure 8) instead of the full set qi (dashed circle
in Figure 8). Now we need to update our query statistics table so
that it has accurate information for the next step. That is, although
we got only the set qi' back, for every potential query qi+1 we need to
find P(qi+1|q1 ∨ · · · ∨ qi):
P(qi+1|q1 ∨ · · · ∨ qi) = (1 / P(q1 ∨ · · · ∨ qi)) · [P(qi+1 ∧ (q1 ∨ · · · ∨ qi−1)) + P(qi+1 ∧ qi) − P(qi+1 ∧ qi ∧ (q1 ∨ · · · ∨ qi−1))]    (5)
In the previous equation, we can find P(q1 ∨· · ·∨qi) by
estimating P(qi) with the method shown in Section 3. Additionally, we
can calculate P(qi+1 ∧ (q1 ∨ · · · ∨ qi−1)) and P(qi+1 ∧ qi ∧ (q1 ∨
· · · ∨ qi−1)) by directly examining the documents that we have
downloaded from queries q1, . . . , qi−1. The term P(qi+1 ∧ qi),
however, is unknown and we need to estimate it. Assuming that qi'
is a random sample of qi, then:
P(qi+1 ∧ qi') / P(qi+1 ∧ qi) = P(qi') / P(qi)    (6)
From Equation 6 we can calculate P(qi+1 ∧ qi), and after we
substitute this value into Equation 5 we can find P(qi+1|q1 ∨ · · · ∨ qi).
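A small Python sketch of this correction is given below; it assumes that the probabilities on the right-hand sides of Equations 5 and 6 have already been measured or estimated as described above and are passed in as plain numbers (the function and parameter names are ours, not the paper's).

def corrected_p_cond(p_next_and_prev, p_next_and_qi_sample, p_next_and_qi_and_prev,
                     p_qi_sample, p_qi, p_union_i):
    # Estimate P(q_{i+1} | q1 v ... v qi) when only a sample qi' of qi was returned.
    #   p_next_and_qi_sample   : P(q_{i+1} ^ qi'), measured on the retrieved sample
    #   p_qi_sample, p_qi      : P(qi') and the estimated full P(qi)
    #   p_next_and_prev        : P(q_{i+1} ^ (q1 v ... v q_{i-1}))
    #   p_next_and_qi_and_prev : P(q_{i+1} ^ qi ^ (q1 v ... v q_{i-1}))
    #   p_union_i              : estimated P(q1 v ... v qi)
    # Equation 6: scale the sample-based intersection up to the full result set
    p_next_and_qi = p_next_and_qi_sample * p_qi / p_qi_sample
    # Equation 5
    return (p_next_and_prev + p_next_and_qi - p_next_and_qi_and_prev) / p_union_i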
4. EXPERIMENTAL EVALUATION
In this section we experimentally evaluate the performance of
the various algorithms for Hidden Web crawling presented in this
paper. Our goal is to validate our theoretical analysis through
realworld experiments, by crawling popular Hidden Web sites of
textual databases. Since the number of documents that are discovered
and downloaded from a textual database depends on the selection
of the words that will be issued as queries5
to the search interface
of each site, we compare the various selection policies that were
described in section 3, namely the random, generic-frequency, and
adaptive algorithms.
The adaptive algorithm learns new keywords and terms from the
documents that it downloads, and its selection process is driven by
a cost model as described in Section 3.2. To keep our experiment
and its analysis simple at this point, we will assume that the cost for
every query is constant. That is, our goal is to maximize the number
of downloaded pages by issuing the least number of queries. Later,
in Section 4.4 we will present a comparison of our policies based
on a more elaborate cost model. In addition, we use the
independence estimator (Section 3.1) to estimate P(qi) from downloaded
pages. Although the independence estimator is a simple estimator,
our experiments will show that it can work very well in practice.6
For the generic-frequency policy, we compute the frequency
distribution of words that appear in a 5.5-million-Web-page corpus
5
Throughout our experiments, once an algorithm has submitted a
query to a database, we exclude the query from subsequent
submissions to the same database from the same algorithm.
6
We defer the reporting of results based on the Zipf estimation to a
future work.
downloaded from 154 Web sites of various topics [26]. Keywords
are selected based on their decreasing frequency with which they
appear in this document set, with the most frequent one being
selected first, followed by the second-most frequent keyword, etc.7
Regarding the random policy, we use the same set of words
collected from the Web corpus, but in this case, instead of selecting
keywords based on their relative frequency, we choose them
randomly (uniform distribution). In order to further investigate how
the quality of the potential query-term list affects the random-based
algorithm, we construct two sets: one with the 16,000 most
frequent words of the term collection used in the generic-frequency
policy (hereafter, the random policy with the set of 16,000 words
will be referred to as random-16K), and another set with the 1
million most frequent words of the same collection as above (hereafter,
referred to as random-1M). The former set has frequent words that
appear in a large number of documents (at least 10,000 in our
collection), and therefore can be considered high-quality terms.
The latter set, though, contains a much larger collection of words,
among which some might be bogus and meaningless.
The experiments were conducted by employing each one of the
aforementioned algorithms (adaptive, generic-frequency,
random16K, and random-1M) to crawl and download contents from three
Hidden Web sites: The PubMed Medical Library,8
Amazon,9
and
the Open Directory Project[2]. According to the information on
PubMed's Web site, its collection contains approximately 14
million abstracts of biomedical articles. We consider these abstracts
as the documents in the site, and in each iteration of the adaptive
policy, we use these abstracts as input to the algorithm. Thus our
goal is to discover as many unique abstracts as possible by
repeatedly querying the Web query interface provided by PubMed. The
Hidden Web crawling on the PubMed Web site can be considered
as topic-specific, due to the fact that all abstracts within PubMed
are related to the fields of medicine and biology.
In the case of the Amazon Web site, we are interested in
downloading all the hidden pages that contain information on books.
The querying to Amazon is performed through the Software
Developer's Kit that Amazon provides for interfacing to its Web site,
and which returns results in XML form. The generic keyword
field is used for searching, and as input to the adaptive policy we
extract the product description and the text of customer reviews
when present in the XML reply. Since Amazon does not provide
any information on how many books it has in its catalogue, we use
random sampling on the 10-digit ISBN number of the books to
estimate the size of the collection. Out of the 10,000 random ISBN
numbers queried, 46 are found in the Amazon catalogue, therefore
the size of its book collection is estimated to be (46/10000) · 10^10 = 4.6
million books. It's also worth noting here that Amazon poses an
upper limit on the number of results (books in our case) returned
by each query, which is set to 32,000.
As for the third Hidden Web site, the Open Directory Project
(hereafter also referred to as dmoz), the site maintains the links to
3.8 million sites together with a brief summary of each listed site.
The links are searchable through a keyword-search interface. We
consider each indexed link together with its brief summary as the
document of the dmoz site, and we provide the short summaries
to the adaptive algorithm to drive the selection of new keywords
for querying. On the dmoz Web site, we perform two Hidden Web
crawls: the first is on its generic collection of 3.8-million indexed
7
We did not manually exclude stop words (e.g., the, is, of, etc.)
from the keyword list. As it turns out, all Web sites except PubMed
return matching documents for the stop words, such as the.
8
PubMed Medical Library: http://www.pubmed.org
9
Amazon Inc.: http://www.amazon.com
(Plot: cumulative fraction of unique documents vs. query number for the PubMed Web site; one curve per policy: adaptive, generic-frequency, random-16K, random-1M.)
Figure 9: Coverage of policies for Pubmed
(Plot: cumulative fraction of unique documents vs. query number for the Amazon Web site; one curve per policy: adaptive, generic-frequency, random-16K, random-1M.)
Figure 10: Coverage of policies for Amazon
sites, regardless of the category that they fall into. The other crawl
is performed specifically on the Arts section of dmoz (http://
dmoz.org/Arts), which comprises approximately 429,000
indexed sites that are relevant to Arts, making this crawl
topic-specific, as in PubMed. Like Amazon, dmoz also enforces an upper
limit on the number of returned results, which is 10,000 links with
their summaries.
4.1 Comparison of policies
The first question that we seek to answer is how the coverage metric evolves as we submit queries to the sites. That is, what
fraction of the collection of documents stored in the Hidden Web
site can we download as we continuously query for new words
selected using the policies described above? More formally, we are
interested in the value of P(q1 ∨ · · · ∨ qi−1 ∨ qi), after we submit
q1, . . . , qi queries, and as i increases.
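To make the metric concrete, the following minimal sketch (ours, not part of the crawler) computes the cumulative fraction of unique documents after each query from the per-query result sets:

def coverage_curve(result_sets, collection_size):
    # Cumulative fraction of unique documents retrieved after each query.
    seen = set()
    curve = []
    for results in result_sets:
        seen.update(results)
        curve.append(len(seen) / collection_size)
    return curve

# Toy example: three queries over a 10-document collection.
print(coverage_curve([[1, 2, 3], [3, 4], [5, 6, 7]], 10))  # [0.3, 0.4, 0.7]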
In Figures 9, 10, 11, and 12 we present the coverage metric for
each policy, as a function of the query number, for the Web sites
of PubMed, Amazon, general dmoz and the art-specific dmoz,
respectively. On the y-axis the fraction of the total documents
downloaded from the website is plotted, while the x-axis represents the
query number. A first observation from these graphs is that in
general, the generic-frequency and the adaptive policies perform much
better than the random-based algorithms. In all of the figures, the
graphs for the random-1M and the random-16K are significantly
below those of other policies.
Between the generic-frequency and the adaptive policies, we can see that the latter outperforms the former when the site is topic-specific.
Figure 11: Coverage of policies for general dmoz (cumulative fraction of unique documents vs. query number).
Figure 12: Coverage of policies for the Arts section of dmoz (cumulative fraction of unique documents vs. query number).
For example, for the PubMed site (Figure 9), the adaptive
algorithm issues only 83 queries to download almost 80% of the
documents stored in PubMed, but the generic-frequency algorithm
requires 106 queries for the same coverage. For the dmoz/Arts
crawl (Figure 12), the difference is even more substantial: the
adaptive policy is able to download 99.98% of the total sites indexed in
the Directory by issuing 471 queries, while the frequency-based
algorithm is much less effective using the same number of queries,
and discovers only 72% of the total number of indexed sites. The
adaptive algorithm, by examining the contents of the pages that it
downloads at each iteration, is able to identify the topic of the site as
expressed by the words that appear most frequently in the result-set.
Consequently, it is able to select words for subsequent queries that
are more relevant to the site than those preferred by the generic-frequency policy, which are drawn from a large, generic collection.
Table 1 shows a sample of 10 keywords out of 211 chosen and
submitted to the PubMed Web site by the adaptive algorithm, but not
by the other policies. For each keyword, we present the number of
the iteration, along with the number of results that it returned. As
one can see from the table, these keywords are highly relevant to
the topics of medicine and biology of the Public Medical Library,
and match against numerous articles stored in its Web site.
In both cases examined in Figures 9 and 12, the random-based policies perform much worse than the adaptive algorithm and the generic-frequency. It is worth noting, however, that the random-based policy with the small, carefully selected set of 16,000 quality words manages to download a considerable fraction of 42.5%
Iteration  Keyword     Number of Results
23         department  2,719,031
34         patients    1,934,428
53         clinical    1,198,322
67         treatment   4,034,565
69         medical     1,368,200
70         hospital    503,307
146        disease     1,520,908
172        protein     2,620,938
Table 1: Sample of keywords queried to PubMed exclusively by
the adaptive policy
from the PubMed Web site after 200 queries, while the coverage
for the Arts section of dmoz reaches 22.7%, after 471 queried
keywords. On the other hand, the random-based approach that makes
use of the vast collection of 1 million words, among which a large number are bogus keywords, fails to download even 1% of the total collection after submitting the same number of query words.
For the generic collections of Amazon and the dmoz sites, shown
in Figures 10 and 11 respectively, we get mixed results: the generic-frequency policy shows slightly better performance than the
adaptive policy for the Amazon site (Figure 10), and the adaptive method
clearly outperforms the generic-frequency for the general dmoz site
(Figure 11). A closer look at the log files of the two Hidden Web
crawlers reveals the main reason: Amazon was functioning in a
very flaky way when the adaptive crawler visited it, resulting in
a large number of lost results. Thus, we suspect that the slightly poorer performance of the adaptive policy is due to this
experimental variance. We are currently running another experiment to
verify whether this is indeed the case. Aside from this experimental
variance, the Amazon result indicates that if the collection and the
words that a Hidden Web site contains are generic enough, then the
generic-frequency approach may be a good candidate algorithm for
effective crawling.
As in the case of topic-specific Hidden Web sites, the random-based policies also exhibit poor performance compared to the other two algorithms when crawling generic sites: for the Amazon Web site, random-16K succeeds in downloading almost 36.7% after issuing 775 queries, while for the generic collection of dmoz, the fraction of the collection of links downloaded is 13.5% after the 770th query. Finally, as expected, random-1M is even worse than random-16K, downloading only 14.5% of Amazon and 0.3% of the generic
dmoz.
In summary, the adaptive algorithm performs remarkably well in
all cases: it is able to discover and download most of the documents
stored in Hidden Web sites by issuing the least number of queries.
When the collection refers to a specific topic, it is able to identify
the keywords most relevant to the topic of the site and consequently
ask for terms that are likely to return a large number of results. On the other hand, the generic-frequency policy proves to
be quite effective too, though less than the adaptive: it is able to
retrieve a large portion of the collection relatively fast, and when the site is not topic-specific, its effectiveness can approach that of the adaptive policy (e.g., Amazon). Finally, the random policy performs poorly in
general, and should not be preferred.
4.2 Impact of the initial query
An interesting issue that deserves further examination is whether
the initial choice of the keyword used as the first query issued by
the adaptive algorithm affects its effectiveness in subsequent
iterations. The choice of this keyword is not made by the
Figure 13: Convergence of the adaptive algorithm using different initial queries (pubmed, data, information, return) for crawling the PubMed Web site (cumulative fraction of documents vs. query number).
adaptive algorithm itself and has to be manually set, since its query
statistics tables have not been populated yet. Thus, the selection is
generally arbitrary, so for purposes of fully automating the whole
process, some additional investigation seems necessary.
For this reason, we initiated three adaptive Hidden Web crawlers
targeting the PubMed Web site with different seed-words: the word
data, which returns 1,344,999 results, the word information
that reports 308,474 documents, and the word return that retrieves 29,707 pages, out of 14 million. These keywords
represent varying degrees of term popularity in PubMed, with the first
one being of high popularity, the second of medium, and the third
of low. We also show results for the keyword pubmed, used in
the experiments for coverage of Section 4.1, and which returns 695
articles. As we can see from Figure 13, after a small number of
queries, all four crawlers roughly download the same fraction of
the collection, regardless of their starting point: Their coverages
are roughly equivalent from the 25th query. Eventually, all four
crawlers use the same set of terms for their queries, regardless of
the initial query. In the specific experiment, from the 36th query
onward, all four crawlers use the same terms for their queries in each
iteration, or the same terms appear offset by only one or two queries. Our result confirms the observation of [11] that the choice of
the initial query has minimal effect on the final performance. We
can explain this intuitively as follows: Our algorithm approximates
the optimal set of queries to use for a particular Web site. Once
the algorithm has issued a significant number of queries, it has an
accurate estimation of the content of the Web site, regardless of
the initial query. Since this estimation is similar for all runs of the
algorithm, the crawlers will use roughly the same queries.
4.3 Impact of the limit in the number of results
While the Amazon and dmoz sites have the respective limit of
32,000 and 10,000 in their result sizes, these limits may be larger
than those imposed by other Hidden Web sites. In order to
investigate how a tighter limit in the result size affects the
performance of our algorithms, we performed two additional crawls to
the generic-dmoz site: we ran the generic-frequency and adaptive
policies but we retrieved only up to the top 1,000 results for
every query. In Figure 14 we plot the coverage for the two policies
as a function of the number of queries. As one might expect, by
comparing the new result in Figure 14 to that of Figure 11 where
the result limit was 10,000, we conclude that the tighter limit
requires a higher number of queries to achieve the same coverage.
Figure 14: Coverage of general dmoz after limiting the number of results to 1,000 (cumulative fraction of unique pages downloaded per query, for the adaptive and generic-frequency policies).
For example, when the result limit was 10,000, the adaptive policy could download 70% of the site after issuing 630 queries, while
it had to issue 2,600 queries to download 70% of the site when
the limit was 1,000. On the other hand, our new result shows that
even with a tight result limit, it is still possible to download most
of a Hidden Web site after issuing a reasonable number of queries.
The adaptive policy could download more than 85% of the site
after issuing 3,500 queries when the limit was 1,000. Finally, our
result shows that our adaptive policy consistently outperforms the
generic-frequency policy regardless of the result limit. In both
Figure 14 and Figure 11, our adaptive policy shows significantly larger
coverage than the generic-frequency policy for the same number of
queries.
4.4 Incorporating the document download
cost
For brevity of presentation, the performance evaluation results
provided so far assumed a simplified cost-model where every query
involved a constant cost. In this section we present results regarding
the performance of the adaptive and generic-frequency algorithms
using Equation 2 to drive our query selection process. As we
discussed in Section 2.3.1, this query cost model includes the cost for
submitting the query to the site, retrieving the result index page,
and also downloading the actual pages. For these costs, we
examined the size of every result in the index page and the sizes of the
documents, and we chose cq = 100, cr = 100, and cd = 10000,
as values for the parameters of Equation 2, and for the particular
experiment that we ran on the PubMed website. The values that
we selected imply that the cost for issuing one query and retrieving
one result from the result index page are roughly the same, while
the cost for downloading an actual page is 100 times larger. We
believe that these values are reasonable for the PubMed Web site.
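As a rough illustration of how such a per-query cost is charged (the exact form of Equation 2 is given in Section 2.3.1; the formulation below, cost = cq + cr · results + cd · new documents, is our reading of the description above, and the names are ours):

CQ, CR, CD = 100, 100, 10000  # parameter values used for the PubMed experiment

def query_cost(num_results, num_new_docs, cq=CQ, cr=CR, cd=CD):
    # Resource units for one query: submission, result-index retrieval,
    # and download of the new documents it yields.
    return cq + cr * num_results + cd * num_new_docs

# A query returning 200 index entries, 150 of them new documents:
print(query_cost(200, 150))  # 1,520,100 resource units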
Figure 15 shows the coverage of the adaptive and generic-frequency algorithms as a function of the resource units used
during the download process. The horizontal axis is the amount of
resources used, and the vertical axis is the coverage. As it is
evident from the graph, the adaptive policy makes more efficient use of
the available resources, as it is able to download more articles than
the generic-frequency, using the same amount of resource units.
However, the difference in coverage is less dramatic in this case,
compared to the graph of Figure 9. The smaller difference is due
to the fact that under the current cost metric, the download cost of
documents constitutes a significant portion of the cost. Therefore,
when both policies downloaded the same number of documents,
the saving of the adaptive policy is not as dramatic as before.
Figure 15: Coverage of PubMed after incorporating the document download cost (cumulative fraction of unique pages downloaded vs. total cost, with cq = 100, cr = 100, cd = 10000, for the adaptive and generic-frequency policies).
That is, the savings in the query cost and the result index download cost
is only a relatively small portion of the overall cost. Still, we
observe noticeable savings from the adaptive policy. At the total cost
of 8000, for example, the coverage of the adaptive policy is roughly
0.5 while the coverage of the frequency policy is only 0.3.
5. RELATED WORK
In a recent study, Raghavan and Garcia-Molina [29] present an
architectural model for a Hidden Web crawler. The main focus of
this work is to learn Hidden-Web query interfaces, not to
generate queries automatically. The potential queries are either provided
manually by users or collected from the query interfaces. In
contrast, our main focus is to generate queries automatically without
any human intervention.
The idea of automatically issuing queries to a database and
examining the results has been previously used in different contexts.
For example, in [10, 11], Callan and Connell try to acquire an accurate language model by collecting a uniform random sample from
the database. In [22] Lawrence and Giles issue random queries to
a number of Web Search Engines in order to estimate the fraction
of the Web that has been indexed by each of them. In a similar
fashion, Bharat and Broder [8] issue random queries to a set of
Search Engines in order to estimate the relative size and overlap of
their indexes. In [6], Barbosa and Freire experimentally evaluate
methods for building multi-keyword queries that can return a large
fraction of a document collection. Our work differs from the
previous studies in two ways. First, it provides a theoretical framework
for analyzing the process of generating queries for a database and
examining the results, which can help us better understand the
effectiveness of the methods presented in the previous work. Second,
we apply our framework to the problem of Hidden Web crawling
and demonstrate the efficiency of our algorithms.
Cope et al. [15] propose a method to automatically detect whether
a particular Web page contains a search form. This work is
complementary to ours; once we detect search interfaces on the Web
using the method in [15], we may use our proposed algorithms to
download pages automatically from those Web sites.
Reference [4] reports methods to estimate what fraction of a
text database can be eventually acquired by issuing queries to the
database. In [3] the authors study query-based techniques that can
extract relational data from large text databases. Again, these works
study orthogonal issues and are complementary to our work.
In order to make documents in multiple textual databases
searchable at a central place, a number of harvesting approaches have
been proposed (e.g., OAI [21], DP9 [24]). These approaches
essentially assume cooperative document databases that willingly share
some of their metadata and/or documents to help a third-party search
engine to index the documents. Our approach assumes
uncooperative databases that do not share their data publicly and whose
documents are accessible only through search interfaces.
There exists a large body of work studying how to identify the
most relevant database given a user query [20, 19, 14, 23, 18]. This
body of work is often referred to as the meta-searching or database selection problem over the Hidden Web. For example, [19]
suggests the use of focused probing to classify databases into a topical
category, so that given a query, a relevant database can be selected
based on its topical category. Our vision is different from this body
of work in that we intend to download and index the Hidden pages
at a central location in advance, so that users can access all the
information at their convenience from one single location.
6. CONCLUSION AND FUTURE WORK
Traditional crawlers normally follow links on the Web to
discover and download pages. Therefore they cannot get to the Hidden
Web pages which are only accessible through query interfaces. In
this paper, we studied how we can build a Hidden Web crawler that
can automatically query a Hidden Web site and download pages
from it. We proposed three different query generation policies for
the Hidden Web: a policy that picks queries at random from a list
of keywords, a policy that picks queries based on their frequency
in a generic text collection, and a policy which adaptively picks a
good query based on the content of the pages downloaded from the
Hidden Web site. Experimental evaluation on 4 real Hidden Web
sites shows that our policies have a great potential. In particular, in
certain cases the adaptive policy can download more than 90% of
a Hidden Web site after issuing approximately 100 queries. Given
these results, we believe that our work provides a potential
mechanism to improve the search-engine coverage of the Web and the
user experience of Web search.
6.1 Future Work
We briefly discuss some future-research avenues.
Multi-attribute Databases We are currently investigating how
to extend our ideas to structured multi-attribute databases. While
generating queries for multi-attribute databases is clearly a more
difficult problem, we may exploit the following observation to
address this problem: When a site supports multi-attribute queries,
the site often returns pages that contain values for each of the query
attributes. For example, when an online bookstore supports queries
on title, author and isbn, the pages returned from a query
typically contain the title, author and ISBN of corresponding books.
Thus, if we can analyze the returned pages and extract the values
for each field (e.g., title = 'Harry Potter', author = 'J.K. Rowling', etc.), we can apply the same idea that we
used for the textual database: estimate the frequency of each
attribute value and pick the most promising one. The main challenge
is to automatically segment the returned pages so that we can
identify the sections of the pages that present the values corresponding
to each attribute. Since many Web sites follow limited formatting
styles in presenting multiple attributes - for example, most book
titles are preceded by the label Title: - we believe we may learn
page-segmentation rules automatically from a small set of training
examples.
Other Practical Issues In addition to the automatic query
generation problem, there are many practical issues to be addressed
to build a fully automatic Hidden-Web crawler. For example, in
this paper we assumed that the crawler already knows all query
interfaces for Hidden-Web sites. But how can the crawler discover
the query interfaces? The method proposed in [15] may be a good
starting point. In addition, some Hidden-Web sites return their
results in batches of, say, 20 pages, so the user has to click on a
next button in order to see more results. In this case, a fully
automatic Hidden-Web crawler should know that the first result index
page contains only a partial result and press the next button
automatically. Finally, some Hidden Web sites may contain an infinite
number of Hidden Web pages which do not contribute much
significant content (e.g. a calendar with links for every day). In this
case the Hidden-Web crawler should be able to detect that the site
does not have much more new content and stop downloading pages
from the site. Page similarity detection algorithms may be useful
for this purpose [9, 13].
7. REFERENCES
[1] Lexisnexis http://www.lexisnexis.com.
[2] The Open Directory Project, http://www.dmoz.org.
[3] E. Agichtein and L. Gravano. Querying text databases for efficient information
extraction. In ICDE, 2003.
[4] E. Agichtein, P. Ipeirotis, and L. Gravano. Modeling query-based access to text
databases. In WebDB, 2003.
[5] Article on New York Times. Old Search Engine, the Library, Tries to Fit Into a
Google World. Available at: http:
//www.nytimes.com/2004/06/21/technology/21LIBR.html,
June 2004.
[6] L. Barbosa and J. Freire. Siphoning hidden-web data through keyword-based
interfaces. In SBBD, 2004.
[7] M. K. Bergman. The deep web: Surfacing hidden value. http://www.press.umich.edu/jep/07-01/bergman.html.
[8] K. Bharat and A. Broder. A technique for measuring the relative size and
overlap of public web search engines. In WWW, 1998.
[9] A. Z. Broder, S. C. Glassman, M. S. Manasse, and G. Zweig. Syntactic
clustering of the web. In WWW, 1997.
[10] J. Callan, M. Connell, and A. Du. Automatic discovery of language models for
text databases. In SIGMOD, 1999.
[11] J. P. Callan and M. E. Connell. Query-based sampling of text databases.
Information Systems, 19(2):97-130, 2001.
[12] K. C.-C. Chang, B. He, C. Li, and Z. Zhang. Structured databases on the web:
Observations and implications. Technical report, UIUC.
[13] J. Cho, N. Shivakumar, and H. Garcia-Molina. Finding replicated web
collections. In SIGMOD, 2000.
[14] W. Cohen and Y. Singer. Learning to query the web. In AAAI Workshop on
Internet-Based Information Systems, 1996.
[15] J. Cope, N. Craswell, and D. Hawking. Automated discovery of search
interfaces on the web. In 14th Australasian conference on Database
technologies, 2003.
[16] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms,
2nd Edition. MIT Press/McGraw Hill, 2001.
[17] D. Florescu, A. Y. Levy, and A. O. Mendelzon. Database techniques for the
world-wide web: A survey. SIGMOD Record, 27(3):59-74, 1998.
[18] B. He and K. C.-C. Chang. Statistical schema matching across web query
interfaces. In SIGMOD Conference, 2003.
[19] P. Ipeirotis and L. Gravano. Distributed search over the hidden web:
Hierarchical database sampling and selection. In VLDB, 2002.
[20] P. G. Ipeirotis, L. Gravano, and M. Sahami. Probe, count, and classify:
Categorizing hidden web databases. In SIGMOD, 2001.
[21] C. Lagoze and H. V. Sompel. The Open Archives Initiative: Building a
low-barrier interoperability framework. In JCDL, 2001.
[22] S. Lawrence and C. L. Giles. Searching the World Wide Web. Science,
280(5360):98-100, 1998.
[23] V. Z. Liu, R. C. Luo, J. Cho, and W. W. Chu. Dpro: A probabilistic
approach for hidden web database selection using dynamic probing. In ICDE,
2004.
[24] X. Liu, K. Maly, M. Zubair and M. L. Nelson. DP9-An OAI Gateway Service
for Web Crawlers. In JCDL, 2002.
[25] B. B. Mandelbrot. Fractal Geometry of Nature. W. H. Freeman & Co.
[26] A. Ntoulas, J. Cho, and C. Olston. What's new on the web? The evolution of the
web from a search engine perspective. In WWW, 2004.
[27] A. Ntoulas, P. Zerfos, and J. Cho. Downloading hidden web content. Technical
report, UCLA, 2004.
[28] S. Olsen. Does search engine's power threaten web's independence?
http://news.com.com/2009-1023-963618.html.
[29] S. Raghavan and H. Garcia-Molina. Crawling the hidden web. In VLDB, 2001.
[30] G. K. Zipf. Human Behavior and the Principle of Least-Effort.
Addison-Wesley, Cambridge, MA, 1949.
| textual database;independence estimator;hidden web;hide web crawl;generic-frequency policy;deep web crawler;query selection;query-selection policy;adaptive algorithm;crawling policy;adaptive policy;keyword query;deep web;hidden web crawler |
train_H-83 | Estimating the Global PageRank of Web Communities | Localized search engines are small-scale systems that index a particular community on the web. They offer several benefits over their large-scale counterparts in that they are relatively inexpensive to build, and can provide more precise and complete search capability over their relevant domains. One disadvantage such systems have over large-scale search engines is the lack of global PageRank values. Such information is needed to assess the value of pages in the localized search domain within the context of the web as a whole. In this paper, we present well-motivated algorithms to estimate the global PageRank values of a local domain. The algorithms are all highly scalable in that, given a local domain of size n, they use O(n) resources that include computation time, bandwidth, and storage. We test our methods across a variety of localized domains, including site-specific domains and topic-specific domains. We demonstrate that by crawling as few as n or 2n additional pages, our methods can give excellent global PageRank estimates. | 1. INTRODUCTION
Localized search engines are small-scale search engines
that index only a single community of the web. Such
communities can be site-specific domains, such as pages within
the cs.utexas.edu domain, or topic-related communities, for example, political websites. Compared to the web graph
crawled and indexed by large-scale search engines, the size
of such local communities is typically orders of magnitude
smaller. Consequently, the computational resources needed
to build such a search engine are also similarly lighter. By
restricting themselves to smaller, more manageable sections
of the web, localized search engines can also provide more
precise and complete search capabilities over their respective
domains.
One drawback of localized indexes is the lack of global
information needed to compute link-based rankings. The
PageRank algorithm [3], has proven to be an effective such
measure. In general, the PageRank of a given page is
dependent on pages throughout the entire web graph. In the
context of a localized search engine, if the PageRanks are
computed using only the local subgraph, then we would
expect the resulting PageRanks to reflect the perceived
popularity within the local community and not of the web as a
whole. For example, consider a localized search engine that
indexes political pages with conservative views. A person
wishing to research the opinions on global warming within
the conservative political community may encounter
numerous such opinions across various websites. If only local
PageRank values are available, then the search results will reflect
only strongly held beliefs within the community. However, if
global PageRanks are also available, then the results can
additionally reflect outsiders" views of the conservative
community (those documents that liberals most often access within
the conservative community).
Thus, for many localized search engines, incorporating
global PageRanks can improve the quality of search results.
However, the number of pages a local search engine indexes
is typically orders of magnitude smaller than the number of
pages indexed by their large-scale counterparts. Localized
search engines do not have the bandwidth, storage capacity,
or computational power to crawl, download, and compute
the global PageRanks of the entire web. In this work, we
present a method of approximating the global PageRanks of
a local domain while only using resources of the same
order as those needed to compute the PageRanks of the local
subgraph.
Our proposed method looks for a supergraph of our local
subgraph such that the local PageRanks within this
supergraph are close to the true global PageRanks. We construct
this supergraph by iteratively crawling global pages on the
current web frontier-i.e., global pages with inlinks from
pages that have already been crawled. In order to provide
a good approximation to the global PageRanks, care must
be taken when choosing which pages to crawl next; in this
paper, we present a well-motivated page selection algorithm
that also performs well empirically. This algorithm is
derived from a well-defined problem objective and has a
running time linear in the number of local nodes.
We experiment across several types of local subgraphs,
including four topic related communities and several
sitespecific domains. To evaluate performance, we measure the
difference between the current global PageRank estimate
and the global PageRank, as a function of the number of
pages crawled. We compare our algorithm against several
heuristics and also against a baseline algorithm that chooses
pages at random, and we show that our method outperforms
these other methods. Finally, we empirically demonstrate
that, given a local domain of size n, we can provide good
approximations to the global PageRank values by crawling
at most n or 2n additional pages.
The paper is organized as follows. Section 2 gives an
overview of localized search engines and outlines their
advantages over global search. Section 3 provides background
on the PageRank algorithm. Section 4 formally defines our
problem, and section 5 presents our page selection criteria
and derives our algorithms. Section 6 provides
experimental results, section 7 gives an overview of related work, and,
finally, conclusions are given in section 8.
2. LOCALIZED SEARCH ENGINES
Localized search engines index a single community of the
web, typically either a site-specific community or a topic-specific community. Localized search engines enjoy three
major advantages over their large-scale counterparts: they
are relatively inexpensive to build, they can offer more
precise search capability over their local domain, and they can
provide a more complete index.
The resources needed to build a global search engine are
enormous. A 2003 study by Lyman et al. [13] found that
the ‘surface web" (publicly available static sites) consists of
8.9 billion pages, and that the average size of these pages is
approximately 18.7 kilobytes. To download a crawl of this
size, approximately 167 terabytes of space is needed. For a
researcher who wishes to build a search engine with access
to a couple of workstations or a small server, storage of this
magnitude is simply not available. However, building a
localized search engine over a web community of a hundred
thousand pages would only require a few gigabytes of
storage. The computational burden required to support search
queries over a database this size is more manageable as well.
We note that, for topic-specific search engines, the relevant
community can be efficiently identified and downloaded by
using a focused crawler [21, 4].
For site-specific domains, the local domain is readily
available on their own web server. This obviates the need for
crawling or spidering, and a complete and up-to-date
index of the domain can thus be guaranteed. This is in
contrast to their large-scale counterparts, which suffer from
several shortcomings. First, crawling dynamically generated
pages-pages in the ‘hidden web"-has been the subject of
research [20] and is a non-trivial task for an external crawler.
Second, site-specific domains can enable the robots
exclusion policy. This prohibits external search engines' crawlers
from downloading content from the domain, and an external
search engine must instead rely on outside links and anchor
text to index these restricted pages.
By restricting itself to only a specific domain of the
internet, a localized search engine can provide more precise
search results. Consider the canonical ambiguous search
query, ‘jaguar", which can refer to either the car
manufacturer or the animal. A scientist trying to research the
habitat and evolutionary history of a jaguar may have better
success using a finely tuned zoology-specific search engine
than querying Google with multiple keyword searches and
wading through irrelevant results. A method to learn
better ranking functions for retrieval was recently proposed by
Radlinski and Joachims [19] and has been applied to various
local domains, including Cornell University's website [8].
3. PAGERANK OVERVIEW
The PageRank algorithm defines the importance of web
pages by analyzing the underlying hyperlink structure of a
web graph. The algorithm works by building a Markov chain
from the link structure of the web graph and computing its
stationary distribution. One way to compute the
stationary distribution of a Markov chain is to find the limiting
distribution of a random walk over the chain. Thus, the
PageRank algorithm uses what is sometimes referred to as
the ‘random surfer" model. In each step of the random walk,
the ‘surfer" either follows an outlink from the current page
(i.e. the current node in the chain), or jumps to a random
page on the web.
We now precisely define the PageRank problem. Let U
be an m × m adjacency matrix for a given web graph such
that Uji = 1 if page i links to page j and Uji = 0 otherwise.
We define the PageRank matrix PU to be:
PU = α U DU^{-1} + (1 − α) v e^T,    (1)
where DU is the (unique) diagonal matrix such that U DU^{-1} is column stochastic, α is a given scalar such that 0 ≤ α ≤ 1, e is the vector of all ones, and v is a non-negative, L1-normalized vector, sometimes called the 'random surfer' vector. Note that the matrix DU^{-1} is well-defined only if each
column of U has at least one non-zero entry-i.e., each page
in the webgraph has at least one outlink. In the presence of
such ‘dangling nodes" that have no outlinks, one commonly
used solution, proposed by Brin et al. [3], is to replace each
zero column of U by a non-negative, L1-normalized vector.
The PageRank vector r is the dominant eigenvector of the
PageRank matrix, r = PU r. We will assume, without loss of
generality, that r has an L1-norm of one. Computationally,
r can be computed using the power method. This method
first chooses a random starting vector r^(0), and iteratively multiplies the current vector by the PageRank matrix PU; see Algorithm 1. In general, each iteration of the power method can take O(m^2) operations when PU is a dense
matrix. However, in practice, the number of links in a web
graph will be of the order of the number of pages. By
exploiting the sparsity of the PageRank matrix, the work per
iteration can be reduced to O(km), where k is the average
number of links per web page. It has also been shown that
the total number of iterations needed for convergence is
proportional to α and does not depend on the size of the web
graph [11, 7]. Finally, the total space needed is also O(km),
mainly to store the matrix U.
Algorithm 1: A linear time (per iteration) algorithm for
computing PageRank.
ComputePR(U)
Input: U: Adjacency matrix.
Output: r: PageRank vector.
Choose (randomly) an initial non-negative vector r^(0) such that ||r^(0)||_1 = 1.
i ← 0
repeat
    i ← i + 1
    ν ← α U DU^{-1} r^(i−1)    {α is the random surfing probability}
    r^(i) ← ν + (1 − α) v    {v is the random surfer vector.}
until ||r^(i) − r^(i−1)||_1 < δ    {δ is the convergence threshold.}
r ← r^(i)
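A minimal sparse implementation of this iteration, written in Python with scipy, is sketched below for illustration only; dangling columns are handled by redistributing their mass through the surfer vector v, as described above, and all names are ours rather than part of the paper.

import numpy as np
from scipy.sparse import csc_matrix

def compute_pagerank(U, alpha=0.85, v=None, tol=1e-6, max_iter=1000):
    # Power method for r = alpha*U*D^{-1}*r + (1-alpha)*v, where U[j, i] = 1
    # iff page i links to page j; dangling columns are redistributed via v.
    m = U.shape[0]
    v = np.ones(m) / m if v is None else v
    out_deg = np.asarray(U.sum(axis=0)).ravel()   # outlink count of each page
    r = np.ones(m) / m
    for _ in range(max_iter):
        weights = np.where(out_deg > 0, r / np.maximum(out_deg, 1), 0.0)
        nu = alpha * (U @ weights)                # alpha * U * D^{-1} * r
        nu += alpha * r[out_deg == 0].sum() * v   # dangling mass goes to v
        r_next = nu + (1 - alpha) * v
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next
    return r

# Example: a three-page chain 1 -> 2 -> 3 (page 3 has no outlinks).
U = csc_matrix(np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]]))
print(compute_pagerank(U))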
4. PROBLEM DEFINITION
Given a local domain L, let G be an N × N adjacency
matrix for the entire connected component of the web that
contains L, such that Gji = 1 if page i links to page j
and Gji = 0 otherwise. Without loss of generality, we will
partition G as:
G = [ L      Gout
      Lout   Gwithin ],    (2)
where L is the n × n local subgraph corresponding to links
inside the local domain, Lout is the subgraph that
corresponds to links from the local domain pointing out to the
global domain, Gout is the subgraph containing links from
the global domain into the local domain, and Gwithin
contains links within the global domain. We assume that when
building a localized search engine, only pages inside the
local domain are crawled, and the links between these pages
are represented by the subgraph L. The links in Lout are
also known, as these point from crawled pages in the local
domain to uncrawled pages in the global domain.
As defined in equation (1), PG is the PageRank matrix
formed from the global graph G, and we define the global
PageRank vector of this graph to be g. Let the n-length
vector p∗ be the L1-normalized vector corresponding to the global PageRank of the pages in the local domain L:
p∗ = EL g / ||EL g||_1,
where EL = [ I | 0 ] is the restriction matrix that selects
the components from g corresponding to nodes in L. Let p
denote the PageRank vector constructed from the local
domain subgraph L. In practice, the observed local PageRank
p and the global PageRank p∗
will be quite different. One
would expect that as the size of local matrix L approaches
the size of global matrix G, the global PageRank and the
observed local PageRank will become more similar. Thus, one
approach to estimating the global PageRank is to crawl the
entire global domain, compute its PageRank, and extract
the PageRanks of the local domain.
Typically, however, n ≪ N, i.e., the number of global pages is much larger than the number of local pages.
Therefore, crawling all global pages will quickly exhaust all local
resources (computational, storage, and bandwidth) available
to create the local search engine. We instead seek a
supergraph ˆF of our local subgraph L with size O(n). Our goal
Algorithm 2: The FindGlobalPR algorithm.
FindGlobalPR(L, Lout, T, k)
Input: L: zero-one adjacency matrix for the local
domain, Lout: zero-one outlink matrix from L to global
subgraph as in (2), T: number of iterations, k: number of
pages to crawl per iteration.
Output: ˆp: an improved estimate of the global
PageRank of L.
F ← L
Fout ← Lout
f ← ComputePR(F )
for (i = 1 to T)
{Determine which pages to crawl next}
pages ← SelectNodes(F , Fout, f, k)
Crawl pages, augment F and modify Fout
{Update PageRanks for new local domain}
f ← ComputePR(F )
end
{Extract PageRanks of original local domain & normalize}
p̂ ← EL f / ||EL f||_1
is to find such a supergraph F̂ with PageRank f̂, so that f̂ when restricted to L is close to p∗. Formally, we seek to minimize
GlobalDiff(f̂) = || EL f̂ / ||EL f̂||_1 − p∗ ||_1.    (3)
We choose the L1 norm for measuring the error as it does
not place excessive weight on outliers (as the L2 norm does,
for example), and also because it is the most commonly used
distance measure in the literature for comparing PageRank
vectors, as well as for detecting convergence of the
algorithm [3].
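For evaluation purposes (when the global PageRank g is available, as in the experiments of Section 6), the objective (3) can be computed directly; a small sketch with names of our choosing:

import numpy as np

def global_diff(f_hat, g, local_indices):
    # L1 distance (3) between the renormalized local restrictions of the
    # supergraph PageRank estimate f_hat and the true global PageRank g.
    idx = np.asarray(local_indices)
    p_hat = f_hat[idx] / f_hat[idx].sum()
    p_star = g[idx] / g[idx].sum()
    return np.abs(p_hat - p_star).sum()

Here f_hat is indexed over the crawled supergraph and local_indices selects the pages of the original local domain; in our experiments the first n entries play that role.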
We propose a greedy framework, given in Algorithm 2,
for constructing ˆF . Initially, F is set to the local subgraph
L, and the PageRank f of this graph is computed. The
algorithm then proceeds as follows. First, the SelectNodes
algorithm (which we discuss in the next section) is called
and it returns a set of k nodes to crawl next from the set
of nodes in the current crawl frontier, Fout. These selected
nodes are then crawled to expand the local subgraph, F , and
the PageRanks of this expanded graph are then recomputed.
These steps are repeated for each of T iterations. Finally,
the PageRank vector ˆp, which is restricted to pages within
the original local domain, is returned. Given our
computation, bandwidth, and memory restrictions, we will assume
that the algorithm will crawl at most O(n) pages. Since the
PageRanks are computed in each iteration of the algorithm,
which is an O(n) operation, we will also assume that the
number of iterations T is a constant. Of course, the main
challenge here is in selecting which set of k nodes to crawl
next. In the next section, we formally define the problem
and give efficient algorithms.
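The structure of Algorithm 2 can be summarized by the following Python-style sketch; select_nodes, crawl_pages, and compute_pr are placeholders for the node selection routine of the next section, the crawler, and Algorithm 1, respectively, and the first n_local entries of the PageRank vector are assumed to correspond to the original local domain.

def find_global_pr(F, F_out, T, k, select_nodes, crawl_pages, compute_pr, n_local):
    # Greedy supergraph construction, mirroring Algorithm 2.
    f = compute_pr(F)
    for _ in range(T):
        pages = select_nodes(F, F_out, f, k)      # pick k frontier pages
        F, F_out = crawl_pages(F, F_out, pages)   # crawl them and grow the graph
        f = compute_pr(F)                         # recompute PageRank
    p_hat = f[:n_local]                           # restrict to the original local domain
    return p_hat / p_hat.sum()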
5. NODE SELECTION
In this section, we present node selection algorithms that
operate within the greedy framework presented in the
previous section. We first give a well-defined criteria for the
page selection problem and provide experimental evidence
that this criteria can effectively identify pages that optimize
our problem objective (3). We then present our main
al118
Research Track Paper
gorithmic contribution of the paper, a method with linear
running time that is derived from this page selection
criteria. Finally, we give an intuitive analysis of our algorithm in
terms of ‘leaks" and ‘flows". We show that if only the ‘flow"
is considered, then the resulting method is very similar to a
widely used page selection heuristic [6].
5.1 Formulation
For a given page j in the global domain, we define the
expanded local graph Fj:
Fj = [ F      s
       uj^T   0 ],    (4)
where uj is the zero-one vector containing the outlinks from
F into page j, and s contains the inlinks from page j into
the local domain. Note that we do not allow self-links in
this framework. In practice, self-links are often removed, as
they only serve to inflate a given page's PageRank.
Observe that the inlinks into F from node j are not known
until after node j is crawled. Therefore, we estimate this
inlink vector as the expectation over inlink counts among
the set of already crawled pages,
s = F^T e / ||F^T e||_1.    (5)
In practice, for any given page, this estimate may not reflect
the true inlinks from that page. Furthermore, this
expectation is sampled from the set of links within the crawled
domain, whereas a better estimate would also use links from
the global domain. However, the latter distribution is not
known to a localized search engine, and we contend that the
above estimate will, on average, be a better estimate than
the uniform distribution, for example.
Let the PageRank of F be f. We express the PageRank
fj^+ of the expanded local graph Fj as
fj^+ = [ (1 − xj) fj ; xj ],    (6)
where xj is the PageRank of the candidate global node j,
and fj is the L1-normalized PageRank vector restricted to
the pages in F .
Since directly optimizing our problem goal requires
knowing the global PageRank p∗
, we instead propose to crawl
those nodes that will have the greatest influence on the
PageRanks of pages in the original local domain L:
influence(j) = Σ_{k∈L} |fj[k] − f[k]|    (7)
             = ||EL (fj − f)||_1.
Experimentally, the influence score is a very good predictor
of our problem objective (3). For each candidate global node
j, figure 1(a) shows the objective function value GlobalDiff(fj)
as a function of the influence of page j. The local domain
used here is a crawl of conservative political pages (we will
provide more details about this dataset in section 6); we
observed similar results in other domains. The correlation
is quite strong, implying that the influence criteria can
effectively identify pages that improve the global PageRank
estimate. As a baseline, figure 1(b) compares our
objective with an alternative criterion, outlink count. The outlink
count is defined as the number of outlinks from the local
domain to page j. The correlation here is much weaker.
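For small graphs (as in the validation of Figure 1), the influence score (7) can be computed exactly by recomputing PageRank on each expanded graph; a brief sketch follows, with compute_pagerank standing in for a routine like Algorithm 1 and the candidate assumed to be the last node of the expanded adjacency matrix. All names here are ours.

import numpy as np

def influence_exact(F_j_adj, f, local_indices, compute_pagerank):
    # Exact influence (7) of a candidate page j: L1 change, over the original
    # local pages, between the current PageRank f and the PageRank of the
    # expanded graph F_j (candidate assumed to be the last node).
    f_plus = compute_pagerank(F_j_adj)
    f_j = f_plus[:-1] / f_plus[:-1].sum()   # drop the candidate and renormalize
    idx = np.asarray(local_indices)
    return np.abs(f_j[idx] - f[idx]).sum()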
Figure 1: (a) The correlation between our influence page selection criteria (7) and the actual objective function (3) value is quite strong. (b) This is in contrast to other criteria, such as outlink count, which exhibit a much weaker correlation. (The panels plot the objective value against influence and against outlink count, respectively.)
5.2 Computation
As described, for each candidate global page j, the
influence score (7) must be computed. If fj is computed
exactly for each global page j, then the PageRank
algorithm would need to be run for each of the O(n) such global
pages j we consider, resulting in an O(n^2) computational
cost for the node selection method. Thus, computing the
exact value of fj will lead to a quadratic algorithm, and we
must instead turn to methods of approximating this vector.
The algorithm we present works by performing one power
method iteration used by the PageRank algorithm
(Algorithm 1). The convergence rate for the PageRank algorithm
has been shown to equal the random surfer probability α [7,
11]. Given a starting vector x(0)
, if k PageRank iterations
are performed, the current PageRank solution x(k)
satisfies:
x(k)
− x∗
1 = O(αk
x(0)
− x∗
1), (8)
where x∗
is the desired PageRank vector. Therefore, if only
one iteration is performed, choosing a good starting vector
is necessary to achieve an accurate approximation.
We partition the PageRank matrix PFj , corresponding to
the (ℓ + 1) × (ℓ + 1) subgraph Fj as:
PFj = [ F̃      s̃
        ũj^T   w ],    (9)
where F̃ = α F (DF + diag(uj))^{-1} + (1 − α) (e/(ℓ + 1)) e^T,
      s̃ = α s + (1 − α) e/(ℓ + 1),
      ũj = α (DF + diag(uj))^{-1} uj + (1 − α) e/(ℓ + 1),
      w = (1 − α)/(ℓ + 1),
and diag(uj) is the diagonal matrix with the (i, i)th
entry
equal to one if the ith
element of uj equals one, and is zero
otherwise. We have assumed here that the random surfer
vector is the uniform vector, and that L has no ‘dangling
links". These assumptions are not necessary and serve only
to simplify discussion and analysis.
A simple approach for estimating fj is the following. First,
estimate the PageRank fj^+ of Fj by computing one PageRank iteration over the matrix PFj, using the starting vector ν = [f ; 0]. Then, estimate fj by removing the last component from our estimate of fj^+ (i.e., the component
corresponding to the added node j), and renormalizing.
The problem with this approach is in the starting vector.
Recall from (6) that xj is the PageRank of the added node
j. The difference between the actual PageRank fj^+ of PFj and the starting vector ν is
||ν − fj^+||_1 = xj + ||f − (1 − xj) fj||_1
             ≥ xj + | ||f||_1 − (1 − xj) ||fj||_1 |
             = xj + |xj|
             = 2 xj.
Thus, by (8), after one PageRank iteration, we expect our estimate of fj^+ to still have an error of about 2αxj. In
particular, for candidate nodes j with relatively high PageRank
xj, this method will yield more inaccurate results. We will
next present a method that eliminates this bias and runs in
O(n) time.
5.2.1 Stochastic Complementation
Since fj^+, as given in (6), is the PageRank of the matrix PFj, we have:
[ fj(1 − xj) ; xj ] = [ F̃  s̃ ; ũj^T  w ] [ fj(1 − xj) ; xj ] = [ F̃ fj(1 − xj) + s̃ xj ; ũj^T fj(1 − xj) + w xj ].
Solving the above system for fj can be shown to yield
fj = (F̃ + (1 − w)^{-1} s̃ ũj^T) fj.    (10)
The matrix S = F̃ + (1 − w)^{-1} s̃ ũj^T is known as the stochastic complement of the column stochastic matrix PFj with respect to the submatrix F̃. The theory of stochastic complementation is well studied, and it can be shown that the stochastic
complement of an irreducible matrix (such as the PageRank
matrix) is unique. Furthermore, the stochastic complement
is also irreducible and therefore has a unique stationary
distribution as well. For an extensive study, see [15].
It can be easily shown that the sub-dominant eigenvalue of S is at most ℓ/(ℓ + 1) · α, where ℓ is the size of F. For sufficiently large ℓ, this value will be very close to α. This is important, as other properties of the PageRank algorithm, notably the algorithm's sensitivity, are dependent on this value [11].
In this method, we estimate the ℓ-length vector fj by computing one PageRank iteration over the ℓ × ℓ stochastic complement S, starting at the vector f:
fj ≈ S f.    (11)
This is in contrast to the simple method outlined in the
previous section, which first iterates over the (ℓ + 1) × (ℓ + 1) matrix PFj to estimate fj^+, and then removes the last component from the estimate and renormalizes to approximate fj. The problem with the latter method is in the choice of the (ℓ + 1)-length starting vector, ν. Consequently, the
PageRank estimate given by the simple method differs from
the true PageRank by at least 2αxj, where xj is the
PageRank of page j. By using the stochastic complement, we
can establish a tight lower bound of zero for this difference.
To see this, consider the case in which a node k is added
to F to form the augmented local subgraph Fk, and that
the PageRank of this new graph is [(1 − xk) f ; xk].
Specifically, the addition of page k does not change the PageRanks
of the pages in F , and thus fk = f. By construction of
the stochastic complement, fk = Sfk, so the approximation
given in equation (11) will yield the exact solution.
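A dense-matrix sketch of this one-iteration estimate, useful only for small graphs but convenient for checking the optimized computation developed next, is given here; it builds F̃, s̃, ũj, and w as reconstructed in (9) and applies (11), with variable names of our choosing and the no-dangling-page assumption stated in the text.

import numpy as np

def sc_estimate(F, u_j, s, f, alpha=0.85):
    # One-iteration estimate f_j ~ S f, with S the stochastic complement (10) of
    # the PageRank matrix (9) of the expanded graph.  F is a dense 0/1 matrix with
    # F[j, i] = 1 iff local page i links to local page j; u_j marks local pages
    # linking to the candidate; s is the estimated inlink vector (5); f is the
    # current local PageRank.  Assumes every local page has at least one outlink.
    l = F.shape[0]
    e = np.ones(l)
    d_inv = 1.0 / (F.sum(axis=0) + u_j)                 # diagonal of (D_F + diag(u_j))^{-1}
    F_tilde = alpha * F * d_inv + (1 - alpha) * np.outer(e, e) / (l + 1)
    s_tilde = alpha * s + (1 - alpha) * e / (l + 1)
    u_tilde = alpha * d_inv * u_j + (1 - alpha) * e / (l + 1)
    w = (1 - alpha) / (l + 1)
    S = F_tilde + np.outer(s_tilde, u_tilde) / (1 - w)  # stochastic complement (10)
    return S @ f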
Next, we present the computational details needed to
efficiently compute the quantity ||fj − f||_1 over all known global pages j. We begin by expanding the difference fj − f, where the vector fj is estimated as in (11),
fj − f ≈ S f − f
       = α F (DF + diag(uj))^{-1} f + (1 − α) (e/(ℓ + 1)) e^T f + (1 − w)^{-1} (ũj^T f) s̃ − f.    (12)
Note that the matrix (DF + diag(uj))^{-1} is diagonal. Letting o[k] be the outlink count for page k in F, we can express the k-th diagonal element as:
(DF + diag(uj))^{-1}[k, k] = 1/(o[k] + 1) if uj[k] = 1, and 1/o[k] if uj[k] = 0.
Noting that (o[k] + 1)^{-1} = o[k]^{-1} − (o[k](o[k] + 1))^{-1} and rewriting this in matrix form yields
(DF + diag(uj))^{-1} = DF^{-1} − DF^{-1}(DF + diag(uj))^{-1} diag(uj).    (13)
We use the same identity to express
e/(ℓ + 1) = e/ℓ − e/(ℓ(ℓ + 1)).    (14)
Recall that, by definition, we have PF = α F DF^{-1} + (1 − α) (e/ℓ) e^T.
Substituting (13) and (14) in (12) yields
fj − f ≈ (PF f − f)
       − α F DF^{-1}(DF + diag(uj))^{-1} diag(uj) f
       − (1 − α) e/(ℓ(ℓ + 1)) + (1 − w)^{-1}(ũj^T f) s̃
     = x + y + (ũj^T f) z,    (15)
noting that by definition, f = PF f, and defining the vectors x, y, and z to be
x = −α F DF^{-1}(DF + diag(uj))^{-1} diag(uj) f    (16)
y = −(1 − α) e/(ℓ(ℓ + 1))    (17)
z = (1 − w)^{-1} s̃.    (18)
The first term x is a sparse vector, and takes non-zero values
only for local pages k that are siblings of the global page
j. We define (i, j) ∈ F if and only if F [j, i] = 1
(equivalently, page i links to page j) and express the value of the
component x[k'] as:
x[k'] = −α Σ_{k:(k,k')∈F, uj[k]=1} f[k] / (o[k](o[k] + 1)),    (19)
where o[k], as before, is the number of outlinks from page k
in the local domain. Note that the last two terms, y and z
are not dependent on the current global node j. Given the
function hj(f) = ||y + (ũj^T f) z||_1, the quantity ||fj − f||_1 can be expressed as
||fj − f||_1 = Σ_k |x[k] + y[k] + (ũj^T f) z[k]|
            = Σ_{k:x[k]=0} |y[k] + (ũj^T f) z[k]| + Σ_{k:x[k]≠0} |x[k] + y[k] + (ũj^T f) z[k]|
            = hj(f) − Σ_{k:x[k]≠0} |y[k] + (ũj^T f) z[k]| + Σ_{k:x[k]≠0} |x[k] + y[k] + (ũj^T f) z[k]|.    (20)
If we can compute the function hj in linear time, then we can compute each value of ||fj − f||_1 using an additional amount of time that is proportional to the number of non-zero components in x. These optimizations are carried out
in Algorithm 3. Note that (20) computes the difference
between all components of f and fj, whereas our node
selection criterion, given in (7), is restricted to the components
corresponding to nodes in the original local domain L.
Let us examine Algorithm 3 in more detail. First, the
algorithm computes the outlink counts for each page in the
local domain. The algorithm then computes the quantity
˜uT
j f for each known global page j. This inner product can
be written as
(1 − α)
1
+ 1
+ α
k:(k,j)∈Fout
f[k]
o[k] + 1
,
where the second term sums over the set of local pages that
link to page j. Since the total number of edges in Fout was
assumed to have size O( ) (recall that is the number of
pages in F ), the running time of this step is also O( ).
The algorithm then computes the vectors y and z, as
given in (17) and (18), respectively. The L1NormDiff
method is called on the components of these vectors which
correspond to the pages in L, and it estimates the value of
EL(y + (˜uT
j f)z) 1 for each page j. The estimation works
as follows. First, the values of ˜uT
j f are discretized uniformly
into c values {a1, ..., ac}. The quantity EL(y + aiz) 1 is
then computed for each discretized value of ai and stored in
a table. To evaluate EL (y + az) 1 for some a ∈ [a1, ac],
the closest discretized value ai is determined, and the
corresponding entry in the table is used. The total running time
for this method is linear in and the discretization
parameter c (which we take to be a constant). We note that if exact
values are desired, we have also developed an algorithm that
runs in O( log ) time that is not described here.
In the main loop, we compute the vector x, as defined
in equation (16). The nested loops iterate over the set of
pages in F that are siblings of page j. Typically, the size
of this set is bounded by a constant. Finally, for each page
j, the scores vector is updated over the set of non-zero
components k of the vector x with k ∈ L. This set has
size equal to the number of local siblings of page j, and is
a subset of the total number of siblings of page j. Thus,
each iteration of the main loop takes constant time, and the
total running time of the main loop is O( ). Since we have
assumed that the size of F will not grow larger than O(n),
the total running time for the algorithm is O(n).
Algorithm 3: Node Selection via Stochastic
Complementation.
SC-Select(F, Fout, f, k)
Input: F: zero-one adjacency matrix of size ℓ corresponding to the current local subgraph, Fout: zero-one outlink matrix from F to the global subgraph, f: PageRank of F, k: number of pages to return
Output: pages: set of k pages to crawl next
{Compute outlink sums for local subgraph}
foreach (page j ∈ F)
    o[j] ← Σ_{k:(j,k)∈F} F[j, k]
end
{Compute scalar ũj^T f for each global node j}
foreach (page j ∈ Fout)
    g[j] ← (1 − α)/(ℓ + 1)
    foreach (page k : (k, j) ∈ Fout)
        g[j] ← g[j] + α f[k]/(o[k] + 1)
    end
end
{Compute vectors y and z as in (17) and (18)}
y ← −(1 − α) e/(ℓ(ℓ + 1))
z ← (1 − w)^{-1} s̃
{Approximate ||y + g[j]·z||_1 for all values g[j]}
norm_diffs ← L1NormDiffs(g, EL y, EL z)
foreach (page j ∈ Fout)
    {Compute sparse vector x as in (19)}
    x ← 0
    foreach (page k : (k, j) ∈ Fout)
        foreach (page k' : (k, k') ∈ F)
            x[k'] ← x[k'] − f[k]/(o[k](o[k] + 1))
        end
    end
    x ← αx
    scores[j] ← norm_diffs[j]
    foreach (k : x[k] ≠ 0 and page k ∈ L)
        scores[j] ← scores[j] − |y[k] + g[j]·z[k]| + |x[k] + y[k] + g[j]·z[k]|
    end
end
Return k pages with highest scores
5.2.2 PageRank Flows
We now present an intuitive analysis of the stochastic
complementation method by decomposing the change in
PageRank in terms of ‘leaks" and ‘flows". This analysis is
motivated by the decomposition given in (15). PageRank ‘flow" is
the increase in the local PageRanks originating from global
page j. The flows are represented by the non-negative vector
(˜uT
j f)z (equations (15) and (18)). The scalar ˜uT
j f can be
thought of as the total amount of PageRank flow that page
j has available to distribute. The vector z dictates how the
flow is allocated to the local domain; the flow that local
page k receives is proportional to (within a constant factor
due to the random surfer vector) the expected number of its
inlinks.
The PageRank ‘leaks" represent the decrease in PageRank
resulting from the addition of page j. The leakage can
be quantified in terms of the non-positive vectors x and
y (equations (16) and (17)). For vector x, we can see from
equation (19) that the amount of PageRank leaked by a
local page is proportional to the weighted sum of the PageRanks of its siblings. Thus, pages that have siblings with
higher PageRanks (and low outlink counts) will experience
more leakage. The leakage caused by y is an artifact of the
random surfer vector.
We will next show that if only the 'flow' term, (ũj^T f) z,
is considered, then the resulting method is very similar to
a heuristic proposed by Cho et al. [6] that has been widely
used for the Crawling Through URL Ordering problem.
This heuristic is computationally cheaper, but as we will see
later, not as effective as the Stochastic Complementation
method.
Our node selection strategy chooses global nodes that
have the largest influence (equation (7)). If this influence is
approximated using only ‘flows", the optimal node j∗
is:
j∗
= argmaxj EL ˜uT
j fz 1
= argmaxj ˜uT
j f EL z 1
= argmaxj ˜uT
j f
= argmaxj α(DF + diag(uj))−1
uj + (1 − α)
e
+ 1
, f
= argmaxjfT
(DF + diag(uj))−1
uj.
The resulting page selection score can be expressed as a sum
of the PageRanks of each local page k that links to j, where
each PageRank value is normalized by o[k]+1. Interestingly,
the normalization that arises in our method differs from the
heuristic given in [6], which normalizes by o[k]. The
algorithm PF-Select, which is omitted due to lack of space,
first computes the quantity f^T (DF + diag(uj))^{-1} uj for each
global page j, and then returns the pages with the k largest
scores. To see that the running time for this algorithm is
O(n), note that the computation involved in this method is
a subset of that needed for the SC-Select method
(Algorithm 3), which was shown to have a running time of O(n).
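A sketch of this flow-only scoring step (our rendering of PF-Select, which is omitted from the paper for space) is given below; the frontier is assumed to be supplied as a mapping from each candidate global page to the list of local pages linking to it, and all names are ours.

def pf_select(frontier_inlinks, f, out_deg, k):
    # Score each candidate global page j by the flow term
    # sum over local pages i linking to j of f[i] / (o[i] + 1),
    # and return the k highest-scoring candidates.
    scores = {
        j: sum(f[i] / (out_deg[i] + 1) for i in inlinks)
        for j, inlinks in frontier_inlinks.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]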
6. EXPERIMENTS
In this section, we provide experimental evidence to
verify the effectiveness of our algorithms. We first outline our
experimental methodology and then provide results across
a variety of local domains.
6.1 Methodology
Given the limited resources available at an academic
institution, crawling a section of the web that is of the same
magnitude as that indexed by Google or Yahoo! is clearly
infeasible. Thus, for a given local domain, we approximate
the global graph by crawling a local neighborhood around
the domain that is several orders of magnitude larger than
the local subgraph. Even though such a graph is still orders
of magnitude smaller than the ‘true" global graph, we
contend that, even if there exist some highly influential pages
that are very far away from our local domain, it is
unrealistic for any local node selection algorithm to find them. Such
pages also tend to be highly unrelated to pages within the
local domain.
When explaining our node selection strategies in section
5, we made the simplifying assumption that our local graph
contained no dangling nodes. This assumption was only
made to ease our analysis. Our implementation efficiently
handles dangling links by replacing each zero column of our
adjacency matrix with the uniform vector. We evaluate the
algorithm using the two node selection strategies given in
Section 5.2, and also against the following baseline methods:
• Random: Nodes are chosen uniformly at random among
the known global nodes.
• OutlinkCount: Global nodes with the highest
number of outlinks from the local domain are chosen.
At each iteration of the FindGlobalPR algorithm, we
evaluate performance by computing the difference between the
current PageRank estimate of the local domain, E_L f / ||E_L f||_1, and the global PageRank of the local domain, E_L g / ||E_L g||_1. All
PageRank calculations were performed using the uniform
random surfer vector. Across all experiments, we set the
random surfer parameter α to be .85, and used a convergence threshold of 10^(−6). We evaluate the difference between the
local and global PageRank vectors using three different
metrics: the L1 and L∞ norms, and Kendall"s tau. The L1 norm
measures the sum of the absolute value of the differences
between the two vectors, and the L∞ norm measures the
absolute value of the largest difference. Kendall"s tau metric is
a popular rank correlation measure used to compare
PageRanks [2, 11]. This metric can be computed by counting
the number of pairs of pairs that agree in ranking, and
subtracting from that the number of pairs of pairs that disagree
in ranking. The final value is then normalized by the total
number of n
2
such pairs, resulting in a [−1, 1] range, where
a negative score signifies anti-correlation among rankings,
and values near one correspond to strong rank correlation.
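For illustration, a naive O(n^2) sketch of this pair-counting computation (our own, ignoring tie handling) could look as follows; rank_a and rank_b are assumed to map each page to its score under the two rankings being compared.

```python
from itertools import combinations

def kendalls_tau(rank_a, rank_b):
    """Pair-counting Kendall's tau: (agreeing pairs - disagreeing pairs),
    normalized by n(n-1)/2."""
    pages = list(rank_a)
    agree = disagree = 0
    for p, q in combinations(pages, 2):
        d = (rank_a[p] - rank_a[q]) * (rank_b[p] - rank_b[q])
        if d > 0:
            agree += 1
        elif d < 0:
            disagree += 1
    n = len(pages)
    return (agree - disagree) / (n * (n - 1) / 2.0)
```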
6.2 Results
Our experiments are based on two large web crawls, which were downloaded using the web crawler that is part of the
Nutch open source search engine project [18]. All crawls
were restricted to only 'http' pages, and to limit the number of dynamically generated pages that we crawl, we ignored all pages with URLs containing any of the characters '?', '*', '@', or '='. The first crawl, which we will refer to
as the ‘edu" dataset, was seeded by homepages of the top
100 graduate computer science departments in the USA, as
rated by the US News and World Report [16], and also by
the home pages of their respective institutions. A crawl of
depth 5 was performed, restricted to pages within the ‘.edu"
domain, resulting in a graph with approximately 4.7 million
pages and 22.9 million links. The second crawl was seeded
by the set of pages under the ‘politics" hierarchy in the dmoz
open directory project[17]. We crawled all pages up to four
links away, which yielded a graph with 4.4 million pages and
17.3 million links.
Within the ‘edu" crawl, we identified the five site-specific
domains corresponding to the websites of the top five
graduate computer science departments, as ranked by the US
News and World Report. This yielded local domains of
various sizes, from 10,626 (UIUC) to 59,895 (Berkeley). For each
of these site-specific domains with size n, we performed 50
iterations of the FindGlobalPR algorithm to crawl a total
of 2n additional nodes. Figure 2(a) gives the (L1) difference
from the PageRank estimate at each iteration to the global
PageRank, for the Berkeley local domain.
The performance of this dataset was representative of the
typical performance across the five computer science
site-specific local domains. Initially, the L1 difference between
the global and local PageRanks ranged from .0469
(Stanford) to .149 (MIT). For the first several iterations, the
[Three line plots of the L1 difference versus number of iterations, each comparing Stochastic Complement, PageRank Flow, Outlink Count, and Random: (a) www.cs.berkeley.edu, (b) www.enterstageright.com, (c) Politics.]
Figure 2: L1 difference between the estimated and true global PageRanks for (a) Berkeley's computer science website, (b) the site-specific domain, www.enterstageright.com, and (c) the 'politics' topic-specific domain. The stochastic complement method outperforms all other methods across various domains.
three link-based methods all outperform the random
selection heuristic. After these initial iterations, the random
heuristic tended to be more competitive with (or even
outperform, as in the Berkeley local domain) the outlink count
and PageRank flow heuristics. In all tests, the stochastic
complementation method either outperformed, or was
competitive with, the other methods. Table 1 gives the average
difference between the final estimated global PageRanks and
the true global PageRanks for various distance measures.
Algorithm L1 L∞ Kendall
Stoch. Comp. .0384 .00154 .9257
PR Flow .0470 .00272 .8946
Outlink .0419 .00196 .9053
Random .0407 .00204 .9086
Table 1: Average final performance of various node
selection strategies for the five site-specific
computer science local domains. Note that Kendall"s
Tau measures similarity, while the other metrics are
dissimilarity measures. Stochastic
Complementation clearly outperforms the other methods in all
metrics.
Within the ‘politics" dataset, we also performed two
site-specific tests for the largest websites in the crawl:
www.adamsmith.org, the website for the London based Adam Smith
Institute, and www.enterstageright.com, an online
conservative journal. As with the ‘edu" local domains, we ran our
algorithm for 50 iterations, crawling a total of 2n nodes.
Figure 2 (b) plots the results for the www.enterstageright.com
domain. In contrast to the ‘edu" local domains, the Random
and OutlinkCount methods were not competitive with
either the SC-Select or the PF-Select methods. Among all
datasets and all node selection methods, the stochastic
complementation method was most impressive in this dataset,
realizing a final estimate that differed only .0279 from the
global PageRank, a ten-fold improvement over the initial
local PageRank difference of .299. For the Adam Smith local
domain, the initial difference between the local and global
PageRanks was .148, and the final estimates given by the
SC-Select, PF-Select, OutlinkCount, and Random
methods were .0208, .0193, .0222, and .0356, respectively.
Within the ‘politics" dataset, we constructed four
topic-specific local domains. The first domain consisted of all
pages in the dmoz politics category, and also all pages within
each of these sites up to two links away. This yielded a local
domain of 90,811 pages, and the results are given in figure 2
(c). Because of the larger size of the topic-specific domains,
we ran our algorithm for only 25 iterations to crawl a total
of n nodes.
We also created topic-specific domains from three
political sub-topics: liberalism, conservatism, and socialism. The
pages in these domains were identified by their
corresponding dmoz categories. For each sub-topic, we set the local
domain to be all pages within three links from the
corresponding dmoz category pages. Table 2 summarizes the
performance of these three topic-specific domains, and also
the larger political domain.
To quantify a global page j's effect on the global PageRank values of pages in the local domain, we define page j's impact to be its PageRank value, g[j], normalized by the fraction of its outlinks pointing to the local domain:
impact(j) = (oL[j] / o[j]) · g[j],
where oL[j] is the number of outlinks from page j to pages in the local domain L, and o[j] is the total number of j's
outlinks. In terms of the random surfer model, the impact
of page j is the probability that the random surfer (1) is
currently at global page j in her random walk and (2) takes
an outlink to a local page, given that she has already decided
not to jump to a random page.
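For illustration, the impact score is a one-line computation; this sketch and its argument names are ours.

```python
def impact(global_pagerank_j, outlinks_to_local, total_outlinks):
    """impact(j) = (oL[j] / o[j]) * g[j] as defined above."""
    return (outlinks_to_local / float(total_outlinks)) * global_pagerank_j
```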
For the politics local domain, we found that many of the
pages with high impact were in fact political pages that
should have been included in the dmoz politics topic, but
were not. For example, the two most influential global pages
were the political search engine www.askhenry.com, and the
home page of the online political magazine,
www.policyreview.com. Among non-political pages, the home page of
the journal Education Next was most influential. The
journal is freely available online and contains articles
regarding various aspect of K-12 education in America. To provide
some anecdotal evidence for the effectiveness of our page
selection methods, we note that the SC-Select method chose
11 pages within the www.educationnext.org domain, the
PF-Select method discovered 7 such pages, while the
OutlinkCount and Random methods found only 6 pages each.
For the conservative political local domain, the socialist
website www.ornery.org had a very high impact score. This
All Politics:
Algorithm      L1      L∞        Kendall
Stoch. Comp.   .1253   .000700   .8671
PR Flow        .1446   .000710   .8518
Outlink        .1470   .00225    .8642
Random         .2055   .00203    .8271
Conservatism:
Algorithm      L1      L∞        Kendall
Stoch. Comp.   .0496   .000990   .9158
PR Flow        .0554   .000939   .9028
Outlink        .0602   .00527    .9144
Random         .1197   .00102    .8843
Liberalism:
Algorithm      L1      L∞        Kendall
Stoch. Comp.   .0622   .001360   .8848
PR Flow        .0799   .001378   .8669
Outlink        .0763   .001379   .8844
Random         .1127   .001899   .8372
Socialism:
Algorithm      L1      L∞        Kendall
Stoch. Comp.   .04318  .00439    .9604
PR Flow        .0450   .004251   .9559
Outlink        .04282  .00344    .9591
Random         .0631   .005123   .9350
Table 2: Final performance among node selection strategies for the four political topic-specific crawls. Note that Kendall's Tau measures similarity, while the other metrics are dissimilarity measures.
was largely due to a link from the front page of this site
to an article regarding global warming published by the
National Center for Public Policy Research, a conservative
research group in Washington, DC. Not surprisingly, the
global PageRank of this article (which happens to be on the
home page of the NCCPR, www.nationalresearch.com),
was approximately .002, whereas the local PageRank of this
page was only .00158. The SC-Select method yielded a
global PageRank estimate of approximately .00182, the
PF-Select method estimated a value of .00167, and the
Random and OutlinkCount methods yielded values of .01522
and .00171, respectively.
7. RELATED WORK
The node selection framework we have proposed is similar
to the url ordering for crawling problem proposed by Cho
et al. in [6]. Whereas our framework seeks to minimize the
difference between the global and local PageRank, the
objective used in [6] is to crawl the most highly (globally) ranked
pages first. They propose several node selection algorithms,
including the outlink count heuristic, as well as a variant of
our PF-Select algorithm which they refer to as the
‘PageRank ordering metric". They found this method to be most
effective in optimizing their objective, as did a recent survey
of these methods by Baeza-Yates et al. [1]. Boldi et al. also
experiment within a similar crawling framework in [2], but
quantify their results by comparing Kendall"s rank
correlation between the PageRanks of the current set of crawled
pages and those of the entire global graph. They found that
node selection strategies that crawled pages with the
highest global PageRank first actually performed worse (with
respect to Kendall"s Tau correlation between the local and
global PageRanks) than basic depth first or breadth first
strategies. However, their experiments differ from our work
in that our node selection algorithms do not use (or have
access to) global PageRank values.
Many algorithmic improvements for computing exact
PageRank values have been proposed [9, 10, 14]. If such
algorithms are used to compute the global PageRanks of our
local domain, they would all require O(N) computation,
storage, and bandwidth, where N is the size of the global
domain. This is in contrast to our method, which
approximates the global PageRank and scales linearly with the size
of the local domain.
Wang and Dewitt [22] propose a system where the set of
web servers that comprise the global domain communicate
with each other to compute their respective global
PageRanks. For a given web server hosting n pages, the
computational, bandwidth, and storage requirements are also
linear in n. One drawback of this system is that the
number of distinct web servers that comprise the global domain
can be very large. For example, our ‘edu" dataset contains
websites from over 3,200 different universities; coordinating
such a system among a large number of sites can be very
difficult.
Gan, Chen, and Suel propose a method for estimating the
PageRank of a single page [5] which uses only constant
bandwidth, computation, and space. Their approach relies on the
availability of a remote connectivity server that can supply
the set of inlinks to a given page, an assumption not used in
our framework. They experimentally show that a reasonable
estimate of the node"s PageRank can be obtained by visiting
at most a few hundred nodes. Using their algorithm for our
problem would require that either the entire global domain
first be downloaded or a connectivity server be used, both
of which would lead to very large web graphs.
8. CONCLUSIONS AND FUTURE WORK
The internet is growing exponentially, and in order to
navigate such a large repository as the web, global search
engines have established themselves as a necessity. Along with
the ubiquity of these large-scale search engines comes an
increase in search users" expectations. By providing complete
and isolated coverage of a particular web domain, localized
search engines are an effective outlet to quickly locate
content that could otherwise be difficult to find. In this work,
we contend that the use of global PageRank in a localized
search engine can improve performance.
To estimate the global PageRank, we have proposed an
iterative node selection framework where we select which
pages from the global frontier to crawl next. Our primary
contribution is our stochastic complementation page
selection algorithm. This method crawls nodes that will most
significantly impact the local domain and has running time
linear in the number of nodes in the local domain.
Experimentally, we validate these methods across a diverse set of
local domains, including seven site-specific domains and four
topic-specific domains. We conclude that by crawling an
additional n or 2n pages, our methods find an estimate of the
global PageRanks that is up to ten times better than just
using the local PageRanks. Furthermore, we demonstrate
that our algorithm consistently outperforms other existing
heuristics.
Often times, topic-specific domains are discovered using
a focused web crawler which considers a page"s content in
conjunction with link anchor text to decide which pages to
crawl next [4]. Although such crawlers have proven to be
quite effective in discovering topic-related content, many
irrelevant pages are also crawled in the process. Typically,
these pages are deleted and not indexed by the localized
search engine. These pages can of course provide valuable
information regarding the global PageRank of the local
domain. One way to integrate these pages into our framework
is to start the FindGlobalPR algorithm with the current
subgraph F equal to the set of pages that were crawled by
the focused crawler.
The global PageRank estimation framework, along with
the node selection algorithms presented, all require O(n)
computation per iteration and bandwidth proportional to
the number of pages crawled, Tk. If the number of
iterations T is relatively small compared to the number of pages
crawled per iteration, k, then the bottleneck of the algorithm
will be the crawling phase. However, as the number of
iterations increases (relative to k), the bottleneck will reside in
the node selection computation. In this case, our algorithms
would benefit from constant factor optimizations. Recall
that the FindGlobalPR algorithm (Algorithm 2) requires
that the PageRanks of the current expanded local domain be
recomputed in each iteration. Recent work by Langville and
Meyer [12] gives an algorithm to quickly recompute
PageRanks of a given webgraph if a small number of nodes are
added. This algorithm was shown to give speedup of five to
ten times on some datasets. We plan to investigate this and
other such optimizations as future work.
In this paper, we have objectively evaluated our methods
by measuring how close our global PageRank estimates are
to the actual global PageRanks. To determine the
benefit of using global PageRanks in a localized search engine,
we suggest a user study in which users are asked to rate
the quality of search results for various search queries. For
some queries, only the local PageRanks are used in
ranking, and for the remaining queries, local PageRanks and the
approximate global PageRanks, as computed by our
algorithms, are used. The results of such a study can then be
analyzed to determine the added benefit of using the global
PageRanks computed by our methods, over just using the
local PageRanks.
Acknowledgements. This research was supported by NSF
grant CCF-0431257, NSF Career Award ACI-0093404, and
a grant from Sabre, Inc.
9. REFERENCES
[1] R. Baeza-Yates, M. Marin, C. Castillo, and
A. Rodriguez. Crawling a country: better strategies
than breadth-first for web page ordering. World-Wide
Web Conference, 2005.
[2] P. Boldi, M. Santini, and S. Vigna. Do your worst to
make the best: paradoxical effects in pagerank
incremental computations. Workshop on Web Graphs,
3243:168-180, 2004.
[3] S. Brin and L. Page. The anatomy of a large-scale
hypertextual web search engine. Computer Networks
and ISDN Systems, 33(1-7):107-117, 1998.
[4] S. Chakrabarti, M. van den Berg, and B. Dom.
Focused crawling: a new approach to topic-specific
web resource discovery. World-Wide Web Conference,
1999.
[5] Y. Chen, Q. Gan, and T. Suel. Local methods for
estimating pagerank values. Conference on
Information and Knowledge Management, 2004.
[6] J. Cho, H. Garcia-Molina, and L. Page. Efficient
crawling through url ordering. World-Wide Web
Conference, 1998.
[7] T. H. Haveliwala and S. D. Kamvar. The second
eigenvalue of the Google matrix. Technical report,
Stanford University, 2003.
[8] T. Joachims, F. Radlinski, L. Granka, A. Cheng,
C. Tillekeratne, and A. Patel. Learning retrieval
functions from implicit feedback.
http://www.cs.cornell.edu/People/tj/career.
[9] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and
G. H. Golub. Exploiting the block structure of the
web for computing pagerank. World-Wide Web
Conference, 2003.
[10] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and
G. H. Golub. Extrapolation methods for accelerating
pagerank computation. World-Wide Web Conference,
2003.
[11] A. N. Langville and C. D. Meyer. Deeper inside
pagerank. Internet Mathematics, 2004.
[12] A. N. Langville and C. D. Meyer. Updating the
stationary vector of an irreducible markov chain with
an eye on Google"s pagerank. SIAM Journal on
Matrix Analysis, 2005.
[13] P. Lyman, H. R. Varian, K. Swearingen, P. Charles,
N. Good, L. L. Jordan, and J. Pal. How much
information 2003? School of Information Management
and Systems, University of California at Berkeley, 2003.
[14] F. McSherry. A uniform approach to accelerated
pagerank computation. World-Wide Web Conference,
2005.
[15] C. D. Meyer. Stochastic complementation, uncoupling
markov chains, and the theory of nearly reducible
systems. SIAM Review, 31:240-272, 1989.
[16] US News and World Report. http://www.usnews.com.
[17] Dmoz open directory project. http://www.dmoz.org.
[18] Nutch open source search engine.
http://www.nutch.org.
[19] F. Radlinski and T. Joachims. Query chains: learning
to rank from implicit feedback. ACM SIGKDD
International Conference on Knowledge Discovery and
Data Mining, 2005.
[20] S. Raghavan and H. Garcia-Molina. Crawling the
hidden web. In Proceedings of the Twenty-seventh
International Conference on Very Large Databases,
2001.
[21] T. Tin Tang, D. Hawking, N. Craswell, and
K. Griffiths. Focused crawling for both topical
relevance and quality of medical information.
Conference on Information and Knowledge
Management, 2005.
[22] Y. Wang and D. J. DeWitt. Computing pagerank in a
distributed internet search system. Proceedings of the
30th VLDB Conference, 2004.
| algorithm;localized search engine;web community;experimentation;subgraph;link-based ranking;topic-specific domain;large-scale search engine;local domain;global pagerank;crawling problem;global graph |
train_H-84 | Event Threading within News Topics | With the overwhelming volume of online news available today, there is an increasing need for automatic techniques to analyze and present news to the user in a meaningful and efficient manner. Previous research focused only on organizing news stories by their topics into a flat hierarchy. We believe viewing a news topic as a flat collection of stories is too restrictive and inefficient for a user to understand the topic quickly. In this work, we attempt to capture the rich structure of events and their dependencies in a news topic through our event models. We call the process of recognizing events and their dependencies event threading. We believe our perspective of modeling the structure of a topic is more effective in capturing its semantics than a flat list of on-topic stories. We formally define the novel problem, suggest evaluation metrics and present a few techniques for solving the problem. Besides the standard word-based features, our approaches take into account novel features such as temporal locality of stories for event recognition and time-ordering for capturing dependencies. Our experiments on manually labeled data sets show that our models effectively identify the events and capture dependencies among them. | 1. INTRODUCTION
News forms a major portion of information disseminated in the
world everyday. Common people and news analysts alike are very
interested in keeping abreast of new things that happen in the news,
but it is becoming very difficult to cope with the huge volumes
of information that arrives each day. Hence there is an increasing
need for automatic techniques to organize news stories in a way that
helps users interpret and analyze them quickly. This problem is
addressed by a research program called Topic Detection and Tracking
(TDT) [3] that runs an open annual competition on standardized
tasks of news organization.
One of the shortcomings of current TDT evaluation is its view of
news topics as flat collection of stories. For example, the detection
task of TDT is to arrange a collection of news stories into clusters
of topics. However, a topic in news is more than a mere collection
of stories: it is characterized by a definite structure of inter-related
events. This is indeed recognized by TDT which defines a topic as
‘a set of news stories that are strongly related by some seminal
realworld event" where an event is defined as ‘something that happens
at a specific time and location" [3]. For example, when a bomb
explodes in a building, that is the seminal event that triggers the
topic. Other events in the topic may include the rescue attempts,
the search for perpetrators, arrests and trials and so on. We see
that there is a pattern of dependencies between pairs of events in
the topic. In the above example, the event of rescue attempts is
‘influenced" by the event of bombing and so is the event of search
for perpetrators.
In this work we investigate methods for modeling the structure
of a topic in terms of its events. By structure, we mean not only
identifying the events that make up a topic, but also establishing
dependencies, generally causal, among them. We call the
process of recognizing events and identifying dependencies among
them event threading, an analogy to email threading that shows
connections between related email messages. We refer to the
resulting interconnected structure of events as the event model of the
topic. Although this paper focuses on threading events within an
existing news topic, we expect that such event based dependency
structure more accurately reflects the structure of news than strictly
bounded topics do. From a user"s perspective, we believe that our
view of a news topic as a set of interconnected events helps him/her
get a quick overview of the topic and also allows him/her to navigate
through the topic faster.
The rest of the paper is organized as follows. In section 2, we
discuss related work. In section 3, we define the problem and use
an example to illustrate threading of events within a news topic. In
section 4, we describe how we built the corpus for our problem.
Section 5 presents our evaluation techniques while section 6
describes the techniques we use for modeling event structure. In
section 7 we present our experiments and results. Section 8 concludes
the paper with a few observations on our results and comments on
future work.
2. RELATED WORK
The process of threading events together is related to threading
of electronic mail only by name for the most part. Email usually
incorporates a strong structure of referenced messages and
consistently formatted subject headings-though information retrieval
techniques are useful when the structure breaks down [7]. Email
threading captures reference dependencies between messages and
does not attempt to reflect any underlying real-world structure of
the matter under discussion.
Another area of research that looks at the structure within a topic
is hierarchical text classification of topics [9, 6]. The hierarchy
within a topic does impose a structure on the topic, but we do not
know of an effort to explore the extent to which that structure
reflects the underlying event relationships.
Barzilay and Lee [5] proposed a content structure modeling
technique where topics within text are learnt using unsupervised
methods, and a linear order of these topics is modeled using hidden
Markov models. Our work differs from theirs in that we do not
constrain the dependency to be linear. Also their algorithms are tuned
to work on specific genres of topics such as earthquakes, accidents,
etc., while we expect our algorithms to generalize over any topic.
In TDT, researchers have traditionally considered topics as
flat clusters [1]. However, in TDT-2003, a hierarchical structure of
topic detection has been proposed and [2] made useful attempts
to adopt the new structure. However this structure still did not
explicitly model any dependencies between events.
In a work closest to ours, Makkonen [8] suggested modeling
news topics in terms of its evolving events. However, the paper
stopped short of proposing any models to the problem. Other
related work that dealt with analysis within a news topic includes
temporal summarization of news topics [4].
3. PROBLEM DEFINITION AND NOTATION
In this work, we have adhered to the definition of event and topic
as defined in TDT. We present some definitions (in italics) and our
interpretations (regular-faced) below for clarity.
1. Story: A story is a news article delivering some information
to users. In TDT, a story is assumed to refer to only a single
topic. In this work, we also assume that each story discusses
a single event. In other words, a story is the smallest atomic
unit in the hierarchy (topic → event → story). Clearly, both
the assumptions are not necessarily true in reality, but we
accept them for simplicity in modeling.
2. Event: An event is something that happens at some specific
time and place [10]. In our work, we represent an event by
a set of stories that discuss it. Following the assumption of
atomicity of a story, this means that any set of distinct events
can be represented by a set of non-overlapping clusters of
news stories.
3. Topic: A set of news stories strongly connected by a seminal
event. We expand on this definition and interpret a topic as
a series of related events. Thus a topic can be represented
by clusters of stories each representing an event and a set of
(directed or undirected) edges between pairs of these clusters
representing the dependencies between these events. We will
describe this representation of a topic in more detail in the
next section.
4. Topic detection and tracking (TDT) :Topic detection
detects clusters of stories that discuss the same topic; Topic
tracking detects stories that discuss a previously known topic [3].
Thus TDT concerns itself mainly with clustering stories into
topics that discuss them.
5. Event threading: Event threading detects events within a
topic, and also captures the dependencies among the events.
Thus the main difference between event threading and TDT
is that we focus our modeling effort on microscopic events
rather than larger topics. Additionally event threading
models the relatedness or dependencies between pairs of events
in a topic while TDT models topics as unrelated clusters of
stories.
We first define our problem and representation of our model
formally and then illustrate with the help of an example. We are
given a set of n news stories S = {s1, · · · , sn} on a given topic T and their time of publication. We define a set of events E = {E1, · · · , Em} with the following constraints:
Ei ∈ 2^S    (1)
Ei ∩ Ej = ∅ for all i ≠ j    (2)
∀ s ∈ S, ∃ Ei s.t. s ∈ Ei    (3)
While the first constraint says that each event is an element in the power set of S, the second constraint ensures that each story can belong to at most one event. The last constraint tells us that every story belongs to one of the events in E. In fact this allows us to define a mapping function f from stories to events as follows:
f(s) = Ei iff s ∈ Ei    (4)
Further, we also define a set of directed edges D = {(Ei, Ej)}
which denote dependencies between events. It is important to
explain what we mean by this directional dependency: While the
existence of an edge itself represents relatedness of two events, the
direction could imply causality or temporal-ordering. By causal
dependency we mean that the occurrence of event B is related to
and is a consequence of the occurrence of event A. By temporal
ordering, we mean that event B happened after event A and is related
to A but is not necessarily a consequence of A. For example,
consider the following two events: ‘plane crash" (event A) and
‘subsequent investigations" (event B) in a topic on a plane crash incident.
Clearly, the investigations are a result of the crash. Hence an
arrow from A to B falls under the category of causal dependency.
Now consider the pair of events ‘Pope arrives in Cuba"(event A)
and ‘Pope meets Castro"(event B) in a topic that discusses Pope"s
visit to Cuba. Now events A and B are closely related through their
association with the Pope and Cuba but event B is not necessarily
a consequence of the occurrence of event A. An arrow in such
scenario captures what we call time ordering. In this work, we do not
make an attempt to distinguish between these two kinds of
dependencies and our models treats them as identical. A simpler (and
hence less controversial) choice would be to ignore direction in the
dependencies altogether and consider only undirected edges. This
choice definitely makes sense as a first step but we chose the former
since we believe directional edges make more sense to the user as
they provide a more illustrative flow-chart perspective to the topic.
To make the idea of event threading more concrete, consider the
example of TDT3 topic 30005, titled ‘Osama bin Laden"s
Indictment" (in the 1998 news). This topic has 23 stories which form 5
events. An event model of this topic can be represented as in figure
1. Each box in the figure indicates an event in the topic of Osama"s
indictment. The occurrence of event 2, namely ‘Trial and
Indictment of Osama" is dependent on the event of ‘evidence gathered
by CIA", i.e., event 1. Similarly, event 2 influences the occurrences
of events 3, 4 and 5, namely ‘Threats from Militants", ‘Reactions
from Muslim World" and ‘announcement of reward". Thus all the
dependencies in the example are causal.
Extending our notation further, we call an event A a parent of B
and B the child of A, if (A, B) ∈ D. We define an event model M = (E, D) to be a tuple of the set of events and the set of dependencies.
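As an illustrative sketch (not part of the paper's formalism), an event model M = (E, D) can be represented by a simple data structure; the class and field names below are our own choices.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

@dataclass
class EventModel:
    """An event model M = (E, D): events partition the topic's stories,
    and dependencies are directed edges between event ids."""
    events: Dict[int, List[str]] = field(default_factory=dict)       # event id -> story ids
    dependencies: Set[Tuple[int, int]] = field(default_factory=set)  # (parent, child) event ids

    def event_of(self, story_id: str) -> int:
        """The mapping f from stories to events (equation 4)."""
        for eid, stories in self.events.items():
            if story_id in stories:
                return eid
        raise KeyError(story_id)
```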
Figure 1: An event model of TDT topic 'Osama bin Laden's indictment'. The five events are (1) Evidence gathered by CIA, (2) Trial and Indictment of Osama, (3) Threats from Islamic militants, (4) Reactions from Muslim world, and (5) CIA announces reward; dependency arrows run from (1) to (2), and from (2) to each of (3), (4), and (5).
Event threading is strongly related to topic detection and
tracking, but also different from it significantly. It goes beyond topics,
and models the relationships between events. Thus, event
threading can be considered as a further extension of topic detection and
tracking and is more challenging due to at least the following
difficulties.
1. The number of events is unknown.
2. The granularity of events is hard to define.
3. The dependencies among events are hard to model.
4. Since it is a brand new research area, no standard evaluation
metrics and benchmark data is available.
In the next few sections, we will describe our attempts to tackle
these problems.
4. LABELED DATA
We picked 28 topics from the TDT2 corpus and 25 topics from
the TDT3 corpus. The criterion we used for selecting a topic is that
it should contain at least 15 on-topic stories from CNN headline
news. If the topic contained more than 30 CNN stories, we picked
only the first 30 stories to keep the topic short enough for
annotators. The reason for choosing only CNN as the source is that the
stories from this source tend to be short and precise and do not tend
to digress or drift too far away from the central theme. We believe
modeling such stories would be a useful first step before dealing
with more complex data sets.
We hired an annotator to create truth data. Annotation includes
defining the event membership for each story and also the
dependencies. We supervised the annotator on a set of three topics that
we did our own annotations on and then asked her to annotate the
28 topics from TDT2 and 25 topics from TDT3.
In identifying events in a topic, the annotator was asked to broadly
follow the TDT definition of an event, i.e., ‘something that happens
at a specific time and location". The annotator was encouraged to
merge two events A and B into a single event C if any of the
stories discusses both A and B. This is to satisfy our assumption that
each story corresponds to a unique event. The annotator was also
encouraged to avoid singleton events, events that contain a single
news story, if possible. We realized from our own experience that
people differ in their perception of an event especially when the
number of stories in that event is small. As part of the guidelines,
we instructed the annotator to assign titles to all the events in each
topic. We believe that this would help make her understanding of
the events more concrete. We however, do not use or model these
titles in our algorithms.
In defining dependencies between events, we imposed no
restrictions on the graph structure. Each event could have single,
multiple or no parents. Further, the graph could have cycles or
orphan nodes. The annotator was however instructed to assign a
dependency from event A to event B if and only if the occurrence of B
is ‘either causally influenced by A or is closely related to A and
follows A in time".
From the annotated topics, we created a training set of 26 topics
and a test set of 27 topics by merging the 28 topics from TDT2 and
25 from TDT3 and splitting them randomly. Table 1 shows that the
training and test sets have fairly similar statistics.
Feature Training set Test set
Num. topics 26 27
Avg. Num. Stories/Topic 28.69 26.74
Avg. Doc. Len. 64.60 64.04
Avg. Num. Stories/Event 5.65 6.22
Avg. Num. Events/Topic 5.07 4.29
Avg. Num. Dependencies/Topic 3.07 2.92
Avg. Num. Dependencies/Event 0.61 0.68
Avg. Num. Days/Topic 30.65 34.48
Table 1: Statistics of annotated data
5. EVALUATION
A system can generate some event model M′ = (E′, D′) using certain algorithms, which is usually different from the truth model M = (E, D) (we assume the annotator did not make any mistake). Comparing a system event model M′ with the true model M requires comparing the entire event models including their dependency structure, and different event granularities may bring a huge discrepancy between M′ and M. This is certainly non-trivial
as even testing whether two graphs are isomorphic has no known
polynomial time solution. Hence instead of comparing the actual
structure we examine a pair of stories at a time and verify if the
system and true labels agree on their event-memberships and
dependencies. Specifically, we compare two kinds of story pairs:
• Cluster pairs (C(M)): These are the complete set of unordered pairs (si, sj) of stories si and sj that fall within the same event given a model M. Formally,
C(M) = {(si, sj) | si, sj ∈ S, f(si) = f(sj)}    (5)
where f is the function in M that maps stories to events as defined in equation 4.
• Dependency pairs (D_pairs(M)): These are the set of all ordered pairs of stories (si, sj) such that there is a dependency from the event of si to the event of sj in the model M.
D_pairs(M) = {(si, sj) | (f(si), f(sj)) ∈ D}    (6)
Note that the story pair is ordered here, so (si, sj) is not equivalent to (sj, si). In our evaluation, a correct pair with the wrong direction will be considered a mistake. As we mentioned earlier in section 3, ignoring the direction may make the problem simpler, but we would lose the expressiveness of our representation.
Figure 2: Evaluation measures. In the illustrated example, the true event model has events {A, B}, {C}, {D, E} and the system event model has events {A, C}, {B}, {D, E}; comparing their cluster pairs and dependency pairs yields cluster precision 1/2, cluster recall 1/2, dependency precision 2/4, and dependency recall 2/6.
Given these two sets of story pairs corresponding to the true event model M and the system event model M′, we define recall and precision for each category as follows.
• Cluster Precision (CP): It is the probability that two randomly selected stories si and sj are in the same true event given that they are in the same system event.
CP = P(f(si) = f(sj) | f′(si) = f′(sj)) = |C(M) ∩ C(M′)| / |C(M′)|    (7)
where f′ is the story-event mapping function corresponding to the model M′.
• Cluster Recall (CR): It is the probability that two randomly selected stories si and sj are in the same system event given that they are in the same true event.
CR = P(f′(si) = f′(sj) | f(si) = f(sj)) = |C(M) ∩ C(M′)| / |C(M)|    (8)
• Dependency Precision (DP): It is the probability that there is a dependency between the events of two randomly selected stories si and sj in the true model M given that they have a dependency in the system model M′. Note that the direction of dependency is important in comparison.
DP = P((f(si), f(sj)) ∈ D | (f′(si), f′(sj)) ∈ D′) = |D_pairs(M) ∩ D_pairs(M′)| / |D_pairs(M′)|    (9)
• Dependency Recall (DR): It is the probability that there is a dependency between the events of two randomly selected stories si and sj in the system model M′ given that they have a dependency in the true model M. Again, the direction of dependency is taken into consideration.
DR = P((f′(si), f′(sj)) ∈ D′ | (f(si), f(sj)) ∈ D) = |D_pairs(M) ∩ D_pairs(M′)| / |D_pairs(M)|    (10)
The measures are illustrated by an example in figure 2. We also
combine these measures using the well known F1-measure
commonly used in text classification and other research areas as shown
below.
CF = (2 × CP × CR) / (CP + CR),  DF = (2 × DP × DR) / (DP + DR),  JF = (2 × CF × DF) / (CF + DF)    (11)
where CF and DF are the cluster and dependency F1-measures respectively, and JF is the joint F1-measure that we use to measure the overall performance.
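To make these measures concrete, the following is a minimal sketch (ours, not the authors' code) that computes CP, CR, DP, DR and the combined F1 values, assuming an event model is given as a story-to-event mapping together with a set of directed event-to-event edges.

```python
from itertools import combinations

def story_pairs(story_to_event, edges):
    """Return the cluster pairs C(M) and dependency pairs D_pairs(M)."""
    stories = list(story_to_event)
    cluster, dependency = set(), set()
    for si, sj in combinations(stories, 2):
        if story_to_event[si] == story_to_event[sj]:
            cluster.add(frozenset((si, sj)))              # unordered pair
    for si in stories:
        for sj in stories:
            if (story_to_event[si], story_to_event[sj]) in edges:
                dependency.add((si, sj))                  # ordered pair
    return cluster, dependency

def f1(p, r):
    return 2 * p * r / (p + r) if p + r > 0 else 0.0

def evaluate(true_model, system_model):
    """Each model is a (story_to_event, edges) tuple; returns (CF, DF, JF)."""
    c_true, d_true = story_pairs(*true_model)
    c_sys, d_sys = story_pairs(*system_model)
    cp = len(c_true & c_sys) / float(len(c_sys)) if c_sys else 0.0
    cr = len(c_true & c_sys) / float(len(c_true)) if c_true else 0.0
    dp = len(d_true & d_sys) / float(len(d_sys)) if d_sys else 0.0
    dr = len(d_true & d_sys) / float(len(d_true)) if d_true else 0.0
    cf, df = f1(cp, cr), f1(dp, dr)
    return cf, df, f1(cf, df)
```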
6. TECHNIQUES
The task of event modeling can be split into two parts: clustering
the stories into unique events in the topic and constructing
dependencies among them. In the following subsections, we describe
techniques we developed for each of these sub-tasks.
6.1 Clustering
Each topic is composed of multiple events, so stories must be
clustered into events before we can model the dependencies among
them. For simplicity, all stories in the same topic are assumed to
be available at one time, rather than coming in a text stream. This
task is similar to traditional clustering but features other than word
distributions may also be critical in our application.
In many text clustering systems, the similarity between two
stories is the inner product of their tf-idf vectors, hence we use it as
one of our features. Stories in the same event tend to follow
temporal locality, so the time stamp of each story can be a useful feature.
Additionally, named-entities such as person and location names are
another obvious feature when forming events. Stories in the same
event tend to be related to the same person(s) and locations(s).
In this subsection, we present an agglomerative clustering
algorithm that combines all these features. In our experiments,
however, we study the effect of each feature on the performance
separately using modified versions of this algorithm.
6.1.1 Agglomerative clustering with
time decay (ACDT)
We initialize our events to singleton events (clusters), i.e., each
cluster contains exactly one story. So the similarity between two
events, to start with, is exactly the similarity between the
corresponding stories. The similarity swsum(s1, s2) between two stories s1 and s2 is given by the following formula:
swsum(s1, s2) = λ1 cos(s1, s2) + λ2 Loc(s1, s2) + λ3 Per(s1, s2)    (12)
Here λ1, λ2, λ3 are the weights on the different features. In this work, we determined them empirically, but in the future, one can consider more sophisticated learning techniques to determine them. cos(s1, s2) is the cosine similarity of the term vectors. Loc(s1, s2) is 1 if there is some location that appears in both stories, otherwise it is 0. Per(s1, s2) is similarly defined for person names.
We use time decay when calculating the similarity of story pairs, i.e., the larger the time difference between two stories, the smaller their similarity. The time period of each topic differs a lot, from a few days to a few months, so we normalize the time difference using the whole duration of that topic. The time decay adjusted similarity sim(s1, s2) is given by
sim(s1, s2) = swsum(s1, s2) · e^(−α |t1 − t2| / T)    (13)
where t1 and t2 are the time stamps of story 1 and story 2 respectively, T is the time difference between the earliest and the latest story in the given topic, and α is the time decay factor.
In each iteration, we find the most similar event pair and merge them. We have three different ways to compute the similarity between two events Eu and Ev:
• Average link: In this case the similarity is the average of the similarities of all pairs of stories between Eu and Ev as shown below:
sim(Eu, Ev) = ( Σ_{su ∈ Eu} Σ_{sv ∈ Ev} sim(su, sv) ) / (|Eu| |Ev|)    (14)
• Complete link: The similarity between two events is given by the smallest of the pair-wise similarities.
sim(Eu, Ev) = min_{su ∈ Eu, sv ∈ Ev} sim(su, sv)    (15)
• Single link: Here the similarity is given by the best of the pair-wise similarities.
sim(Eu, Ev) = max_{su ∈ Eu, sv ∈ Ev} sim(su, sv)    (16)
This process continues until the maximum similarity falls below
the threshold or the number of clusters is smaller than a given
number.
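A minimal sketch of this agglomerative procedure under the equations above is given below; it is our own illustration rather than the authors' implementation, and the story representation (dicts with 'vector', 'time', 'locations', and 'persons' fields) and the default threshold are assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity of two sparse tf-idf vectors given as dicts."""
    num = sum(w * v.get(t, 0.0) for t, w in u.items())
    den = math.sqrt(sum(w * w for w in u.values())) * \
          math.sqrt(sum(w * w for w in v.values()))
    return num / den if den else 0.0

def story_sim(s1, s2, lambdas, alpha, duration):
    """Equations 12 and 13: weighted feature sum with exponential time decay."""
    l1, l2, l3 = lambdas
    swsum = (l1 * cosine(s1['vector'], s2['vector'])
             + l2 * (1.0 if s1['locations'] & s2['locations'] else 0.0)
             + l3 * (1.0 if s1['persons'] & s2['persons'] else 0.0))
    return swsum * math.exp(-alpha * abs(s1['time'] - s2['time']) / duration)

def acdt(stories, lambdas=(1.0, 0.0, 0.0), alpha=1.0, threshold=0.05):
    """Agglomerative clustering with time decay, using average link."""
    duration = (max(s['time'] for s in stories)
                - min(s['time'] for s in stories)) or 1.0
    clusters = [[s] for s in stories]          # start from singleton events

    def avg_link(cu, cv):
        return sum(story_sim(a, b, lambdas, alpha, duration)
                   for a in cu for b in cv) / (len(cu) * len(cv))

    while len(clusters) > 1:
        pairs = [((i, j), avg_link(clusters[i], clusters[j]))
                 for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        (i, j), best = max(pairs, key=lambda p: p[1])
        if best < threshold:                   # stop when similarity is too low
            break
        clusters[i] += clusters.pop(j)         # merge the most similar pair
    return clusters
```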
6.2 Dependency modeling
Capturing dependencies is an extremely hard problem because
it may require a ‘deeper understanding" of the events in question.
A human annotator decides on dependencies not just based on the
information in the events but also based on his/her vast repertoire
of domain-knowledge and general understanding of how things
operate in the world. For example, in Figure 1 a human knows ‘Trial
and indictment of Osama" is influenced by ‘Evidence gathered by
CIA" because he/she understands the process of law in general.
We believe a robust model should incorporate such domain
knowledge in capturing dependencies, but in this work, as a first step, we
will rely on surface-features such as time-ordering of news stories
and word distributions to model them. Our experiments in later
sections demonstrate that such features are indeed useful in capturing
dependencies to a large extent.
In this subsection, we describe the models we considered for
capturing dependencies. In the rest of the discussion in this subsection,
we assume that we are already given the mapping f′ : S → E and we focus only on modeling the edges D′. First we define a couple of features that the following models will employ.
We define a 1-1 time-ordering function t : S → {1, · · · , n} that sorts stories in ascending order by their time of publication. Now, the event-time-ordering function t_e is defined as follows.
t_e : E → {1, · · · , m}  s.t.  ∀ Eu, Ev ∈ E, t_e(Eu) < t_e(Ev) ⟺ min_{su ∈ Eu} t(su) < min_{sv ∈ Ev} t(sv)    (17)
In other words, t_e time-orders events based on the time-ordering of their respective first stories.
We will also use the average cosine similarity between two events as a feature; it is defined as follows.
AvgSim(Eu, Ev) = ( Σ_{su ∈ Eu} Σ_{sv ∈ Ev} cos(su, sv) ) / (|Eu| |Ev|)    (18)
6.2.1 Complete-Link model
In this model, we assume that there are dependencies between all
pairs of events. The direction of dependency is determined by the
time-ordering of the first stories in the respective events. Formally,
the system edges are defined as follows.
D′ = {(Eu, Ev) | t_e(Eu) < t_e(Ev)}    (19)
where t_e is the event-time-ordering function. In other words, the dependency edge is directed from event Eu to event Ev if the first story in event Eu is earlier than the first story in event Ev. We point
out that this is not to be confused with the complete-link algorithm
in clustering. Although we use the same names, it will be clear
from the context which one we refer to.
6.2.2 Simple Thresholding
This model is an extension of the complete link model with an
additional constraint that there is a dependency between any two
events Eu and Ev only if the average cosine similarity between event Eu and event Ev is greater than a threshold T. Formally,
D′ = {(Eu, Ev) | AvgSim(Eu, Ev) > T, t_e(Eu) < t_e(Ev)}    (20)
6.2.3 Nearest Parent Model
In this model, we assume that each event can have at most one
parent. We define the set of dependencies as follows.
D′ = {(Eu, Ev) | AvgSim(Eu, Ev) > T, t_e(Ev) = t_e(Eu) + 1}    (21)
Thus, for each event Ev, the nearest parent model considers only the event immediately preceding it as defined by t_e as a potential candidate. The candidate is assigned as the parent only if the average similarity exceeds a pre-defined threshold T.
6.2.4 Best Similarity Model
This model also assumes that each event can have at most one
parent. An event Ev is assigned a parent Eu if and only if Eu is the most similar earlier event to Ev and the similarity exceeds a threshold T. Mathematically, this can be expressed as:
D′ = {(Eu, Ev) | AvgSim(Eu, Ev) > T, Eu = argmax_{Ew : t_e(Ew) < t_e(Ev)} AvgSim(Ew, Ev)}    (22)
6.2.5 Maximum Spanning Tree model
In this model, we first build a maximum spanning tree (MST)
using a greedy algorithm on the following fully connected weighted,
undirected graph whose vertices are the events and whose edges
are defined as follows:
G = {(Eu, Ev)} with edge weights w(Eu, Ev) = AvgSim(Eu, Ev)    (23)
Let MST(G) be the set of edges in the maximum spanning tree of G. Now our directed dependency edges are defined as follows.
D′ = {(Eu, Ev) | (Eu, Ev) ∈ MST(G), t_e(Eu) < t_e(Ev), AvgSim(Eu, Ev) > T}    (24)
Thus in this model, we assign dependencies between the most
similar events in the topic.
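For illustration, here is a small sketch (ours, under the reconstructed notation) of two of these models, nearest parent and best similarity; it assumes the events are already sorted by the publication time of their first story and that avg_sim implements equation 18.

```python
def nearest_parent(events, avg_sim, threshold):
    """Each event may take only the immediately preceding event as its parent."""
    edges = set()
    for v in range(1, len(events)):
        u = v - 1
        if avg_sim(events[u], events[v]) > threshold:
            edges.add((u, v))
    return edges

def best_similarity(events, avg_sim, threshold):
    """Each event takes its most similar earlier event as parent, if similar enough."""
    edges = set()
    for v in range(1, len(events)):
        best, u = max((avg_sim(events[w], events[v]), w) for w in range(v))
        if best > threshold:
            edges.add((u, v))
    return edges
```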
7. EXPERIMENTS
Our experiments consist of three parts. First we modeled only the event clustering part (defining the mapping function f′) using
clustering algorithms described in section 6.1. Then we modeled
only the dependencies by providing to the system the true clusters
and running only the dependency algorithms of section 6.2. Finally,
we experimented with combinations of clustering and dependency
algorithms to produce the complete event model. This way of
experimentation allows us to compare the performance of our
algorithms in isolation and in association with other components. The
following subsections present the three parts of our
experimentation.
7.1 Clustering
We have tried several variations of the ACDT algorithm to study
the effects of various features on the clustering performance. All
the parameters are learned by tuning on the training set. We also
tested the algorithms on the test set with parameters fixed at their
optimal values learned from training. We used agglomerative
clustering based on only cosine similarity as our clustering baseline.
Model                     best T   CP     CR     CF     P-value
cos+1-lnk                 0.15     0.41   0.56   0.43
cos+all-lnk               0.00     0.40   0.62   0.45
cos+Loc+avg-lnk           0.07     0.37   0.74   0.45
cos+Per+avg-lnk           0.07     0.39   0.70   0.46
cos+TD+avg-lnk            0.04     0.45   0.70   0.53   2.9e-4*
cos+N(T)+avg-lnk          -        0.41   0.62   0.48   7.5e-2
cos+N(T)+T+avg-lnk        0.03     0.42   0.62   0.49   2.4e-2*
cos+TD+N(T)+avg-lnk       -        0.44   0.66   0.52   7.0e-3*
cos+TD+N(T)+T+avg-lnk     0.03     0.47   0.64   0.53   1.1e-3*
Baseline (cos+avg-lnk)    0.05     0.39   0.67   0.46
Table 2: Comparison of agglomerative clustering algorithms (training set)
The results on the training and test sets are in Table 2 and 3
respectively. We use the Cluster F1-measure (CF) averaged over all topics
as our evaluation criterion.
Model                     CP     CR     CF     P-value
cos+1-lnk                 0.43   0.49   0.39
cos+all-lnk               0.43   0.62   0.47
cos+Loc+avg-lnk           0.37   0.73   0.45
cos+Per+avg-lnk           0.44   0.62   0.45
cos+TD+avg-lnk            0.48   0.70   0.54   0.014*
cos+N(T)+avg-lnk          0.41   0.71   0.51   0.31
cos+N(T)+T+avg-lnk        0.43   0.69*  0.52   0.14
cos+TD+N(T)+avg-lnk       0.43   0.76   0.54   0.025*
cos+TD+N(T)+T+avg-lnk     0.47   0.69   0.54   0.0095*
Baseline (cos+avg-lnk)    0.44   0.67   0.50
Table 3: Comparison of agglomerative clustering algorithms (test set)
A P-value marked with a * means that it is a statistically significant improvement over the baseline (95% confidence level, one-tailed T-test). The methods shown in tables 2 and 3 are:
• Baseline: tf-idf vector weights, cosine similarity, average link in clustering. In equation 12, λ1 = 1 and λ2 = λ3 = 0, and α = 0 in equation 13. This F-value is the maximum obtained by tuning the threshold.
• cos+1-lnk: Single link comparison (see equation 16) is used, where the similarity of two clusters is the maximum over all story pairs; other configurations are the same as the baseline run.
• cos+all-lnk: The complete link algorithm of equation 15 is used. Similar to single link, but it takes the minimum similarity over all story pairs.
• cos+Loc+avg-lnk: Location names are used when calculating similarity (λ2 > 0 in equation 12). All algorithms starting from this one use average link (equation 14), since single link and complete link do not show any improvement in performance.
• cos+Per+avg-lnk: λ3 > 0 in equation 12, i.e., we put some weight on person names in the similarity.
• cos+TD+avg-lnk: Time decay coefficient α = 1 in equation 13, which means the similarity between two stories at different ends of the topic is decayed by a factor of e^(−1).
• cos+N(T)+avg-lnk: Use the number of true events to control the agglomerative clustering algorithm. When the number of clusters reaches the number of true events, we stop merging clusters.
• cos+N(T)+T+avg-lnk: Similar to N(T), but agglomeration also stops if the maximal similarity falls below the threshold T.
• cos+TD+N(T)+avg-lnk: Similar to N(T), but the similarities are decayed, with α = 1 in equation 13.
• cos+TD+N(T)+T+avg-lnk: Similar to TD+N(T), but merging halts when the maximal similarity is smaller than the threshold T.
Our experiments demonstrate that single link and complete link
similarities perform worse than average link, which is reasonable
since average link is less sensitive to one or two story pairs. We
had expected locations and person names to improve the result, but
it is not the case. Analysis of topics shows that many on-topic
stories share the same locations or persons irrespective of the event
they belong to, so these features may be more useful in identifying
topics rather than events. Time decay is successful because events
are temporally localized, i.e., stories discussing the same event tend
to be adjacent to each other in terms of time. Also we noticed
that providing the number of true events improves the performance
since it guides the clustering algorithm to get correct granularity.
However, for most applications, it is not available. We used it only
as a cheat experiment for comparison with other algorithms. On
the whole, time decay proved to be the most powerful feature besides
cosine similarity on both training and test sets.
7.2 Dependencies
In this subsection, our goal is to model only dependencies. We
use the true mapping function f and, by implication, the true events E. We build our dependency structure D′ using all five models described in section 6.2. We first train our models on the 26 training topics. Training involves learning the best threshold T
for each of the models. We then test the performances of all the
trained models on the 27 test topics. We evaluate our performance
using the average values of Dependency Precision (DP),
Dependency Recall (DR) and Dependency F-measure (DF). We consider
the complete-link model to be our baseline since for each event, it
trivially considers all earlier events to be parents.
Table 4 lists the results on the training set. We see that while all
the algorithms except MST outperform the baseline complete-link
algorithm, the Nearest Parent algorithm is statistically significantly better than the baseline in terms of its DF-value using a one-tailed paired
T-test at 95% confidence level.
Model            best T   DP     DR     DF     P-value
Nearest Parent   0.025    0.55   0.62   0.56   0.04*
Best Similarity  0.02     0.51   0.62   0.53   0.24
MST              0.0      0.46   0.58   0.48
Simple Thresh.   0.045    0.45   0.76   0.52   0.14
Complete-link    -        0.36   0.93   0.48
Table 4: Results on the training set. Best T is the optimal value of the threshold T. * indicates the corresponding model is statistically significant compared to the baseline using a one-tailed, paired T-test at 95% confidence level.
In table 5 we present the comparison of the models on the test
set. Here, we do not use any tuning but set the threshold to the
corresponding optimal values learned from the training set. The
results throw some surprises: The nearest parent model, which was
significantly better than the baseline on training set, turns out to be
worse than the baseline on the test set. However all the other
models are better than the baseline including the best similarity which
is statistically significant. Notice that all the models that perform
better than the baseline in terms of DF, actually sacrifice their
recall performance compared to the baseline, but improve on their
precision substantially thereby improving their performance on the
DF-measure.
We notice that both simple-thresholding and best similarity are
better than the baseline on both training and test sets although the
improvement is not significant. On the whole, we observe that the
surface-level features we used capture the dependencies to a
reasonable level achieving a best value of 0.72 DF on the test set.
Although there is a lot of room for improvement, we believe this is
a good first step.
Model                      DP     DR     DF     P-value
Nearest Parent             0.61   0.60   0.60
Best Similarity            0.71   0.74   0.72   0.04*
MST                        0.70   0.68   0.69   0.22
Simple Thresh.             0.57   0.75   0.64   0.24
Baseline (Complete-link)   0.50   0.94   0.63
Table 5: Results on the test set
7.3 Combining Clustering and Dependencies
Now that we have studied the clustering and dependency
algorithms in isolation, we combine the best performing algorithms and
build the entire event model. Since none of the dependency
algorithms has been shown to be consistently and significantly better
than the others, we use all of them in our experimentation. From
the clustering techniques, we choose the best performing Cos+TD.
As a baseline, we use a combination of the baselines in each
component, i.e., cos for clustering and complete-link for dependencies.
Note that we need to retrain all the algorithms on the training
set because our objective function to optimize is now JF, the joint
F-measure. For each algorithm, we need to optimize both the
clustering threshold and the dependency threshold. We did this
empirically on the training set and the optimal values are listed in table
6.
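A minimal sketch of this two-threshold tuning is given below. It assumes a clustering routine, a dependency routine, and a scorer returning the joint F-measure JF; all three stand in for components defined elsewhere in the paper, and the grid ranges are our own choice.
```python
import itertools

def tune_thresholds(training_topics, cluster_fn, depend_fn, joint_f):
    """Grid-search the clustering and dependency thresholds to maximize
    the average joint F-measure (JF) over the training topics.

    cluster_fn(topic, t_c) -> predicted events
    depend_fn(events, t_d) -> predicted dependency edges
    joint_f(topic, events, edges) -> JF score for that topic
    """
    cluster_grid = [x / 200.0 for x in range(0, 41)]   # 0.000 .. 0.200
    depend_grid = [x / 200.0 for x in range(0, 41)]
    best = (None, None, -1.0)
    for t_c, t_d in itertools.product(cluster_grid, depend_grid):
        scores = []
        for topic in training_topics:
            events = cluster_fn(topic, t_c)
            edges = depend_fn(events, t_d)
            scores.append(joint_f(topic, events, edges))
        avg = sum(scores) / len(scores)
        if avg > best[2]:
            best = (t_c, t_d, avg)
    return best  # (best cluster threshold, best dependency threshold, JF)
```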
The results on the training set, also presented in table 6, indicate
that cos+TD+Simple-Thresholding is significantly better than the
baseline in terms of the joint F-value JF, using a one-tailed paired
T-test at 95% confidence level. On the whole, we notice that while the
clustering performance is comparable to the experiments in section
7.1, the overall performance is undermined by the low dependency
performance. Unlike our experiments in section 7.2 where we had
provided the true clusters to the system, in this case, the system
has to deal with deterioration in the cluster quality. Hence the
performance of the dependency algorithms has suffered substantially,
thereby lowering the overall performance.
The results on the test set present a very similar story as shown
in table 7. We also notice a fair amount of consistency in the
performance of the combination algorithms. cos+TD+Simple-Thresholding
outperforms the baseline significantly. The test set results also point
to the fact that the clustering component remains a bottleneck in
achieving an overall good performance.
8. DISCUSSION AND CONCLUSIONS
In this paper, we have presented a new perspective of modeling
news topics. Contrary to the TDT view of topics as a flat
collection of news stories, we view a news topic as a relational structure
of events interconnected by dependencies. In this paper, we also
proposed a few approaches for both clustering stories into events
and constructing dependencies among them. We developed a
time-decay based clustering approach that takes advantage of the
temporal localization of news stories on the same event and showed that it
performs significantly better than the baseline approach based on
cosine similarity. Our experiments also show that we can do fairly
well on dependencies using only surface features such as
cosine similarity and time-stamps of news stories, as long as true events
are provided to the system. However, the performance deteriorates
rapidly if the system has to discover the events by itself. Despite
that discouraging result, we have shown that our combined
algorithms perform significantly better than the baselines.
Our results indicate that modeling dependencies can be a very hard
problem, especially when the clustering performance is below the ideal
level. Errors in clustering have a magnifying effect on errors in
dependencies as we have seen in our experiments. Hence, we should
focus not only on improving dependencies but also on clustering at
the same time.
As part of our future work, we plan to investigate the data further
and discover new features that influence clustering as well
as dependencies. For modeling dependencies, a probabilistic
framework should be a better choice, since there is no definite
yes/no answer for the causal relations among some events. We also
hope to devise an iterative algorithm which can improve clustering
and dependency performance alternately as suggested by one of
the reviewers. We also hope to expand our labeled corpus further
to include more diverse news sources and larger and more complex
event structures.
Acknowledgments
We would like to thank the three anonymous reviewers for their
valuable comments. This work was supported in part by the Center
for Intelligent Information Retrieval and in part by
SPAWARSYSCEN-SD grant number N66001-02-1-8903. Any opinions, findings and
conclusions or recommendations expressed in this material are the
authors' and do not necessarily reflect those of the sponsor.
Model                        Cluster T  Dep. T  CP    CR    CF    DP    DR    DF    JF    P-value
cos+TD+Nearest-Parent        0.055      0.02    0.51  0.53  0.49  0.21  0.19  0.19  0.27
cos+TD+Best-Similarity       0.04       0.02    0.45  0.70  0.53  0.21  0.33  0.23  0.32
cos+TD+MST                   0.04       0.00    0.45  0.70  0.53  0.22  0.35  0.25  0.33
cos+TD+Simple-Thresholding   0.065      0.02    0.56  0.47  0.48  0.23  0.61  0.32  0.38  0.0004*
Baseline (cos+Complete-link) 0.10       -       0.58  0.31  0.38  0.20  0.67  0.30  0.33
Table 6: Combined results on the training set
Model                        CP    CR    CF    DP    DR    DF    JF    P-value
cos+TD+Nearest Parent        0.57  0.50  0.50  0.27  0.19  0.21  0.30
cos+TD+Best Similarity       0.48  0.70  0.54  0.31  0.27  0.26  0.35
cos+TD+MST                   0.48  0.70  0.54  0.31  0.30  0.28  0.37
cos+TD+Simple Thresholding   0.60  0.39  0.44  0.32  0.66  0.42  0.43  0.0081*
Baseline (cos+Complete-link) 0.66  0.27  0.36  0.30  0.72  0.43  0.39
Table 7: Combined results on the test set
| dependency recall;topic detection;correct granularity;flatcluster;thread;simple thresholding;mapping function;temporal locality;event threading;seminal event;time ordering;cluster;topic cluster;inter-related event;quick overview;directed edge;dependency precision;hidden markov model;cluster of topic;dependency;cosine similarity;novel feature;event;atomicity;flat hierarchy;microscopic event;automatic technique;agglomerative clustering;dependency f-measure;maximum spanning tree;event recognition;temporallocalization;event model;term vector;timedecay;news organization;time-ordering |
train_H-85 | Learning User Interaction Models for Predicting Web Search Result Preferences | Evaluating user preferences of web search results is crucial for search engine development, deployment, and maintenance. We present a real-world study of modeling the behavior of web search users to predict web search result preferences. Accurate modeling and interpretation of user behavior has important applications to ranking, click spam detection, web search personalization, and other tasks. Our key insight to improving robustness of interpreting implicit feedback is to model query-dependent deviations from the expected noisy user behavior. We show that our model of clickthrough interpretation improves prediction accuracy over state-of-the-art clickthrough methods. We generalize our approach to model user behavior beyond clickthrough, which results in higher preference prediction accuracy than models based on clickthrough information alone. We report results of a large-scale experimental evaluation that show substantial improvements over published implicit feedback interpretation methods. | 1. INTRODUCTION
Relevance measurement is crucial to web search and to
information retrieval in general. Traditionally, search relevance is
measured by using human assessors to judge the relevance of
query-document pairs. However, explicit human ratings are
expensive and difficult to obtain. At the same time, millions of
people interact daily with web search engines, providing valuable
implicit feedback through their interactions with the search
results. If we could turn these interactions into relevance
judgments, we could obtain large amounts of data for evaluating,
maintaining, and improving information retrieval systems.
Recently, automatic or implicit relevance feedback has
developed into an active area of research in the information
retrieval community, at least in part due to an increase in
available resources and to the rising popularity of web search.
However, most traditional IR work was performed over
controlled test collections and carefully-selected query sets and
tasks. Therefore, it is not clear whether these techniques will
work for general real-world web search. A significant distinction
is that web search is not controlled. Individual users may behave
irrationally or maliciously, or may not even be real users; all of
this affects the data that can be gathered. But the amount of the
user interaction data is orders of magnitude larger than anything
available in a non-web-search setting. By using the aggregated
behavior of large numbers of users (and not treating each user as
an individual expert) we can correct for the noise inherent in
individual interactions, and generate relevance judgments that
are more accurate than techniques not specifically designed for
the web search setting.
Furthermore, observations and insights obtained in laboratory
settings do not necessarily translate to real world usage. Hence,
it is preferable to automatically induce feedback interpretation
strategies from large amounts of user interactions. Automatically
learning to interpret user behavior would allow systems to adapt
to changing conditions, changing user behavior patterns, and
different search settings. We present techniques to automatically
interpret the collective behavior of users interacting with a web
search engine to predict user preferences for search results. Our
contributions include:
• A distributional model of user behavior, robust to noise
within individual user sessions, that can recover relevance
preferences from user interactions (Section 3).
• Extensions of existing clickthrough strategies to include
richer browsing and interaction features (Section 4).
• A thorough evaluation of our user behavior models, as well
as of previously published state-of-the-art techniques, over
a large set of web search sessions (Sections 5 and 6).
We discuss our results and outline future directions and
various applications of this work in Section 7, which concludes
the paper.
2. BACKGROUND AND RELATED WORK
Ranking search results is a fundamental problem in
information retrieval. The most common approaches in the
context of the web use both the similarity of the query to the
page content, and the overall quality of a page [3, 20]. A
state-of-the-art search engine may use hundreds of features to describe a
candidate page, employing sophisticated algorithms to rank
pages based on these features. Current search engines are
commonly tuned on human relevance judgments. Human
annotators rate a set of pages for a query according to perceived
relevance, creating the gold standard against which different
ranking algorithms can be evaluated. Reducing the dependence on
explicit human judgments by using implicit relevance feedback
has been an active topic of research.
Several research groups have evaluated the relationship
between implicit measures and user interest. In these studies,
both reading time and explicit ratings of interest are collected.
Morita and Shinoda [14] studied the amount of time that users
spent reading Usenet news articles and found that reading time
could predict a user's interest levels. Konstan et al. [13] showed
that reading time was a strong predictor of user interest in their
GroupLens system. Oard and Kim [15] studied whether implicit
feedback could substitute for explicit ratings in recommender
systems. More recently, Oard and Kim [16] presented a
framework for characterizing observable user behaviors using two
dimensions: the underlying purpose of the observed behavior and
the scope of the item being acted upon.
Goecks and Shavlik [8] approximated human labels by
collecting a set of page activity measures while users browsed the
World Wide Web. The authors hypothesized correlations between
a high degree of page activity and a user's interest. While the
results were promising, the sample size was small and the
implicit measures were not tested against explicit judgments of
user interest. Claypool et al. [6] studied how several implicit
measures related to the interests of the user. They developed a
custom browser called the Curious Browser to gather data, in a
computer lab, about implicit interest indicators and to probe for
explicit judgments of Web pages visited. Claypool et al. found
that the time spent on a page, the amount of scrolling on a page,
and the combination of time and scrolling have a strong positive
relationship with explicit interest, while individual scrolling
methods and mouse-clicks were not correlated with explicit
interest. Fox et al. [7] explored the relationship between implicit
and explicit measures in Web search. They built an instrumented
browser to collect data and then developed Bayesian models to
relate implicit measures and explicit relevance judgments for both
individual queries and search sessions. They found that
clickthrough was the most important individual variable but that
predictive accuracy could be improved by using additional
variables, notably dwell time on a page.
Joachims [9] developed valuable insights into the collection of
implicit measures, introducing a technique based entirely on
clickthrough data to learn ranking functions. More recently,
Joachims et al. [10] presented an empirical evaluation of
interpreting clickthrough evidence. By performing eye tracking
studies and correlating predictions of their strategies with explicit
ratings, the authors showed that it is possible to accurately
interpret clickthrough events in a controlled, laboratory setting. A
more comprehensive overview of studies of implicit measures is
described in Kelly and Teevan [12].
Unfortunately, the extent to which existing research applies to
real-world web search is unclear. In this paper, we build on
previous research to develop robust user behavior interpretation
models for the real web search setting.
3. LEARNING USER BEHAVIOR MODELS
As we noted earlier, real web search user behavior can be
noisy in the sense that user behaviors are only probabilistically
related to explicit relevance judgments and preferences. Hence,
instead of treating each user as a reliable expert, we aggregate
information from many unreliable user search session traces. Our
main approach is to model user web search behavior as if it were
generated by two components: a relevance component (query-specific
behavior influenced by the apparent result relevance) and
a background component (users clicking indiscriminately).
Our general idea is to model the deviations from the expected
user behavior. Hence, in addition to basic features, which we
will describe in detail in Section 3.3, we compute derived
features that measure the deviation of the observed feature value
for a given search result from the expected values for a result,
with no query-dependent information. We motivate our
intuitions with a particularly important behavior feature, result
clickthrough, analyzed next, and then introduce our general
model of user behavior that incorporates other user actions
(Section 3.2).
3.1 A Case Study in Click Distributions
As we discussed, we aggregate statistics across many user
sessions. A click on a result may mean that some user found the
result summary promising; it could also be caused by people
clicking indiscriminately. In general, individual user behavior,
clickthrough and otherwise, is noisy, and cannot be relied upon
for accurate relevance judgments. The data set is described in
more detail in Section 5.2. For the present it suffices to note that
we focus on a sample of 3,500 queries that were
drawn at random from query logs. For these queries we
aggregate click data over more than 120,000 searches performed
over a three week period. We also have explicit relevance
judgments for the top 10 results for each query.
Figure 3.1 shows the relative clickthrough frequency as a
function of result position. The aggregated click frequency at
result position p is calculated by first computing the frequency of
a click at p for each query (i.e., approximating the probability
that a randomly chosen click for that query would land on
position p). These frequencies are then averaged across queries
and normalized so that relative frequency of a click at the top
position is 1. The resulting distribution agrees with previous
observations that users click more often on top-ranked results.
This reflects the fact that search engines do a reasonable job of
ranking results, as well as biases to click top results and
noise; we attempt to separate these components in the analysis that
follows.
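The aggregation just described takes only a few lines. The sketch below is our own illustrative code, not the authors' implementation: it computes each query's click distribution over result positions, averages across queries, and normalizes so that the top position has relative frequency 1.
```python
from collections import Counter, defaultdict

def relative_click_frequency(click_log, max_pos=30):
    """click_log: iterable of (query, clicked_position) pairs, positions 1-based."""
    per_query = defaultdict(Counter)
    for query, pos in click_log:
        if 1 <= pos <= max_pos:
            per_query[query][pos] += 1

    # Per-query click probability by position, then averaged across queries.
    avg = [0.0] * (max_pos + 1)
    for counts in per_query.values():
        total = sum(counts.values())
        for pos, c in counts.items():
            avg[pos] += (c / total) / len(per_query)

    # Normalize so the top position has relative frequency 1.
    top = avg[1] if avg[1] > 0 else 1.0
    return [f / top for f in avg[1:]]

log = [("q1", 1), ("q1", 1), ("q1", 3), ("q2", 1), ("q2", 2)]
print(relative_click_frequency(log, max_pos=3))
```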
[Figure 3.1: Relative click frequency for top 30 result positions over 3,500 queries and 120,000 searches. X-axis: result position; y-axis: relative click frequency.]
First we consider the distribution of clicks for the relevant
documents for these queries. Figure 3.2 reports the aggregated
click distribution for queries with varying Position of Top
Relevant document (PTR). While there are many clicks above
the first relevant document for each distribution, there are
clearly peaks in click frequency for the first relevant result.
For example, for queries with top relevant result in position 2,
the relative click frequency at that position (second bar) is higher
than the click frequency at other positions for these queries.
Nevertheless, many users still click on the non-relevant results
in position 1 for such queries. This shows a stronger property of
the bias in the click distribution towards top results: users click
more often on results that are ranked higher, even when they are
not relevant.
[Figure 3.2: Relative click frequency for queries with varying PTR (Position of Top Relevant document). Curves for PTR = 1, 2, 3, 5, 10 and the background distribution; x-axis: result position; y-axis: relative click frequency.]
[Figure 3.3: Relative corrected click frequency for relevant documents with varying PTR (Position of Top Relevant). Curves for PTR = 1, 2, 3, 5, 10; x-axis: result position; y-axis: corrected relative click frequency.]
If we subtract the background distribution of Figure 3.1 from the
mixed distribution of Figure 3.2, we obtain the distribution in
Figure 3.3, where the remaining click frequency distribution can
be interpreted as the relevance component of the results. Note that
the corrected click distribution correlates closely with actual
result relevance as explicitly rated by human judges.
3.2 Robust User Behavior Model
Clicks on search results comprise only a small fraction of the
post-search activities typically performed by users. We now
introduce our techniques for going beyond the clickthrough
statistics and explicitly modeling post-search user behavior.
Although clickthrough distributions are heavily biased towards
top results, we have just shown how the 'relevance-driven' click
distribution can be recovered by correcting for the prior,
background distribution. We conjecture that other aspects of user
behavior (e.g., page dwell time) are similarly distorted. Our
general model includes two feature types for describing user
behavior: direct and deviational, where the former are the directly
measured values and the latter are the deviations from the expected values
estimated from the overall (query-independent) distributions of
the corresponding directly observed features.
More formally, we postulate that the observed value o of a
feature f for a query q and result r can be expressed as a mixture
of two components:
o(q, r, f) = C(f) + rel(q, r, f)    (1)
where C(f) is the prior background distribution for values of f
aggregated across all queries, and rel(q,r,f) is the component of
the behavior influenced by the relevance of the result r. As
illustrated above with the clickthrough feature, if we subtract the
background distribution (i.e., the expected clickthrough for a
result at a given position) from the observed clickthrough
frequency at a given position, we can approximate the relevance
component of the clickthrough value (see footnote 1). In order to reduce the
effect of individual user variations in behavior, we average
observed feature values across all users and search sessions for
each query-URL pair. This aggregation gives additional
robustness by not relying on individual noisy user interactions.
In summary, the user behavior for a query-URL pair is
represented by a feature vector that includes both the directly
observed features and the derived, corrected feature values.
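To make the direct/deviational split concrete, the sketch below (the names and data layout are our assumptions) averages a behavior feature per query-URL pair and then subtracts a query-independent expectation, following Equation 1, to obtain the derived deviation feature.
```python
from collections import defaultdict

def aggregate_feature(observations):
    """observations: iterable of (query, url, value) for one behavior feature.
    Returns the mean observed value per (query, url) pair."""
    sums, counts = defaultdict(float), defaultdict(int)
    for query, url, value in observations:
        sums[(query, url)] += value
        counts[(query, url)] += 1
    return {key: sums[key] / counts[key] for key in sums}

def deviation_features(direct, background):
    """direct: {(query, url): observed value o(q, r, f)}
    background: {(query, url): expected value C(f) for that result}.
    Per Equation 1, the difference approximates the relevance component."""
    return {key: direct[key] - background.get(key, 0.0) for key in direct}

# Example: dwell time in seconds, with an assumed query-independent expectation.
obs = [("q", "u1", 40.0), ("q", "u1", 20.0), ("q", "u2", 5.0)]
direct = aggregate_feature(obs)
expected = {("q", "u1"): 25.0, ("q", "u2"): 25.0}
print(deviation_features(direct, expected))
```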
We now describe the actual features we use to represent user
behavior.
3.3 Features for Representing User Behavior
Our goal is to devise a sufficiently rich set of features that
allow us to characterize when a user will be satisfied with a web
search result. Once the user has submitted a query, they perform
many different actions (reading snippets, clicking results,
navigating, refining their query) which we capture and
summarize. This information was obtained via opt-in client-side
instrumentation from users of a major web search engine.
This rich representation of user behavior is similar in many
respects to the recent work by Fox et al. [7]. An important
difference is that many of our features are (by design)
query-specific, whereas theirs was (by design) a general,
query-independent model of user behavior. Furthermore, we include
derived, distributional features computed as described above.
The features we use to represent user search interactions are
summarized in Table 3.1. For clarity, we organize the features
into the groups Query-text, Clickthrough, and Browsing.
Query-text features: Users decide which results to examine in
more detail by looking at the result title, URL, and summary; in
some cases, looking at the original document is not even
necessary. To model this aspect of user experience we defined
features to characterize the nature of the query and its relation to
the snippet text. These include features such as overlap between
the words in title and in query (TitleOverlap), the fraction of
words shared by the query and the result summary
(SummaryOverlap), etc.
Browsing features: Simple aspects of the user web page
interactions can be captured and quantified. These features are
used to characterize interactions with pages beyond the results
page. For example, we compute how long users dwell on a page
(TimeOnPage) or domain (TimeOnDomain), and the deviation
of dwell time from expected page dwell time for a query. These
features allow us to model the intra-query diversity of page
browsing behavior (e.g., navigational queries, on average, are
likely to have shorter page dwell time than transactional or
informational queries). We include both the direct features and
the derived features described above.
Clickthrough features: Clicks are a special case of user
interaction with the search engine. We include all the features
necessary to learn the clickthrough-based strategies described
in Sections 4.1 and 4.2. For example, for a query-URL pair we
provide the number of clicks for the result (ClickFrequency), as
well as whether there was a click on a result below or above the
current URL (IsClickBelow, IsClickAbove). The derived feature
values such as ClickRelativeFrequency and ClickDeviation are
computed as described in Equation 1.
Footnote 1: Of course, this is just a rough estimate, as the observed
background distribution also includes the relevance component.
Query-text features
TitleOverlap Fraction of shared words between query and title
SummaryOverlap Fraction of shared words between query and summary
QueryURLOverlap Fraction of shared words between query and URL
QueryDomainOverlap Fraction of shared words between query and domain
QueryLength Number of tokens in query
QueryNextOverlap Average fraction of words shared with next query
Browsing features
TimeOnPage Page dwell time
CumulativeTimeOnPage Cumulative time for all subsequent pages after search
TimeOnDomain Cumulative dwell time for this domain
TimeOnShortUrl Cumulative time on URL prefix, dropping parameters
IsFollowedLink 1 if followed link to result, 0 otherwise
IsExactUrlMatch 0 if aggressive normalization used, 1 otherwise
IsRedirected 1 if initial URL same as final URL, 0 otherwise
IsPathFromSearch 1 if only followed links after query, 0 otherwise
ClicksFromSearch Number of hops to reach page from query
AverageDwellTime Average time on page for this query
DwellTimeDeviation Deviation from overall average dwell time on page
CumulativeDeviation Deviation from average cumulative time on page
DomainDeviation Deviation from average time on domain
ShortURLDeviation Deviation from average time on short URL
Clickthrough features
Position Position of the URL in Current ranking
ClickFrequency Number of clicks for this query, URL pair
ClickRelativeFrequency Relative frequency of a click for this query and URL
ClickDeviation Deviation from expected click frequency
IsNextClicked 1 if there is a click on next position, 0 otherwise
IsPreviousClicked 1 if there is a click on previous position, 0 otherwise
IsClickAbove 1 if there is a click above, 0 otherwise
IsClickBelow 1 if there is click below, 0 otherwise
Table 3.1: Features used to represent post-search interactions
for a given query and search result URL
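As an illustration of the Query-text group, the snippet below computes a few of the overlap features from Table 3.1. The whitespace tokenization and URL normalization are our simplifying assumptions, not the system's actual preprocessing.
```python
def tokens(text):
    return set(text.lower().split())

def query_text_features(query, title, summary, url, domain):
    """A subset of the Query-text features in Table 3.1 (illustrative)."""
    q = tokens(query)
    def overlap(other):
        return len(q & tokens(other)) / len(q) if q else 0.0
    return {
        "TitleOverlap": overlap(title),
        "SummaryOverlap": overlap(summary),
        "QueryURLOverlap": overlap(url.replace("/", " ").replace(".", " ")),
        "QueryDomainOverlap": overlap(domain.replace(".", " ")),
        "QueryLength": len(query.split()),
    }

print(query_text_features(
    query="seattle coffee shops",
    title="Best coffee shops in Seattle",
    summary="A guide to independent coffee shops around Seattle.",
    url="http://example.com/seattle/coffee",
    domain="example.com"))
```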
3.4 Learning a Predictive Behavior Model
Having described our features, we now turn to the actual
method of mapping the features to user preferences. We attempt
to learn a general implicit feedback interpretation strategy
automatically instead of relying on heuristics or insights. We
consider this approach to be preferable to heuristic strategies,
because we can always mine more data instead of relying (only)
on our intuition and limited laboratory evidence. Our general
approach is to train a classifier to induce weights for the user
behavior features, and consequently derive a predictive model of
user preferences. The training is done by comparing a wide range
of implicit behavior measures with explicit user judgments for a
set of queries.
For this, we use a large random sample of queries in the search
query log of a popular web search engine, the sets of results
(identified by URLs) returned for each of the queries, and any
explicit relevance judgments available for each query/result pair.
We can then analyze the user behavior for all the instances where
these queries were submitted to the search engine.
To learn the mapping from features to relevance preferences,
we use a scalable implementation of neural networks, RankNet
[4], capable of learning to rank a set of given items. More
specifically, for each judged query we check if a result link has
been judged. If so, the label is assigned to the query/URL pair and
to the corresponding feature vector for that search result. These
vectors of feature values corresponding to URLs judged relevant
or non-relevant by human annotators become our training set.
RankNet has demonstrated excellent performance in learning to
rank objects in a supervised setting, hence we use RankNet for
our experiments.
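RankNet itself is a neural network trained on pairwise preferences and is not reproduced here. As a much simpler stand-in, the sketch below fits a linear scorer with a pairwise logistic loss by gradient descent; it only illustrates the idea of learning feature weights from preference pairs and is not the authors' model.
```python
import math
import random

def train_pairwise_linear(pairs, n_features, lr=0.1, epochs=50, seed=0):
    """pairs: list of (x_preferred, x_other) feature vectors (lists of floats).
    Minimizes the pairwise logistic loss log(1 + exp(-(w.x_pref - w.x_other)))."""
    rng = random.Random(seed)
    pairs = list(pairs)
    w = [0.0] * n_features
    for _ in range(epochs):
        rng.shuffle(pairs)
        for x_pos, x_neg in pairs:
            margin = sum(wi * (a - b) for wi, a, b in zip(w, x_pos, x_neg))
            grad_coef = -1.0 / (1.0 + math.exp(margin))   # d(loss)/d(margin)
            for i in range(n_features):
                w[i] -= lr * grad_coef * (x_pos[i] - x_neg[i])
    return w

# Toy data: feature 0 is informative, feature 1 is noise.
pairs = [([1.0, 0.2], [0.1, 0.3]), ([0.9, 0.8], [0.2, 0.9]), ([0.8, 0.1], [0.3, 0.0])]
print(train_pairwise_linear(pairs, n_features=2))
```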
4. PREDICTING USER PREFERENCES
In our experiments, we explore several models for predicting
user preferences. These models range from using no implicit
user feedback to using all available implicit user feedback.
Ranking search results to predict user preferences is a
fundamental problem in information retrieval. Most traditional
IR and web search approaches use a combination of page and
link features to rank search results, and a representative
state-of-the-art ranking system will be used as our baseline ranker
(Section 4.1). At the same time, user interactions with a search
engine provide a wealth of information. A commonly considered
type of interaction is user clicks on search results. Previous work
[9], as described above, also examined which results were
skipped (e.g., 'skip above' and 'skip next') and other related
strategies to induce preference judgments from the users'
skipping over results and not clicking on following results. We
have also added refinements of these strategies to take into
account the variability observed in realistic web scenarios. We
describe these strategies in Section 4.2.
As clickthroughs are just one aspect of user interaction, we
extend the relevance estimation by introducing a machine
learning model that incorporates clicks as well as other aspects
of user behavior, such as follow-up queries and page dwell time
(Section 4.3). We conclude this section by briefly describing our
baseline: a state-of-the-art ranking algorithm used by an
operational web search engine.
4.1 Baseline Model
A key question is whether browsing behavior can provide
information absent from existing explicit judgments used to train
an existing ranker. For our baseline system we use a
state-of-the-art page ranking system currently used by a major web search
engine. Hence, we will call this system Current for the
subsequent discussion. While the specific algorithms used by the
search engine are beyond the scope of this paper, the algorithm
ranks results based on hundreds of features such as query to
document similarity, query to anchor text similarity, and
intrinsic page quality. The Current web search engine rankings
provide a strong system for comparison and experiments of the
next two sections.
4.2 Clickthrough Model
If we assume that every user click was motivated by a rational
process that selected the most promising result summary, we can
then interpret each click as described in Joachims et al. [10]. By
studying eye tracking and comparing clicks with explicit
judgments, they identified a few basic strategies. We discuss the
two strategies that performed best in their experiments, Skip
Above and Skip Next.
Strategy SA (Skip Above): For a set of results for a query
and a clicked result at position p, all unclicked results
ranked above p are predicted to be less relevant than the
result at p.
In addition to information about results above the clicked
result, we also have information about the result immediately
following the clicked one. The eye tracking study performed by
Joachims et al. [10] showed that users usually consider the result
immediately following the clicked result in the current ranking. Their
Skip Next strategy uses this observation to predict that a result
following the clicked result at p is less relevant than the clicked
result, with accuracy comparable to the SA strategy above. For
better coverage, we combine the SA strategy with this extension to
derive the Skip Above + Skip Next strategy:
Strategy SA+N (Skip Above + Skip Next): This strategy
predicts all un-clicked results immediately following a
clicked result as less relevant than the clicked result, and
combines these predictions with those of the SA strategy
above.
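Both strategies can be stated compactly in code. The sketch below is our own rendering: it takes a ranked result list and the set of clicked positions and emits (preferred, less preferred) pairs.
```python
def skip_above(results, clicked_positions):
    """SA: a clicked result is preferred over every unclicked result above it.
    results: list of result ids in rank order (position 1 first).
    clicked_positions: set of 1-based clicked positions."""
    prefs = []
    for p in sorted(clicked_positions):
        for above in range(1, p):
            if above not in clicked_positions:
                prefs.append((results[p - 1], results[above - 1]))
    return prefs

def skip_above_plus_next(results, clicked_positions):
    """SA+N: additionally prefer a clicked result over the unclicked result
    immediately following it."""
    prefs = skip_above(results, clicked_positions)
    for p in sorted(clicked_positions):
        nxt = p + 1
        if nxt <= len(results) and nxt not in clicked_positions:
            prefs.append((results[p - 1], results[nxt - 1]))
    return prefs

ranked = ["a", "b", "c", "d"]
print(skip_above_plus_next(ranked, {2}))   # b preferred over a (above) and c (next)
```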
We experimented with variations of these strategies, and found
that SA+N outperformed both SA and the original Skip Next
strategy, so we will consider the SA and SA+N strategies in the
rest of the paper. These strategies are motivated and empirically
tested for individual users in a laboratory setting. As we will
show, these strategies do not work as well in a real web search
setting due to the inherent inconsistency and noisiness of individual
users' behavior.
The general approach for using our clickthrough models
directly is to filter clicks to those that reflect higher-than-chance
click frequency. We then use the same SA and SA+N strategies,
but only for clicks that have higher-than-expected frequency
according to our model. For this, we estimate the relevance
component rel(q,r,f) of the observed clickthrough feature f as the
deviation from the expected (background) clickthrough
distribution C(f).
Strategy CD (deviation d): For a given query, compute the
observed click frequency distribution o(r, p) for all results r
in positions p. The click deviation for a result r in position p,
dev(r, p) is computed as:
dev(r, p) = o(r, p) − C(p)
where C(p) is the expected clickthrough at position p. If
dev(r,p)>d, retain the click as input to the SA+N strategy
above, and apply the SA+N strategy over the filtered set of click
events.
The choice of d selects the tradeoff between recall and
precision. While the above strategy extends SA and SA+N, it still
assumes that a (filtered) clicked result is preferred over all
unclicked results presented to the user above a clicked position.
However, for informational queries, multiple results may be
clicked, with varying frequency. Hence, it is preferable to
individually compare results for a query by considering the
difference between the estimated relevance components of the
click distribution of the corresponding query results. We now
define a generalization of the previous clickthrough interpretation
strategy:
Strategy CDiff (margin m): Compute the deviation dev(r,p) for
each result r1...rn in positions p1...pn. For each pair of results ri and
rj, predict a preference of ri over rj iff dev(ri,pi)-dev(rj,pj)>m.
As in CD, the choice of m selects the tradeoff between recall
and precision. The pairs may be preferred in the original order or
in reverse of it. Given the margin, two results might be effectively
indistinguishable, but only one can possibly be preferred over the
other. Intuitively, CDiff generalizes the skip idea above to include
cases where the user skipped (i.e., clicked less than expected)
on rj and preferred (i.e., clicked more than expected) on ri.
Furthermore, this strategy allows for differentiation within the set
of clicked results, making it more appropriate to noisy user
behavior.
CDiff and CD are complementary. CDiff is a generalization of
the clickthrough frequency model of CD, but it ignores the
positional information used in CD. Hence, combining the two
strategies to improve coverage is a natural approach:
Strategy CD+CDiff (deviation d, margin m): Union
of CD and CDiff predictions.
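Putting the distribution-based strategies together, the sketch below (illustrative; the parameter handling is our assumption) filters clicks by their deviation from the expected position-dependent click frequency for CD, derives pairwise preferences from deviation differences for CDiff, and takes their union for CD+CDiff. An SA+N implementation such as the one sketched earlier is passed in as sa_n.
```python
def deviations(observed, background):
    """observed[i]: click frequency for the result at position i+1.
    background[i]: expected click frequency at that position."""
    return [o - b for o, b in zip(observed, background)]

def cd_preferences(results, observed, background, d, sa_n):
    """CD: keep only clicks whose deviation exceeds d, then apply SA+N."""
    dev = deviations(observed, background)
    trusted = {i + 1 for i, x in enumerate(dev) if observed[i] > 0 and x > d}
    return sa_n(results, trusted)

def cdiff_preferences(results, observed, background, m):
    """CDiff: prefer r_i over r_j iff dev(r_i) - dev(r_j) > m."""
    dev = deviations(observed, background)
    prefs = []
    for i in range(len(results)):
        for j in range(len(results)):
            if i != j and dev[i] - dev[j] > m:
                prefs.append((results[i], results[j]))
    return prefs

def cd_plus_cdiff(results, observed, background, d, m, sa_n):
    """CD+CDiff: union of the two prediction sets."""
    return set(cd_preferences(results, observed, background, d, sa_n)) | \
           set(cdiff_preferences(results, observed, background, m))
```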
Other variations of the above strategies were considered, but
these five methods cover the range of observed performance.
4.3 General User Behavior Model
The strategies described in the previous section generate
orderings based solely on observed clickthrough frequencies. As
we discussed, clickthrough is just one, albeit important, aspect
of user interactions with web search engine results. We now
present our general strategy that relies on the automatically
derived predictive user behavior models (Section 3).
The UserBehavior Strategy: For a given query, each
result is represented with the features in Table 3.1.
Relative user preferences are then estimated using the
learned user behavior model described in Section 3.4.
Recall that to learn a predictive behavior model we used the
features from Table 3.1 along with explicit relevance judgments
as input to RankNet which learns an optimal weighting of
features to predict preferences.
This strategy models user interaction with the search engine,
allowing it to benefit from the wisdom of crowds interacting
with the results and the pages beyond. As our experiments in the
subsequent sections demonstrate, modeling a richer set of user
interactions beyond clickthroughs results in more accurate
predictions of user preferences.
5. EXPERIMENTAL SETUP
We now describe our experimental setup. We first describe
the methodology used, including our evaluation metrics (Section
5.1). Then we describe the datasets (Section 5.2) and the
methods we compared in this study (Section 5.3).
5.1 Evaluation Methodology and Metrics
Our evaluation focuses on the pairwise agreement between
preferences for results. This allows us to compare to previous
work [9,10]. Furthermore, for many applications such as tuning
ranking functions, pairwise preference can be used directly for
training [1,4,9]. The evaluation is based on comparing
preferences predicted by various models to the correct
preferences derived from the explicit user relevance judgments.
We discuss other applications of our models beyond web search
ranking in Section 7.
To create our set of test pairs we take each query and
compute the cross-product between all search results, returning
preferences for pairs according to the order of the associated
relevance labels. To avoid ambiguity in evaluation, we discard
all ties (i.e., pairs with equal label).
In order to compute the accuracy of our preference predictions
with respect to the correct preferences, we adapt the standard
Recall and Precision measures [20]. While our task of computing
pairwise agreement is different from the absolute relevance
ranking task, the metrics are used in a similar way.
Specifically, we report the average query recall and precision.
For our task, Query Precision and Query Recall for a query q are
defined as:
• Query Precision: Fraction of predicted preferences for results
for q that agree with preferences obtained from explicit
human judgment.
• Query Recall: Fraction of preferences obtained from explicit
human judgment for q that were correctly predicted.
The overall Recall and Precision are computed as the average of
Query Recall and Query Precision, respectively. A drawback of
this evaluation measure is that some preferences may be more
valuable than others, which pairwise agreement does not capture.
We discuss this issue further when we consider extensions to the
current work in Section 7.
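The metrics reduce to set arithmetic over preference pairs, as in the sketch below (our own code). How queries with no predictions are counted toward precision is our assumption here, not something the paper spells out in this section.
```python
def query_precision_recall(predicted, gold):
    """predicted, gold: sets of (preferred, less_preferred) pairs for one query."""
    correct = len(predicted & gold)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    return precision, recall

def average_precision_recall(per_query_predictions, per_query_gold):
    """Average Query Precision and Query Recall over all judged queries."""
    ps, rs = [], []
    for q in per_query_gold:
        p, r = query_precision_recall(per_query_predictions.get(q, set()),
                                      per_query_gold[q])
        ps.append(p)
        rs.append(r)
    return sum(ps) / len(ps), sum(rs) / len(rs)

gold = {"q1": {("a", "b"), ("a", "c")}, "q2": {("x", "y")}}
pred = {"q1": {("a", "b"), ("c", "a")}, "q2": {("x", "y")}}
print(average_precision_recall(pred, gold))
```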
5.2 Datasets
For evaluation we used 3,500 queries that were randomly
sampled from the query logs of a major web search engine. For each
query the top 10 returned search results were manually rated on a
6-point scale by trained judges as part of ongoing relevance
improvement effort. In addition, for these queries we also had user
interaction data for more than 120,000 instances of these queries.
The user interactions were harvested from anonymous
browsing traces that immediately followed a query submitted to
the web search engine. This data collection was part of voluntary
opt-in feedback submitted by users from October 11 through
October 31. These three weeks (21 days) of user interaction data
was filtered to include only the users in the English-U.S. market.
In order to better understand the effect of the amount of user
interaction data available for a query on accuracy, we created
subsets of our data (Q1, Q10, and Q20) that contain different
amounts of interaction data:
• Q1: Human-rated queries with at least 1 click on results
recorded (3500 queries, 28,093 query-URL pairs)
• Q10: Queries in Q1 with at least 10 clicks (1300 queries,
18,728 query-URL pairs).
• Q20: Queries in Q1 with at least 20 clicks (1000 queries total,
12,922 query-URL pairs).
These datasets were collected as part of normal user experience
and hence have different characteristics than previously reported
datasets collected in laboratory settings. Furthermore, the data
size is an order of magnitude larger than in any study reported in the
literature.
5.3 Methods Compared
We considered a number of methods for comparison. We
compared our UserBehavior model (Section 4.3) to previously
published implicit feedback interpretation techniques and some
variants of these approaches (Section 4.2), and to the current
search engine ranking based on query and page features alone
(Section 4.1). Specifically, we compare the following strategies:
• SA: The Skip Above clickthrough strategy (Section 4.2)
• SA+N: A more comprehensive extension of SA that takes
better advantage of current search engine ranking.
• CD: Our refinement of SA+N that takes advantage of our
mixture model of clickthrough distribution to select trusted
clicks for interpretation (Section 4.2).
• CDiff: Our generalization of the CD strategy that explicitly
uses the relevance component of clickthrough probabilities to
induce preferences between search results (Section 4.2).
• CD+CDiff: The strategy combining CD and CDiff as the
union of predicted preferences from both (Section 4.2).
• UserBehavior: We order predictions based on decreasing
highest score of any page. In our preliminary experiments
we observed that higher ranker scores indicate higher
confidence in the predictions. This heuristic allows us to
do a graceful recall-precision tradeoff using the score of the
highest ranked result to threshold the queries (Section 4.3)
• Current: Current search engine ranking (section 4.1). Note
that the Current ranker implementation was trained over a
superset of the rated query/URL pairs in our datasets, but
using the same truth labels as we do for our evaluation.
Training/Test Split: The only strategy for which splitting the
datasets into training and test was required was the
UserBehavior method. To evaluate UserBehavior we train and
validate on 75% of labeled queries, and test on the remaining
25%. The sampling was done per query (i.e., all results for a
chosen query were included in the respective dataset, and there
was no overlap in queries between training and test sets).
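The per-query split can be written directly, as in the short sketch below (illustrative): whole queries, not individual query-URL pairs, are assigned to the training or test side so that no query appears in both.
```python
import random

def split_by_query(labeled_pairs, train_fraction=0.75, seed=42):
    """labeled_pairs: list of (query, url, label). Entire queries are assigned
    to either the training or the test side, with no overlap."""
    queries = sorted({q for q, _, _ in labeled_pairs})
    rng = random.Random(seed)
    rng.shuffle(queries)
    cut = int(len(queries) * train_fraction)
    train_q = set(queries[:cut])
    train = [x for x in labeled_pairs if x[0] in train_q]
    test = [x for x in labeled_pairs if x[0] not in train_q]
    return train, test
```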
It is worth noting that both the ad-hoc SA and SA+N, as well
as the distribution-based strategies (CD, CDiff, and CD+CDiff),
do not require a separate training and test set, since they are
based on heuristics for detecting anomalous click frequencies
for results. Hence, all strategies except for UserBehavior were
tested on the full set of queries and associated relevance
preferences, while UserBehavior was tested on a randomly
chosen hold-out subset of the queries as described above. To
make sure we are not favoring UserBehavior, we also tested all
other strategies on the same hold-out test sets, resulting in the
same accuracy results as testing over the complete datasets.
6. RESULTS
We now turn to experimental evaluation of predicting
relevance preference of web search results. Figure 6.1 shows the
recall-precision results over the Q1 query set (Section 5.2). The
results indicate that the previous click interpretation strategies, SA
and SA+N, perform suboptimally in this setting, exhibiting
precision of 0.627 and 0.638, respectively. Furthermore, there is no
mechanism to do recall-precision trade-off with SA and SA+N,
as they do not provide prediction confidence. In contrast, our
clickthrough distribution-based techniques CD and CD+CDiff
exhibit somewhat higher precision than SA and SA+N (0.648
and 0.717, respectively, at a recall of 0.08, the maximum achieved by SA or
SA+N).
[Figure 6.1: Precision vs. Recall of SA, SA+N, CD, CDiff, CD+CDiff, UserBehavior, and Current relevance prediction methods over the Q1 dataset. X-axis: recall; y-axis: precision.]
Interestingly, CDiff alone exhibits precision equal to SA
(0.627) at the same recall of 0.08. In contrast, by combining CD
and CDiff strategies (CD+CDiff method) we achieve the best
performance of all clickthrough-based strategies, exhibiting
precision of above 0.66 for recall values up to 0.14, and higher at
lower recall levels. Clearly, aggregating and intelligently
interpreting clickthroughs results in significant gains for realistic
web search compared to previously described strategies. However, even
the CD+CDiff clickthrough interpretation strategy can be
improved upon by automatically learning to interpret the
aggregated clickthrough evidence.
But first, we consider the best performing strategy,
UserBehavior. Incorporating post-search navigation history in
addition to clickthroughs (Browsing features) results in the
highest recall and precision among all methods compared. UserBehavior
exhibits precision of above 0.7 at a recall of 0.16, significantly
outperforming our baseline and clickthrough-only strategies.
Furthermore, UserBehavior is able to achieve high recall (as high as
0.43) while maintaining precision (0.67) significantly higher than
the baseline ranking.
To further analyze the value of different dimensions of implicit
feedback modeled by the UserBehavior strategy, we consider each
group of features in isolation. Figure 6.2 reports Precision vs.
Recall for each feature group. Interestingly, Query-text alone has
low accuracy (only marginally better than Random). Furthermore,
Browsing features alone have higher precision (with lower
maximum recall achieved) than considering all of the features in
our UserBehavior model. Applying different machine learning
methods for combining classifier predictions may increase
performance of using all features for all recall values.
[Figure 6.2: Precision vs. recall for predicting relevance with each group of features individually (All Features, Clickthrough, Query-text, Browsing). X-axis: recall; y-axis: precision.]
[Figure 6.3: Recall vs. Precision of CD+CDiff and UserBehavior for query sets Q1, Q10, and Q20 (queries with at least 1, at least 10, and at least 20 clicks respectively). X-axis: recall; y-axis: precision.]
Interestingly, the ranker trained over Clickthrough-only
features achieves substantially higher recall and precision than
human-designed clickthrough-interpretation strategies described
earlier. For example, the clickthrough-trained classifier achieves
0.67 precision at 0.42 Recall vs. the maximum recall of 0.14
achieved by the CD+CDiff strategy.
Our clickthrough and user behavior interpretation strategies
rely on extensive user interaction data. We consider the effects
of having sufficient interaction data available for a query before
proposing a re-ranking of results for that query. Figure 6.3
reports recall-precision curves for the CD+CDiff and
UserBehavior methods for different test query sets with at least
1 click (Q1), 10 clicks (Q10) and 20 clicks (Q20) available per
query. Not surprisingly, CD+CDiff improves with more clicks.
This indicates that accuracy will improve as more user
interaction histories become available, and more queries from
the Q1 set will have comprehensive interaction histories.
Similarly, the UserBehavior strategy performs better for queries
with 10 and 20 clicks, although the improvement is less dramatic
than for CD+CDiff. For queries with sufficient clicks, CD+CDiff
exhibits precision comparable with UserBehavior at lower recall.
[Figure 6.4: Recall of CD+CDiff and UserBehavior strategies at fixed minimum precision 0.7 for varying amounts of user activity data (7, 12, 17, 21 days). X-axis: days of user interaction data harvested; y-axis: recall.]
Our techniques often do not make relevance predictions for
search results (i.e., if no interaction data is available for the
lower-ranked results), consequently maintaining higher precision
at the expense of recall. In contrast, the current search engine
always makes a prediction for every result for a given query. As
a consequence, the recall of Current is high (0.627), at the
expense of lower precision. As another dimension of acquiring
training data, we consider the learning curve with respect to the
amount (days) of training data available. Figure 6.4 reports the
Recall of CD+CDiff and UserBehavior strategies for varying
amounts of training data collected over time. We fixed minimum
precision for both strategies at 0.7 as a point substantially higher
than the baseline (0.625). As expected, Recall of both strategies
improves quickly with more days of interaction data examined.
We now briefly summarize our experimental results. We
showed that by intelligently aggregating user clickthroughs
across queries and users, we can achieve higher accuracy on
predicting user preferences. Because of the skewed distribution
of user clicks, our clickthrough-only strategies have high
precision but low recall (i.e., they do not attempt to predict the relevance
of many search results). Nevertheless, our CD+CDiff
clickthrough strategy outperforms most recent state-of-the-art
results by a large margin (0.72 precision for CD+CDiff vs. 0.64
for SA+N) at the highest recall level of SA+N.
Furthermore, by considering the comprehensive UserBehavior
features that model user interactions after the search and beyond
the initial click, we can achieve substantially higher precision
and recall than considering clickthrough alone. Our
UserBehavior strategy achieves recall of over 0.43 with precision
of over 0.67 (with much higher precision at lower recall levels),
and substantially outperforms the current search engine preference
ranking and all other implicit feedback interpretation methods.
7. CONCLUSIONS AND FUTURE WORK
Our paper is the first, to our knowledge, to interpret
post-search user behavior to estimate user preferences in a real web
search setting. We showed that our robust models result in higher
prediction accuracy than previously published techniques.
We introduced new, robust, probabilistic techniques for
interpreting clickthrough evidence by aggregating across users
and queries. Our methods result in clickthrough interpretation
substantially more accurate than previously published results not
specifically designed for web search scenarios. Our methods'
predictions of relevance preferences are substantially more
accurate than the current state-of-the-art search result ranking that
does not consider user interactions. We also presented a general
model for interpreting post-search user behavior that incorporates
clickthrough, browsing, and query features. By considering the
complete search experience after the initial query and click, we
demonstrated prediction accuracy far exceeding that of
interpreting only the limited clickthrough information.
Furthermore, we showed that automatically learning to
interpret user behavior results in substantially better performance
than the human-designed ad-hoc clickthrough interpretation
strategies. Another benefit of automatically learning to interpret
user behavior is that such methods can adapt to changing
conditions and changing user profiles. For example, the user
behavior model for intranet search may differ from the model for web
search. Our general UserBehavior method would be able
to adapt to these changes by automatically learning to map new
behavior patterns to explicit relevance ratings.
A natural application of our preference prediction models is to
improve web search ranking [1]. In addition, our work has many
potential applications including click spam detection, search
abuse detection, personalization, and domain-specific ranking. For
example, our automatically derived behavior models could be
trained on examples of search abuse or click spam behavior
instead of relevance labels. Alternatively, our models could be
used directly to detect anomalies in user behavior, either due to
abuse or to operational problems with the search engine.
While our techniques perform well on average, our
assumptions about clickthrough distributions (and learning the
user behavior models) may not hold equally well for all queries.
For example, queries with divergent access patterns (e.g., for
ambiguous queries with multiple meanings) may result in
behavior inconsistent with the model learned for all queries.
Hence, clustering queries and learning different predictive models
for each query type is a promising research direction. Query
distributions also change over time, and it would be productive to
investigate how that affects the predictive ability of these models.
Furthermore, some predicted preferences may be more valuable
than others, and we plan to investigate different metrics to capture
the utility of the predicted preferences.
As we showed in this paper, using the wisdom of crowds can
give us accurate interpretation of user interactions even in the
inherently noisy web search setting. Our techniques allow us to
automatically predict relevance preferences for web search results
with accuracy greater than the previously published methods. The
predicted relevance preferences can be used for automatic
relevance evaluation and tuning, for deploying search in new
settings, and ultimately for improving the overall web search
experience.
8. REFERENCES
[1] E. Agichtein, E. Brill, and S. Dumais, Improving Web Search Ranking
by Incorporating User Behavior, in Proceedings of the ACM
Conference on Research and Development on Information Retrieval
(SIGIR), 2006
[2] J. Allan. HARD Track Overview in TREC 2003: High Accuracy
Retrieval from Documents. In Proceedings of TREC 2003, 24-37,
2004.
[3] S. Brin and L. Page, The Anatomy of a Large-scale Hypertextual Web
Search Engine. In Proceedings of WWW7, 107-117, 1998.
[4] C.J.C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N.
Hamilton, and G. Hullender, Learning to Rank using Gradient
Descent, in Proceedings of the International Conference on Machine
Learning (ICML), 2005
[5] D.M. Chickering, The WinMine Toolkit, Microsoft Technical Report
MSR-TR-2002-103, 2002
[6] M. Claypool, D. Brown, P. Lee and M. Waseda. Inferring user interest,
in IEEE Internet Computing. 2001
[7] S. Fox, K. Karnawat, M. Mydland, S. T. Dumais and T. White.
Evaluating implicit measures to improve the search experience. In
ACM Transactions on Information Systems, 2005
[8] J. Goecks and J. Shavlik. Learning users' interests by unobtrusively
observing their normal behavior. In Proceedings of the IJCAI
Workshop on Machine Learning for Information Filtering. 1999.
[9] T. Joachims, Optimizing Search Engines Using Clickthrough Data, in
Proceedings of the ACM Conference on Knowledge Discovery and
Datamining (SIGKDD), 2002
[10] T. Joachims, L. Granka, B. Pang, H. Hembrooke and G. Gay,
Accurately Interpreting Clickthrough Data as Implicit Feedback, in
Proceedings of the ACM Conference on Research and Development
on Information Retrieval (SIGIR), 2005
[11] T. Joachims, Making Large-Scale SVM Learning Practical. Advances
in Kernel Methods, in Support Vector Learning, MIT Press, 1999
[12] D. Kelly and J. Teevan, Implicit feedback for inferring user preference:
A bibliography. In SIGIR Forum, 2003
[13] J. Konstan, B. Miller, D. Maltz, J. Herlocker, L. Gordon and J. Riedl.
GroupLens: Applying collaborative filtering to usenet news. In
Communications of ACM, 1997.
[14] M. Morita, and Y. Shinoda, Information filtering based on user
behavior analysis and best match text retrieval. In Proceedings of the
ACM Conference on Research and Development on Information
Retrieval (SIGIR), 1994
[15] D. Oard and J. Kim. Implicit feedback for recommender systems. in
Proceedings of AAAI Workshop on Recommender Systems. 1998
[16] D. Oard and J. Kim. Modeling information content using observable
behavior. In Proceedings of the 64th Annual Meeting of the
American Society for Information Science and Technology. 2001
[17] P. Pirolli, The Use of Proximal Information Scent to Forage for Distal
Content on the World Wide Web. In Working with Technology in
Mind: Brunswikian. Resources for Cognitive Science and
Engineering, Oxford University Press, 2004
[18] F. Radlinski and T. Joachims, Query Chains: Learning to Rank from
Implicit Feedback, in Proceedings of the ACM Conference on
Knowledge Discovery and Data Mining (KDD), ACM, 2005
[19] F. Radlinski and T. Joachims, Evaluating the Robustness of Learning
from Implicit Feedback, in the ICML Workshop on Learning in Web
Search, 2005
[20] G. Salton and M. McGill. Introduction to modern information
retrieval. McGraw-Hill, 1983
[21] E.M. Voorhees, D. Harman, Overview of TREC, 2001 | precision measure;relevance measurement;recall measure;predictive model;induce weight;clickthrough;top relevant document position;search abuse detection;predictive behavior model;low recall;information retrieval;predict relevance preference;position of top relevant document;explicit relevance judgment;user preference;interpret implicit relevance feedback;follow-up query;web search ranking;click spam detection;domain-specific ranking;page dwell time;implicit feedback;user behavior model;personalization |
train_H-87 | Robustness of Adaptive Filtering Methods In a Cross-benchmark Evaluation | This paper reports a cross-benchmark evaluation of regularized logistic regression (LR) and incremental Rocchio for adaptive filtering. Using four corpora from the Topic Detection and Tracking (TDT) forum and the Text Retrieval Conferences (TREC) we evaluated these methods with non-stationary topics at various granularity levels, and measured performance with different utility settings. We found that LR performs strongly and robustly in optimizing T11SU (a TREC utility function) while Rocchio is better for optimizing Ctrk (the TDT tracking cost), a high-recall oriented objective function. Using systematic cross-corpus parameter optimization with both methods, we obtained the best results ever reported on TDT5, TREC10 and TREC11. Relevance feedback on a small portion (0.05~0.2%) of the TDT5 test documents yielded significant performance improvements, measuring up to a 54% reduction in Ctrk and a 20.9% increase in T11SU (with β=0.1), compared to the results of the top-performing system in TDT2004 without relevance feedback information. | 1. INTRODUCTION
Adaptive filtering (AF) has been a challenging research topic in
information retrieval. The task is for the system to make an
online topic membership decision (yes or no) for every
document, as soon as it arrives, with respect to each pre-defined
topic of interest. Starting from 1997 in the Topic Detection and
Tracking (TDT) area and 1998 in the Text Retrieval
Conferences (TREC), benchmark evaluations have been
conducted by NIST under the following
conditions[6][7][8][3][4]:
• A very small number (1 to 4) of positive training examples
was provided for each topic at the starting point.
• Relevance feedback was available but only for the
system-accepted documents (with a yes decision) in the TREC
evaluations for AF.
• Relevance feedback (RF) was not allowed in the TDT
evaluations for AF (or topic tracking in the TDT
terminology) until 2004.
• TDT2004 was the first time that TREC and TDT metrics
were jointly used in evaluating AF methods on the same
benchmark (the TDT5 corpus) where non-stationary topics
dominate.
The above conditions attempt to mimic realistic situations where
an AF system would be used. That is, the user would be willing
to provide a few positive examples for each topic of interest at
the start, and might or might not be able to provide additional
labeling on a small portion of incoming documents through
relevance feedback. Furthermore, topics of interest might
change over time, with new topics appearing and growing, and
old topics shrinking and diminishing. These conditions make
adaptive filtering a difficult task in statistical learning (online
classification), for the following reasons:
1) it is difficult to learn accurate models for prediction based
on extremely sparse training data;
2) it is not obvious how to correct the sampling bias (i.e.,
relevance feedback on system-accepted documents only)
during the adaptation process;
3) it is not well understood how to effectively tune parameters
in AF methods using cross-corpus validation where the
validation and evaluation topics do not overlap, and the
documents may be from different sources or different
epochs.
None of these problems is addressed in the literature of
statistical learning for batch classification where all the training
data are given at once. The first two problems have been
studied in the adaptive filtering literature, including topic profile
adaptation using incremental Rocchio, Gaussian-Exponential
density models, logistic regression in a Bayesian framework,
etc., and threshold optimization strategies using probabilistic
calibration or local fitting techniques [1][2][9][10][11][12][13].
Although these works provide valuable insights for
understanding the problems and possible solutions, it is difficult
to draw conclusions regarding the effectiveness and robustness
of current methods because the third problem has not been
thoroughly investigated. Addressing the third issue is the main
focus in this paper.
We argue that robustness is an important measure for evaluating
and comparing AF methods. By robust we mean consistent
and strong performance across benchmark corpora with a
systematic method for parameter tuning across multiple corpora.
Most AF methods have pre-specified parameters that may
influence the performance significantly and that must be
determined before the test process starts. Available training
examples, on the other hand, are often insufficient for tuning the
parameters. In TDT5, for example, there is only one labeled
training example per topic at the start; parameter optimization
on such training data is doomed to be ineffective.
This leaves only one option (assuming tuning on the test set is
not an alternative), that is, choosing an external corpus as the
validation set. Notice that the validation-set topics often do not
overlap with the test-set topics, thus the parameter optimization
is performed under the tough condition that the validation data
and the test data may be quite different from each other. Now
the important question is: which methods (if any) are robust
under the condition of using cross-corpus validation to tune
parameters? Current literature does not offer an answer because
no thorough investigation on the robustness of AF methods has
been reported.
In this paper we address the above question by conducting a
cross-benchmark evaluation with two effective approaches in
AF: incremental Rocchio and regularized logistic regression
(LR). Rocchio-style classifiers have been popular in AF, with
good performance in benchmark evaluations (TREC and TDT)
if appropriate parameters were used and if combined with an
effective threshold calibration strategy [2][4][7][8][9][11][13].
Logistic regression is a classical method in statistical learning,
and one of the best in batch-mode text categorization [15][14]. It
was recently evaluated in adaptive filtering and was found to
have relatively strong performance (Section 5.1). Furthermore, a
recent paper [13] reported that the joint use of Rocchio and LR
in a Bayesian framework outperformed the results of using each
method alone on the TREC11 corpus. Stimulated by those
findings, we decided to include Rocchio and LR in our
cross-benchmark evaluation for robustness testing. Specifically, we
focus on how much the performance of these methods depends
on parameter tuning, what the most influential parameters are in
these methods, how difficult (or how easy) to optimize these
influential parameters using cross-corpus validation, how strong
these methods perform on multiple benchmarks with the
systematic tuning of parameters on other corpora, and how
efficient these methods are in running AF on large benchmark
corpora.
The organization of the paper is as follows: Section 2 introduces
the four benchmark corpora (TREC10 and TREC11, TDT3 and
TDT5) used in this study. Section 3 analyzes the differences
among the TREC and TDT metrics (utilities and tracking cost)
and the potential implications of those differences. Section 4
outlines the Rocchio and LR approaches to AF, respectively.
Section 5 reports the experiments and results. Section 6
concludes the main findings in this study.
2. BENCHMARK CORPORA
We used four benchmark corpora in our study. Table 1 shows
the statistics about these data sets.
TREC10 was the evaluation benchmark for adaptive filtering in
TREC 2001, consisting of roughly 806,791 Reuters news stories
from August 1996 to August 1997 with 84 topic labels (subject
categories)[7]. The first two weeks (August 20th
to 31st
, 1996) of
documents is the training set, and the remaining 11 & ½ months
(from September 1st
, 1996 to August 19th
, 1997) is the test set.
TREC11 was the evaluation benchmark for adaptive filtering in
TREC 2002, consisting of the same set of documents as those in
TREC10 but with a slightly different splitting point for the
training and test sets. The TREC11 topics (50) are quite
different from those in TREC10; they are queries for retrieval
with relevance judgments by NIST assessors [8].
TDT3 was the evaluation benchmark in the TDT2001 dry run (footnote 1).
The tracking part of the corpus consists of 71,388 news stories
from multiple sources in English and Mandarin (AP, NYT,
CNN, ABC, NBC, MSNBC, Xinhua, Zaobao, Voice of America
and PRI the World) in the period of October to December 1998.
Machine-translated versions of the non-English stories (Xinhua,
Zaobao and VOA Mandarin) are provided as well. The splitting
point for training-test sets is different for each topic in TDT.
TDT5 was the evaluation benchmark in TDT2004 [4]. The
tracking part of the corpus consists of 407,459 news stories in
the period of April to September, 2003 from 15 news agents or
broadcast sources in English, Arabic and Mandarin, with
machine-translated versions of the non-English stories. We only
used the English versions of those documents in our
experiments for this paper.
The TDT topics differ from TREC topics both conceptually
and statistically. Instead of generic, ever-lasting subject
categories (as those in TREC), TDT topics are defined at a finer
level of granularity, for events that happen at certain times and
locations, and that are born and die, typically associated
with a bursty distribution over chronologically ordered news
stories. The average size of TDT topics (events) is two orders
of magnitude smaller than that of the TREC10 topics. Figure 1
compares the document densities of a TREC topic (Civil
Wars) and two TDT topics (Gunshot and APEC Summit
Meeting, respectively) over a 3-month time period, where the
area under each curve is normalized to one.
The granularity differences among topics and the corresponding
non-stationary distributions make the cross-benchmark
evaluation interesting. For example, algorithms favoring large
and stable topics may not work well for short-lasting and
non-stationary topics, and vice versa. Cross-benchmark evaluations
allow us to test this hypothesis and possibly identify the
weaknesses in current approaches to adaptive filtering in
tracking the drifting trends of topics.
Footnote 1: http://www.ldc.upenn.edu/Projects/TDT2001/topics.html
Table 1: Statistics of benchmark corpora for adaptive filtering evaluations
N(tr) is the number of the initial training documents; N(ts) is the number of the test documents;
n+ is the number of positive examples of a predefined topic; * is an average over all the topics.
Figure 1: The temporal nature of topics. The plot shows P(topic|week) over weeks 0-14 for Gunshot (TDT5), APEC Summit Meeting (TDT3) and Civil War (TREC10).
3. METRICS
To make our results comparable to the literature, we decided to
use both TREC-conventional and TDT-conventional metrics in
our evaluation.
3.1 TREC11 metrics
Let A, B, C and D be, respectively, the numbers of true
positives, false alarms, misses and true negatives for a specific
topic, and N = A + B + C + D be the total number of test
documents. The TREC-conventional metrics are defined as:
Precision = A/(A + B),   Recall = A/(A + C)
Fβ = (1 + β²)·A / (β²·(A + C) + A + B)
T11SU = ( max{ (A − β·B)/(A + C), η } − η ) / (1 − η)
where parameters β and η were set to 0.5 and -0.5 respectively
in TREC10 (2001) and TREC11 (2002). For evaluating the
performance of a system, the performance scores are computed
for individual topics first and then averaged over topics
(macroaveraging).
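For illustration, the per-topic TREC-style scores can be computed from the contingency counts as in the following minimal Python sketch; this is not part of the evaluated systems, and the function name and default parameter values (β = 0.5, η = −0.5, matching the TREC10/TREC11 settings above) are our own choices.

def trec_metrics(A, B, C, D, beta=0.5, eta=-0.5):
    """TREC-style filtering metrics from one topic's contingency counts:
    A = true positives, B = false alarms, C = misses, D = true negatives."""
    precision = A / (A + B) if A + B > 0 else 0.0
    recall = A / (A + C) if A + C > 0 else 0.0
    f_beta = ((1 + beta**2) * A / (beta**2 * (A + C) + A + B)
              if A + B + C > 0 else 0.0)
    # Normalized linear utility (A - beta*B)/(A + C), clipped at eta, rescaled to [0, 1].
    t11su = ((max((A - beta * B) / (A + C), eta) - eta) / (1 - eta)
             if A + C > 0 else 0.0)
    return {"P": precision, "R": recall, "F": f_beta, "T11SU": t11su}

# Macro-averaging: compute these per topic, then average the per-topic scores.
print(trec_metrics(A=40, B=10, C=20, D=930))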
3.2 TDT metrics
The TDT-conventional metric for topic tracking is defined as:
Ctrk(T) = w1 · P(T) · Pmiss + w2 · (1 − P(T)) · Pfa
where P(T) is the percentage of documents on topic T, Pmiss is the miss rate by the system on that topic, Pfa is the false alarm rate, and w1 and w2 are the costs (pre-specified constants) for a miss and a false alarm, respectively. The TDT benchmark evaluations (since 1997) have used the settings of w1 = 1, w2 = 0.1 and P(T) = 0.02 for all topics. For evaluating the performance of a system, Ctrk is computed for each topic first and then the resulting scores are averaged for a single measure (the topic-weighted Ctrk).
To make the intuition behind this measure transparent, we substitute the terms in the definition of Ctrk as follows:
P(T) = (A + C)/N,   1 − P(T) = (B + D)/N,   Pmiss = C/(A + C),   Pfa = B/(B + D)
Ctrk(T) = w1 · ((A + C)/N) · (C/(A + C)) + w2 · ((B + D)/N) · (B/(B + D)) = (1/N) · (w1·C + w2·B)
Clearly, Ctrk is the average cost per error on topic T, with w1 and w2 controlling the penalty ratio for misses vs. false alarms. In addition to Ctrk, TDT2004 also employed T11SU with β = 0.1 as a utility metric. To distinguish this from the T11SU with β = 0.5 in TREC11, we call the former TDT5SU in the rest of this paper.
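A corresponding sketch for the TDT tracking cost is given below; the function name is ours, and the fixed prior p_topic = 0.02 is passed explicitly so that the effect discussed in Section 3.3 can be reproduced.

def ctrk(A, B, C, D, w_miss=1.0, w_fa=0.1, p_topic=0.02):
    """TDT tracking cost for one topic, from contingency counts.
    p_topic is the benchmark's fixed prior P(T), not the empirical
    on-topic rate of the test set."""
    p_miss = C / (A + C) if A + C > 0 else 0.0
    p_fa = B / (B + D) if B + D > 0 else 0.0
    return w_miss * p_topic * p_miss + w_fa * (1.0 - p_topic) * p_fa

# Topic-weighted Ctrk: compute ctrk() per topic and average the resulting scores.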
Corpus   #Topics   N(tr)      N(ts)      Avg n+(tr)   Avg n+(ts)   Max n+(ts)   Min n+(ts)   #Topics per doc (ts)
TREC10   84        20,307     783,484    2            9795.3       39,448       38           1.57
TREC11   50        80,664     726,419    3            378.0        597          198          1.12
TDT3     53        18,738*    37,770*    4            79.3         520          1            1.06
TDT5     111       199,419*   207,991*   1            71.3         710          1            1.01
3.3 The correlations and the differences
From an optimization point of view, TDT5SU and T11SU are
both utility functions while Ctrk is a cost function. Our objective
is to maximize the former or to minimize the latter on test
documents. The differences and correlations among these
objective functions can be analyzed through the shared counts
of A, B, C and D in their definitions. For example, both
TDT5SU and T11SU are positively correlated to the values of A
and D, and negatively correlated to the values of B and C; the
only difference between them is in their penalty ratios for
misses vs. false alarms, i.e., 10:1 in TDT5SU and 2:1 in T11SU.
The Ctrk function, on the other hand, is positively correlated to
the values of C and B, and negatively correlated to the values of
A and D; hence, it is negatively correlated to T11SU and
TDT5SU.
More importantly, there is a subtle and major difference
between Ctrk and the utility functions: T11SU and TDT5SU.
That is, Ctrk has a very different penalty ratio for misses vs.
false alarms: it favors recall-oriented systems to an extreme. At
first glance, one would think that the penalty ratio in Ctrk is
10:1 since w1 = 1 and w2 = 0.1. However, this is not true if P(T) = 0.02 is an inaccurate estimate of the on-topic documents on average for the test corpus. Using TDT3 as an example, the true percentage is:
P(T) = n+/N = 79.3/37,770 ≈ 0.002
where N is the average size of the test sets in TDT3, and n+ is
the average number of positive examples per topic in the test
sets. Using 02.0)(ˆ =TP as an (inaccurate) estimate of 0.002
enlarges the intended penalty ratio of 10:1 to 100:1, roughly
speaking. To wit:
Ctrk(T) = w1 × P̂(T) × Pmiss + w2 × (1 − P̂(T)) × Pfa
= 1 × 0.02 × C/(A + C) + 0.1 × (1 − 0.02) × B/(B + D)
≈ 1 × ρ × ((A + C)/N) × C/(A + C) + 0.1 × ((B + D)/N) × B/(B + D)
= (1/N) × (10 × C + 0.1 × B)
where ρ = P̂(T)/P(T) = 0.02/0.002 = 10 is the factor of enlargement in the
estimation of P(T) compared to the truth. Comparing the above
result to formula 2, we can see the actual penalty ratio for
misses vs. false alarms was 100:1 in the evaluations on TDT3
using Ctrk. Similarly, we can compute the enlargement factor
for TDT5 using the statistics in Table 1 as follows:
ρ = P̂(T)/P(T) = 0.02/(71.3/207,991) ≈ 58.3
which means the actual penalty ratio for misses vs. false alarms
in the evaluation on TDT5 using Ctrk was approximately 583:1.
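These enlargement factors and the resulting effective penalty ratios can be checked with a few lines of arithmetic; the corpus statistics below are taken from Table 1 and the variable names are ours.

# Effective penalty ratio = (w1 * rho) / w2, where rho = P_hat(T) / P(T).
w1, w2, p_hat = 1.0, 0.1, 0.02

rho_tdt3 = p_hat / (79.3 / 37770)     # roughly 10
rho_tdt5 = p_hat / (71.3 / 207991)    # roughly 58.3

print(w1 * rho_tdt3 / w2)   # about 100  -> effective 100:1 on TDT3
print(w1 * rho_tdt5 / w2)   # about 583  -> effective 583:1 on TDT5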
The implications of the above analysis are rather significant:
• Ctrk defined in the same formula does not necessarily
mean the same objective function in evaluation; instead,
the optimization criterion depends on the test corpus.
• Systems optimized for Ctrk would not optimize TDT5SU
(and T11SU) because the former favors high-recall
oriented to an extreme while the latter does not.
• Parameters tuned on one corpus (e.g., TDT3) might not
work for an evaluation on another corpus (say, TDT5)
unless we account for the previously-unknown subtle
dependency of Ctrk on data.
• Results in Ctrk in the past years of TDT evaluations may
not be directly comparable to each other because the
evaluation collections changed most years and hence the
penalty ratio in Ctrk varied.
Although these problems with Ctrk were not originally
anticipated, it offered an opportunity to examine the ability of
systems in trading off precision for extreme recall. This was a
challenging part of the TDT2004 evaluation for AF.
Comparing the metrics in TDT and TREC from a utility or cost
optimization point of view is important for understanding the
evaluation results of adaptive filtering methods. This is the first
time this issue is explicitly analyzed, to our knowledge.
4. METHODS
4.1 Incremental Rocchio for AF
We employed a common version of Rocchio-style classifiers
which computes a prototype vector per topic (T) as follows:
p(T) = α·q(T) + β·(1/|D+(T)|)·Σ_{d'∈D+(T)} d' − γ·(1/|D−(T)|)·Σ_{d'∈D−(T)} d'
The first term on the RHS is the weighted vector representation of the topic description, whose elements are term weights. The second term is the weighted centroid of the set D+(T) of positive training examples, each of which is a vector of within-document term weights. The third term is the weighted centroid of the set D−(T) of negative training examples, which are the nearest neighbors of the positive centroid. The three terms are given pre-specified weights of α, β and γ, controlling the relative influence of these components in the prototype.
The prototype of a topic is updated each time the system makes a yes decision on a new document for that topic. If relevance feedback is available (as is the case in TREC adaptive filtering), the new document is added to the pool of either D+(T) or D−(T), and the prototype is recomputed accordingly; if relevance feedback is not available (as is the case in TDT event tracking), the system's prediction (yes) is treated as the truth, and the new document is added to D+(T) for updating the prototype. Both cases are part of our experiments in this paper (and part of the TDT 2004 evaluations for AF). To distinguish the two, we call the first case simply Rocchio and the second case PRF Rocchio, where PRF stands for pseudo-relevance feedback.
The predictions on a new document are made by computing the
cosine similarity between each topic prototype and the
document vector, and then comparing the resulting scores
against a threshold:
sign( cos(p(T), d_new) − θ ) = +1 (yes) or −1 (no)
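The prototype computation and thresholded cosine decision described above can be sketched with dense numpy vectors as follows; this is illustrative only (our own function names), not the implementation used in our experiments.

import numpy as np

def rocchio_prototype(q, pos_docs, neg_docs, alpha, beta, gamma):
    """p(T) = alpha*q + beta*centroid(D+) - gamma*centroid(D-),
    with q and each document given as dense term-weight vectors."""
    p = alpha * q
    if len(pos_docs) > 0:
        p = p + beta * np.mean(pos_docs, axis=0)
    if len(neg_docs) > 0:
        p = p - gamma * np.mean(neg_docs, axis=0)
    return p

def decide(prototype, doc, theta):
    """Return True ('yes') iff cos(prototype, doc) exceeds the threshold."""
    denom = np.linalg.norm(prototype) * np.linalg.norm(doc)
    cosine = float(prototype @ doc) / denom if denom > 0 else 0.0
    return cosine >= theta

In the adaptive setting, a document accepted with a yes decision is appended to D+(T) or D−(T) according to the feedback (or to D+(T) under PRF), and the prototype is recomputed.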
Threshold calibration in incremental Rocchio is a challenging
research topic. Multiple approaches have been developed. The
simplest is to use a universal threshold for all topics, tuned on a
validation set and fixed during the testing phase. More elaborate
methods include probabilistic threshold calibration which
converts the non-probabilistic similarity scores to probabilities
(i.e., P(T|d)) for utility optimization [9][13], and margin-based
local regression for risk reduction [11].
It is beyond the scope of this paper to compare all the different
ways to adapt Rocchio-style methods for AF. Instead, our focus
here is to investigate the robustness of Rocchio-style methods in
terms of how much their performance depends on elaborate
system tuning, and how difficult (or how easy) it is to get good
performance through cross-corpus parameter optimization.
Hence, we decided to use a relatively simple version of Rocchio
as the baseline, i.e., with a universal threshold tuned on a
validation corpus and fixed for all topics in the testing phase.
This simple version of Rocchio has been commonly used in the
past TDT benchmark evaluations for topic tracking, and had
strong performance in the TDT2004 evaluations for adaptive
filtering with and without relevance feedback (Section 5.1).
Results of more complex variants of Rocchio are also discussed
when relevant.
4.2 Logistic Regression for AF
Logistic regression (LR) estimates the posterior probability of a
topic given a document using a sigmoid function
P(y = 1 | x, w) = 1 / (1 + e^(−w·x))
where x is the document vector whose elements are term weights, w is the vector of regression coefficients, and y ∈ {+1, −1} is the output variable corresponding to yes or no with respect to a particular topic. Given a training set of labeled documents D = {(x1, y1), ..., (xn, yn)}, the standard regression problem is defined as finding the maximum likelihood estimates of the regression coefficients (the model parameters):
w_ml = argmax_w { P(D | w) } = argmax_w { log P(D | w) } = argmin_w { Σ_{i=1..n} log(1 + exp(−y_i · w·x_i)) }
This is a convex optimization problem which can be solved
using a standard conjugate gradient algorithm in O(I·N·F) time
for training per topic, where I is the average number of
iterations needed for convergence, and N and F are the number
of training documents and number of features respectively [14].
Once the regression coefficients are optimized on the training
data, the filtering prediction on each incoming document is
made as:
sign( P(y | x_new, w) − θ_opt ) = +1 (yes) or −1 (no)
Note that w is constantly updated whenever a new relevance judgment is available in the testing phase of AF, while the optimal threshold θ_opt is constant, depending only on the predefined utility (or cost) function for evaluation. If T11SU is the metric, for example, with the penalty ratio of 2:1 for misses and false alarms (Section 3.1), the optimal threshold for LR is 1/(2+1) = 0.33 for all topics.
We modified the standard (above) version of LR to allow more
flexible optimization criteria as follows:
w_map = argmin_w { Σ_{i=1..n} s(y_i) · log(1 + exp(−y_i · w·x_i)) + λ · ||w − μ||² }
where s(y_i) is taken to be α, β and γ for query, positive and negative documents respectively, which are similar to those in Rocchio, giving different weights to the three kinds of training examples: topic descriptions (queries), on-topic documents and off-topic documents. The second term in the objective function is for regularization, equivalent to adding a Gaussian prior to the regression coefficients with mean μ and covariance matrix (1/2λ)·I, where I is the identity matrix. Tuning λ (≥ 0) is theoretically justified for reducing model complexity (the effective degree of freedom) and avoiding over-fitting on training data [5]. How to find an effective μ is an open issue for research, depending on the user's belief about the parameter space and the optimal range. The solution of the modified objective function is called the Maximum A Posteriori (MAP) estimate, which reduces to the maximum likelihood solution for standard LR if λ = 0.
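As a concrete illustration of the modified objective, the following numpy sketch minimizes the weighted, regularized logistic loss by plain gradient descent; the function names and the simple learning-rate schedule are ours, and a conjugate-gradient or second-order solver would be used in practice.

import numpy as np

def fit_map_lr(X, y, s, lam=0.01, mu=None, lr=0.1, iters=200):
    """Minimize sum_i s_i*log(1+exp(-y_i w.x_i)) + lam*||w - mu||^2.
    X: (n, d) term-weight matrix; y: labels in {+1, -1}; s: per-example weights."""
    n, d = X.shape
    mu = np.zeros(d) if mu is None else mu
    w = mu.copy()
    for _ in range(iters):
        margins = np.clip(y * (X @ w), -30.0, 30.0)   # y_i * (w . x_i)
        sig = 1.0 / (1.0 + np.exp(margins))           # sigmoid(-margin)
        grad = -(X.T @ (s * y * sig)) + 2.0 * lam * (w - mu)
        w -= lr * grad
    return w

def predict_yes(w, x, theta):
    """Filtering decision: accept iff P(y=1|x,w) = sigmoid(w.x) >= theta
    (e.g., theta = 1/(2+1) = 0.33 when optimizing T11SU)."""
    return 1.0 / (1.0 + np.exp(-(w @ x))) >= theta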
5. EVALUATIONS
We report our empirical findings in four parts: the TDT2004
official evaluation results, the cross-corpus parameter optimization results, the results corresponding to the amounts of relevance feedback, and a summary of the adaptation process.
5.1 TDT2004 benchmark results
The TDT2004 evaluations for adaptive filtering were conducted
by NIST in November 2004. Multiple research teams
participated and multiple runs from each team were allowed.
Ctrk and TDT5SU were used as the metrics. Figure 2 and Figure
3 show the results; the best run from each team was selected
with respect to Ctrk or TDT5SU, respectively. Our Rocchio
(with adaptive profiles but fixed universal threshold for all
topics) had the best result in Ctrk, and our logistic regression
had the best result in TDT5SU. All the parameters of our runs
were tuned on the TDT3 corpus. Results for other sites are also
listed anonymously for comparison.
Figure 2: TDT2004 results in Ctrk of systems using true relevance feedback (the lower the better): Ours 0.0324, Site2 0.0467, Site3 0.1366, Site4 0.2438. (Ours is the Rocchio method.) We also put the 1st and 3rd quartiles as sticks for each site (footnote 2).
Figure 3: TDT2004 results in TDT5SU of systems using true relevance feedback (the higher the better): Ours 0.7328, Site3 0.7281, Site2 0.6672, Site4 0.382. (Ours is LR with μ = 0 and λ = 0.005.)
Figure 4: TDT2004 results in Ctrk of systems without using true relevance feedback (primary topic tracking results; the lower the better): Ours 0.0707, Site2 0.1545, Site5 0.5669, Site4 0.6507, Site6 0.8973. (Ours is PRF Rocchio.)
Adaptive filtering without using true relevance feedback was
also a part of the evaluations. In this case, systems had only one
labeled training example per topic during the entire training and
testing processes, although unlabeled test documents could be
used as soon as predictions on them were made. Such a setting
has been conventional for the Topic Tracking task in TDT until
2004. Figure 4 shows the summarized official submissions from
each team. Our PRF Rocchio (with a fixed threshold for all the
topics) had the best performance.
Footnote 2: We use quartiles rather than standard deviations since the former is more resistant to outliers.
5.2 Cross-corpus parameter optimization
How much the strong performance of our systems depends on
parameter tuning is an important question.
Both Rocchio and LR have parameters that must be
pre-specified before the AF process. The shared parameters include
the sample weights α, β and γ, the sample size of the negative training documents (i.e., D−(T)), the term-weighting scheme,
and the maximal number of non-zero elements in each
document vector. The method-specific parameters include the
decision threshold in Rocchio, and μ, λ and MI (the maximum
number of iterations in training) in LR. Given that we only have
one labeled example per topic in the TDT5 training sets, it is
impossible to effectively optimize these parameters on the
training data, and we had to choose an external corpus for
validation. Among the choices of TREC10, TREC11 and TDT3,
we chose TDT3 (c.f. Section 2) because it is most similar to
TDT5 in terms of the nature of the topics (Section 2). We
optimized the parameters of our systems on TDT3, and fixed
those parameters in the runs on TDT5 for our submissions to
TDT2004. We also tested our methods on TREC10 and
TREC11 for further analysis. Since exhaustive testing of all
possible parameter settings is computationally intractable, we
followed a step-wise forward chaining procedure instead: we
pre-specified an order of the parameters in a method (Rocchio
or LR), and then tuned one parameter at the time while fixing
the settings of the remaining parameters. We repeated this
procedure for several passes as time allowed.
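The step-wise forward chaining procedure can be summarized with the following sketch (our own Python rendering); evaluate_on_validation is a hypothetical helper standing for a full AF run on the validation corpus with a given parameter setting.

def stepwise_tuning(param_grid, defaults, evaluate_on_validation, passes=2):
    """Tune one parameter at a time, in a fixed order, holding the others fixed.
    param_grid: ordered dict {name: [candidate values]}; defaults: starting setting."""
    best = dict(defaults)
    for _ in range(passes):
        for name, candidates in param_grid.items():
            scores = {}
            for value in candidates:
                setting = dict(best, **{name: value})
                scores[value] = evaluate_on_validation(setting)
            best[name] = max(scores, key=scores.get)   # keep the best value found
    return best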
Figure 5: Performance curves of adaptive Rocchio (TDT5SU as a function of the decision threshold, from 0.02 to 0.3) on TDT3, TDT5, TREC10 and TREC11.
Figure 5 compares the performance curves in TDT5SU for
Rocchio on TDT3, TDT5, TREC10 and TREC11 when the
decision threshold varied. These curves peak at different
locations: the TDT3-optimal is closest to the TDT5-optimal
while the TREC10-optimal and TREC11-optimal are quite far
away from the TDT5-optimal. If we were using TREC10 or
TREC11 instead of TDT3 as the validation corpus for TDT5, or
if the TDT3 corpus were not available, we would have difficulty
in obtaining strong performance for Rocchio in TDT2004. The
difficulty comes from the ad-hoc (non-probabilistic) scores
generated by the Rocchio method: the distribution of the scores
depends on the corpus, making cross-corpus threshold
optimization a tricky problem.
Logistic regression has less difficulty with respect to threshold
tuning because it produces probabilistic scores of Pr(y = 1 | x)
upon which the optimal threshold can be directly computed if
probability estimation is accurate. Given the penalty ratio for
misses vs. false alarms as 2:1 in T11SU, 10:1 in TDT5SU and
583:1 in Ctrk (Section 3.3), the corresponding optimal
thresholds (t) are 0.33, 0.091 and 0.0017 respectively.
Although the theoretical threshold could be inaccurate, it still
suggests the range of near-optimal settings. With these threshold
settings in our experiments for LR, we focused on the
cross-corpus validation of the Bayesian prior parameters, that is, μ and λ. Table 2 summarizes the results (footnote 3). We measured the
performance of the runs on TREC10 and TREC11 using T11SU,
and the performance of the runs on TDT3 and TDT5 using
TDT5SU. For comparison we also include the best results of
Rocchio-based methods on these corpora, which are our own
results of Rocchio on TDT3 and TDT5, and the best results
reported by NIST for TREC10 and TREC11. From this set of
results, we see that LR significantly outperformed Rocchio on
all the corpora, even in the runs of standard LR without any
tuning, i.e. λ=0. This empirical finding is consistent with a
previous report [13] for LR on TREC11 although our results of
LR (0.585~0.608 in T11SU) are stronger than the results (0.49
for standard LR and 0.54 for LR using Rocchio prototype as the
prior) in that report. More importantly, our cross-benchmark
evaluation gives strong evidence for the robustness of LR. The
robustness, we believe, comes from the probabilistic nature of
the system-generated scores. That is, compared to the ad-hoc
scores in Rocchio, the normalized posterior probabilities make
the threshold optimization in LR a much easier problem.
Moreover, logistic regression is known to converge towards the
Bayes classifier asymptotically while Rocchio classifiers'
parameters do not.
Another interesting observation in these results is that the
performance of LR did not improve when using a Rocchio
prototype as the mean in the prior; instead, the performance
decreased in some cases. This observation does not support the
previous report by [13], but we are not surprised because we are
not convinced that Rocchio prototypes are more accurate than
LR models for topics in the early stage of the AF process, and
we believe that using a Rocchio prototype as the mean in the
Gaussian prior would introduce undesirable bias to LR. We also
believe that variance reduction (in the testing phase) should be
controlled by the choice of λ (but not μ
r
), for which we
conducted the experiments as shown in Figure 6.
Table 2: Results of LR with different Bayesian priors
Corpus               TDT3     TDT5     TREC10              TREC11
LR(μ=0, λ=0)         0.7562   0.7737   0.585               0.5715
LR(μ=0, λ=0.01)      0.8384   0.7812   0.6077              0.5747
LR(μ=roc*, λ=0.01)   0.8138   0.7811   0.5803              0.5698
Best Rocchio         0.6628   0.6917   0.496 (footnote 4)  0.475
Footnote 3: The LR results (0.77~0.78) on TDT5 in this table are better than our TDT2004 official result (0.73) because parameter optimization has been improved afterwards.
Footnote 4: The TREC10-best result (0.496 by Oracle) is only available in T10U, which is not directly comparable to the scores in T11SU, just indicative.
*: μ was set to the Rocchio prototype.
Figure 6: LR with varying lambda (performance as a function of λ from 0.000 to 0.500): Ctrk on TDT3, TDT5SU on TDT3, TDT5SU on TDT5, and T11SU on TREC11.
The performance of LR is summarized with respect to λ tuning
on the corpora of TREC10, TREC11 and TDT3. The
performance on each corpus was measured using the
corresponding metrics, that is, T11SU for the runs on TREC10
and TREC11, and TDT5SU and Ctrk for the runs on TDT3 and TDT5. In
the case of maximizing the utilities, the safe interval for λ is
between 0 and 0.01, meaning that the performance of
regularized LR is stable, the same as or improved slightly over
the performance of standard LR. In the case of minimizing Ctrk,
the safe range for λ is between 0 and 0.1, and setting λ between
0.005 and 0.05 yielded relatively large improvements over the
performance of standard LR because training a model for
extremely high recall is statistically more tricky, and hence
more regularization is needed. In either case, tuning λ is
relatively safe, and easy to do successfully by cross-corpus
tuning.
Another influential choice in our experiment settings is term
weighting: we examined the choices of binary, TF and TF-IDF
(the ltc version) schemes. We found TF-IDF most effective
for both Rocchio and LR, and used this setting in all our
experiments.
5.3 Percentages of labeled data
How much relevance feedback (RF) would be needed during the
AF process is a meaningful question in real-world applications.
To answer it, we evaluated Rocchio and LR on TDT5 with the
following settings:
• Basic Rocchio, no adaptation at all
• PRF Rocchio, updating topic profiles without using true
relevance feedback;
• Adaptive Rocchio, updating topic profiles using relevance
feedback on system-accepted documents plus 10
documents randomly sampled from the pool of
system-rejected documents;
• LR with μ = 0, λ = 0.01 and threshold = 0.004;
• All the parameters in Rocchio tuned on TDT3.
Table 3 summarizes the results in Ctrk: Adaptive Rocchio with
relevance feedback on 0.6% of the test documents reduced the
tracking cost by 54% over the result of the PRF Rocchio, the
best system in the TDT2004 evaluation for topic tracking
without relevance feedback information. Incremental LR, on the
other hand, was weaker but still impressive. Recall that Ctrk is
an extremely high-recall oriented metric, causing frequent
updating of profiles and hence an efficiency problem in LR. For
this reason we set a higher threshold (0.004) instead of the
theoretically optimal threshold (0.0017) in LR to avoid an
intolerable computation cost. The computation time in
machine-hours was 0.33 for the run of adaptive Rocchio and 14
for the run of LR on TDT5 when optimizing Ctrk. Table 4
summarizes the results in TDT5SU; adaptive LR was the winner
in this case, with relevance feedback on 0.05% of the test
documents improving the utility by 20.9% over the results of
PRF Rocchio.
Table 3: AF methods on TDT5 (Performance in Ctrk)
          Base Roc   PRF Roc      Adp Roc   LR
% of RF   0%         0%           0.6%      0.2%
Ctrk      0.076      0.0707       0.0324    0.0382
±%        +7%        (baseline)   -54%      -46%
Table 4: AF methods on TDT5 (Performance in TDT5SU)
          Base Roc   PRF Roc      Adp Roc   LR(λ=.01)
% of RF   0%         0%           0.04%     0.05%
TDT5SU    0.57       0.6452       0.69      0.78
±%        -11.7%     (baseline)   +6.9%     +20.9%
Evidently, both Rocchio and LR are highly effective in adaptive
filtering, in terms of using a small amount of labeled data to
significantly improve the model accuracy in statistical learning,
which is the main goal of AF.
5.4 Summary of Adaptation Process
After deciding the parameter settings using validation, we performed adaptive filtering in the following steps for each topic: 1) train the LR/Rocchio model using the provided positive training examples and 30 randomly sampled negative examples; 2) for each document in the test corpus, first make a prediction about relevance, and then obtain relevance feedback for the (predicted) positive documents; 3) incrementally update the model and the IDF statistics whenever true relevance feedback is obtained.
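A minimal sketch of this adaptation loop is given below; the structure and method names (predict_yes, add_feedback, update_idf, get_relevance_judgment) are hypothetical placeholders for either the Rocchio or the LR component of Section 4, not our actual code.

def run_adaptive_filtering(model, init_pos, test_stream, sample_negatives):
    """Online loop for one topic: train, predict per document, adapt on feedback."""
    model.train(positives=init_pos, negatives=sample_negatives(30))
    accepted = []
    for doc in test_stream:                        # documents arrive chronologically
        model.update_idf(doc)                      # incremental corpus statistics
        if model.predict_yes(doc):                 # yes/no decision
            accepted.append(doc)
            label = doc.get_relevance_judgment()   # feedback only on accepted docs
            if label is not None:
                model.add_feedback(doc, label)     # incremental model update
    return accepted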
6. CONCLUDING REMARKS
We presented a cross-benchmark evaluation of incremental
Rocchio and incremental LR in adaptive filtering, focusing on
their robustness in terms of performance consistency with
respect to cross-corpus parameter optimization. Our main
conclusions from this study are the following:
• Parameter optimization in AF is an open challenge but has
not been thoroughly studied in the past.
• Robustness in cross-corpus parameter tuning is important
for evaluation and method comparison.
• We found LR more robust than Rocchio; it had the best
results (in T11SU) ever reported on TDT5, TREC10 and
TREC11 without extensive tuning.
• We found Rocchio performs strongly when a good
validation corpus is available, and a preferred choice when
optimizing Ctrk is the objective, favoring recall over
precision to an extreme.
For future research we want to study explicit modeling of the
temporal trends in topic distributions and content drifting.
Acknowledgments
This material is based upon work supported in parts by the
National Science Foundation (NSF) under grant IIS-0434035,
by the DoD under award 114008-N66001992891808 and by the
Defense Advanced Research Project Agency (DARPA) under
Contract No. NBCHD030010. Any opinions, findings and
conclusions or recommendations expressed in this material are
those of the author(s) and do not necessarily reflect the views of
the sponsors.
7. REFERENCES
[1] J. Allan. Incremental relevance feedback for information
filtering. In SIGIR-96, 1996.
[2] J. Callan. Learning while filtering documents. In SIGIR-98,
224-231, 1998.
[3] J. Fiscus and G. Duddington. Topic detection and tracking
overview. In Topic detection and tracking: event-based
information organization, 17-31, 2002.
[4] J. Fiscus and B. Wheatley. Overview of the TDT 2004
Evaluation and Results. In TDT-04, 2004.
[5] T. Hastie, R. Tibshirani and J. Friedman. Elements of
Statistical Learning. Springer, 2001.
[6] S. Robertson and D. Hull. The TREC-9 filtering track final
report. In TREC-9, 2000.
[7] S. Robertson and I. Soboroff. The TREC-10 filtering track
final report. In TREC-10, 2001.
[8] S. Robertson and I. Soboroff. The TREC 2002 filtering
track report. In TREC-11, 2002.
[9] S. Robertson and S. Walker. Microsoft Cambridge at
TREC-9. In TREC-9, 2000.
[10] R. Schapire, Y. Singer and A. Singhal. Boosting and
Rocchio applied to text filtering. In SIGIR-98, 215-223,
1998.
[11] Y. Yang and B. Kisiel. Margin-based local regression for
adaptive filtering. In CIKM-03, 2003.
[12] Y. Zhang and J. Callan. Maximum likelihood estimation
for filtering thresholds. In SIGIR-01, 2001.
[13] Y. Zhang. Using Bayesian priors to combine classifiers for
adaptive filtering. In SIGIR-04, 2004.
[14] J. Zhang and Y. Yang. Robustness of regularized linear
classification methods in text categorization. In SIGIR-03:
190-197, 2003.
[15] T. Zhang, F. J. Oles. Text Categorization Based on
Regularized Linear Classification Methods. Inf. Retr. 4(1):
5-31 (2001). | logistic regression;adaptive filter;gaussian;topic track;validation set;cost function;topic detection;statistical learning;posterior probability;sigmoid function;information retrieval;relevance feedback;optimization criterion;cross-benchmark evaluation;topic tracking;bias;systematic method for parameter tuning across multiple corpora;rocchio-style method;rocchio;cross-corpus parameter optimization;penalty ratio;robustness;granularity difference;adaptive filtering;external corpus;utility function;probabilistic threshold calibration;regularization;lr |
train_H-88 | Controlling Overlap in Content-Oriented XML Retrieval | The direct application of standard ranking techniques to retrieve individual elements from a collection of XML documents often produces a result set in which the top ranks are dominated by a large number of elements taken from a small number of highly relevant documents. This paper presents and evaluates an algorithm that re-ranks this result set, with the aim of minimizing redundant content while preserving the benefits of element retrieval, including the benefit of identifying topic-focused components contained within relevant documents. The test collection developed by the INitiative for the Evaluation of XML Retrieval (INEX) forms the basis for the evaluation. | 1. INTRODUCTION
The representation of documents in XML provides an
opportunity for information retrieval systems to take
advantage of document structure, returning individual document
components when appropriate, rather than complete
documents in all circumstances. In response to a user query, an
XML information retrieval system might return a mixture
of paragraphs, sections, articles, bibliographic entries and
other components. This facility is of particular benefit when
a collection contains very long documents, such as product
manuals or books, where the user should be directed to the
most relevant portions of these documents.
<article>
<fm>
<atl>Text Compression for
Dynamic Document Databases</atl>
<au>Alistair Moffat</au>
<au>Justin Zobel</au>
<au>Neil Sharman</au>
<abs><p><b>Abstract</b> For ...</p></abs>
</fm>
<bdy>
<sec><st>INTRODUCTION</st>
<ip1>Modern document databases...</ip1>
<p>There are good reasons to compress...</p>
</sec>
<sec><st>REDUCING MEMORY REQUIREMENTS</st>...
<ss1><st>2.1 Method A</st>...
</sec>
...
</bdy>
</article>
Figure 1: A journal article encoded in XML.
Figure 1 provides an example of a journal article encoded
in XML, illustrating many of the important characteristics
of XML documents. Tags indicate the beginning and end of
each element, with elements varying widely in size, from one
word to thousands of words. Some elements, such as
paragraphs and sections, may be reasonably presented to the user
as retrieval results, but others are not appropriate. Elements
overlap each other - articles contain sections, sections
contain subsections, and subsections contain paragraphs. Each
of these characteristics affects the design of an XML IR
system, and each leads to fundamental problems that must be
solved in a successful system. Most of these fundamental
problems can be solved through the careful adaptation of
standard IR techniques, but the problems caused by overlap
are unique to this area [4,11] and form the primary focus of
this paper.
The article of figure 1 may be viewed as an XML tree,
as illustrated in figure 2. Formally, a collection of XML
documents may be represented as a forest of ordered, rooted
trees, consisting of a set of nodes N and a set of directed
edges E connecting these nodes. For each node x ∈ N , the
notation x.parent refers to the parent node of x, if one exists,
and the notation x.children refers to the set of child nodes
Figure 2: Example XML tree (nodes include article, fm, bdy, atl, au, abs, sec, st, ss1, ip1, p, b).
of x. Since an element may be represented by the node at
its root, the output of an XML IR system may be viewed as
a ranked list of the top-m nodes.
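For concreteness, a node in such a forest can be represented with a small structure like the following Python sketch; the field names are ours, not the paper's.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class XMLNode:
    tag: str                                     # e.g. "article", "sec", "p"
    parent: Optional["XMLNode"] = None           # x.parent; None for a root
    children: List["XMLNode"] = field(default_factory=list)   # x.children
    score: float = 0.0                           # retrieval score assigned later

    def add_child(self, child: "XMLNode") -> None:
        child.parent = self
        self.children.append(child)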
The direct application of a standard relevance ranking
technique to a set of XML elements can produce a result
in which the top ranks are dominated by many structurally
related elements. A high scoring section is likely to contain
several high scoring paragraphs and to be contained in a
high scoring article. For example, many of the elements in
figure 2 would receive a high score on the keyword query
text index compression algorithms. If each of these
elements are presented to a user as an individual and
separate result, she may waste considerable time reviewing and
rejecting redundant content.
One possible solution is to report only the highest
scoring element along a given path in the tree, and to remove
from the lower ranks any element containing it, or contained
within it. Unfortunately, this approach destroys some of the
possible benefits of XML IR. For example, an outer element
may contain a substantial amount of information that does
not appear in an inner element, but the inner element may
be heavily focused on the query topic and provide a short
overview of the key concepts. In such cases, it is reasonable
to report elements which contain, or are contained in, higher
ranking elements. Even when an entire book is relevant, a
user may still wish to have the most important paragraphs
highlighted, to guide her reading and to save time [6].
This paper presents a method for controlling overlap.
Starting with an initial element ranking, a re-ranking algorithm
adjusts the scores of lower ranking elements that contain, or
are contained within, higher ranking elements, reflecting the
fact that this information may now be redundant. For
example, once an element representing a section appears in the
ranking, the scores for the paragraphs it contains and the
article that contains it are reduced. The inspiration for this
strategy comes partially from recent work on structured
document retrieval, where terms appearing in different fields,
such as the title and body, are given different weights [20].
Extending that approach, the re-ranking algorithm varies
weights dynamically as elements are processed.
The remainder of the paper is organized as follows: After
a discussion of background work and evaluation
methodology, a baseline retrieval method is presented in section 4.
This baseline method represents a reasonable adaptation of
standard IR technology to XML. Section 5 then outlines a
strategy for controlling overlap, using the baseline method as
a starting point. A re-ranking algorithm implementing this
strategy is presented in section 6 and evaluated in section 7.
Section 8 discusses an extended version of the algorithm.
2. BACKGROUND
This section provides a general overview of XML
information retrieval and discusses related work, with an emphasis
on the fundamental problems mentioned in the introduction.
Much research in the area of XML retrieval views it from a
traditional database perspective, being concerned with such
problems as the implementation of structured query
languages [5] and the processing of joins [1]. Here, we take
a content-oriented IR perspective, focusing on XML
documents that primarily contain natural language data and
queries that are primarily expressed in natural language.
We assume that these queries indicate only the nature of
desired content, not its structure, and that the role of the
IR system is to determine which elements best satisfy the
underlying information need. Other IR research has
considered mixed queries, in which both content and structural
requirements are specified [2,6,14,17,23].
2.1 Term and Document Statistics
In traditional information retrieval applications the
standard unit of retrieval is taken to be the document.
Depending on the application, this term might be interpreted
to encompass many different objects, including web pages,
newspaper articles and email messages.
When applying standard relevance ranking techniques in
the context of XML IR, a natural approach is to treat each
element as a separate document, with term statistics
available for each [16]. In addition, most ranking techniques
require global statistics (e.g. inverse document frequency)
computed over the collection as a whole. If we consider this
collection to include all elements that might be returned by
the system, a specific occurrence of a term may appear in
several different documents, perhaps in elements
representing a paragraph, a subsection, a section and an article.
It is not appropriate to compute inverse document frequency
under the assumption that the term is contained in all of
these elements, since the number of elements that contain a
term depends entirely on the structural arrangement of the
documents [13,23].
2.2 Retrievable Elements
While an XML IR system might potentially retrieve any
element, many elements may not be appropriate as retrieval
results. This is usually the case when elements contain very
little text [10]. For example, a section title containing only
the query terms may receive a high score from a ranking
algorithm, but alone it would be of limited value to a user, who
might prefer the actual section itself. Other elements may
reflect the document"s physical, rather than logical,
structure, which may have little or no meaning to a user. An
effective XML IR system must return only those elements
that have sufficient content to be usable and are able to
stand alone as independent objects [15,18]. Standard
document components such as paragraphs, sections, subsections,
and abstracts usually meet these requirements; titles,
italicized phrases, and individual metadata fields often do not.
2.3 Evaluation Methodology
Over the past three years, the INitiative for the
Evaluation of XML Retrieval (INEX) has encouraged research into
XML information retrieval technology [7,8]. INEX is an
experimental conference series, similar to TREC, with groups
from different institutions completing one or more
experimental tasks using their own tools and systems, and
comparing their results at the conference itself. Over 50 groups
participated in INEX 2004, and the conference has become
as influential in the area of XML IR as TREC is in other IR
areas. The research described in this paper, as well as much
of the related work it cites, depends on the test collections
developed by INEX.
Overlap causes considerable problems with retrieval
evaluation, and the INEX organizers and participants have
wrestled with these problems since the beginning. While
substantial progress has been made, these problems are still not
completely solved. Kazai et al. [11] provide a detailed
exposition of the overlap problem in the context of INEX
retrieval evaluation and discuss both current and proposed
evaluation metrics. Many of these metrics are applied to
evaluate the experiments reported in this paper, and they
are briefly outlined in the next section.
3. INEX 2004
Space limitations prevent the inclusion of more than a
brief summary of INEX 2004 tasks and evaluation
methodology. For detailed information, the proceedings of the
conference itself should be consulted [8].
3.1 Tasks
For the main experimental tasks, INEX 2004 participants
were provided with a collection of 12,107 articles taken from
the IEEE Computer Society's magazines and journals
between 1995 and 2002. Each document is encoded in XML
using a common DTD, with the document of figures 1 and 2
providing one example.
At INEX 2004, the two main experimental tasks were both
adhoc retrieval tasks, investigating the performance of
systems searching a static collection using previously unseen
topics. The two tasks differed in the types of topics they
used. For one task, the content-only or CO task, the
topics consist of short natural language statements with no
direct reference to the structure of the documents in the
collection. For this task, the IR system is required to select the
elements to be returned. For the other task, the
contentand-structure or CAS task, the topics are written in an
XML query language [22] and contain explicit references to
document structure, which the IR system must attempt to
satisfy. Since the work described in this paper is directed
at the content-only task, where the IR system receives no
guidance regarding the elements to return, the CAS task is
ignored in the remainder of our description.
In 2004, 40 new CO topics were selected by the conference
organizers from contributions provided by the conference
participants. Each topic includes a short keyword query,
which is executed over the collection by each participating
group on their own XML IR system. Each group could
submit up to three experimental runs consisting of the top
m = 1500 elements for each topic.
3.2 Relevance Assessment
Since XML IR is concerned with locating those elements
that provide complete coverage of a topic while containing as
little extraneous information as possible, simple relevant
vs. not relevant judgments are not sufficient. Instead, the
INEX organizers adopted two dimensions for relevance
assessment: The exhaustivity dimension reflects the degree to
which an element covers the topic, and the specificity
dimension reflects the degree to which an element is focused on the
topic. A four-point scale is used in both dimensions. Thus,
a (3,3) element is highly exhaustive and highly specific, a
(1,3) element is marginally exhaustive and highly specific,
and a (0,0) element is not relevant. Additional information
on the assessment methodology may be found in Piwowarski
and Lalmas [19], who provide a detailed rationale.
3.3 Evaluation Metrics
The principle evaluation metric used at INEX 2004 is a
version of mean average precision (MAP), adjusted by
various quantization functions to give different weights to
different elements, depending on their exhaustivity and specificity
values. One variant, the strict quantization function gives a
weight of 1 to (3,3) elements and a weight of 0 to all others.
This variant is essentially the familiar MAP value, with (3,3)
elements treated as relevant and all other elements treated
as not relevant. Other quantization functions are designed
to give partial credit to elements which are near misses,
due to a lack or exhaustivity and/or specificity. Both the
generalized quantization function and the specificity-oriented
generalization (sog) function credit elements according to
their degree of relevance [11], with the second function
placing greater emphasis on specificity. This paper reports
results of this metric using all three of these quantization
functions. Since this metric was first introduced at INEX 2002,
it is generally referred to as the inex-2002 metric.
The inex-2002 metric does not penalize overlap. In
particular, both the generalized and sog quantization functions
give partial credit to a near miss even when a (3,3)
element overlapping it is reported at a higher rank. To address
this problem, Kazai et al. [11] propose an XML cumulated
gain metric, which compares the cumulated gain [9] of a
ranked list to an ideal gain vector. This ideal gain vector
is constructed from the relevance judgments by
eliminating overlap and retaining only the best element along a given
path. Thus, the XCG metric rewards retrieval runs that
avoid overlap. While XCG was not used officially at INEX
2004, a version of it is likely to be used in the future.
At INEX 2003, yet another metric was introduced to
ameliorate the perceived limitations of the inex-2002 metric.
This inex-2003 metric extends the definitions of precision
and recall to consider both the size of reported components
and the overlap between them. Two versions were created,
one that considered only component size and another that
considered both size and overlap. While the inex-2003
metric exhibits undesirable anomalies [11], and was not used in
2004, values are reported in the evaluation section to provide
an additional instrument for investigating overlap.
4. BASELINE RETRIEVAL METHOD
This section provides an overview of the baseline XML
information retrieval method currently used in the MultiText
IR system, developed by the Information Retrieval Group at
the University of Waterloo [3]. This retrieval method results
from the adaptation and tuning of the Okapi BM25
measure [21] to the XML information retrieval task. The
MultiText system performed respectably at INEX 2004, placing
in the top ten under all of the quantization functions, and
placing first when the quantization function emphasized
exhaustivity.
To support retrieval from XML and other structured
document types, the system provides generalized queries of the
form:
rank X by Y
where X is a sub-query specifying a set of document elements
to be ranked and Y is a vector of sub-queries specifying
individual retrieval terms.
For our INEX 2004 runs, the sub-query X specified a list
of retrievable elements as those with tag names as follows:
abs app article bb bdy bm fig fm ip1
li p sec ss1 ss2 vt
This list includes bibliographic entries (bb) and figure
captions (fig) as well as paragraphs, sections and subsections.
Prior to INEX 2004, the INEX collection and the INEX 2003
relevance judgments were manually analyzed to select these
tag names. Tag names were selected on the basis of their
frequency in the collection, the average size of their
associated elements, and the relative number of positive relevance
judgments they received. Automating this selection process
is planned as future work.
For INEX 2004, the term vector Y was derived from the
topic by splitting phrases into individual words, eliminating
stopwords and negative terms (those starting with -), and
applying a stemmer. For example, keyword field of topic
166
+"tree edit distance" + XML -image
became the four-term query
"$tree" "$edit" "$distance" "$xml"
where the $ operator within a quoted string stems the
term that follows it.
Our implementation of Okapi BM25 is derived from the
formula of Robertson et al. [21] by setting parameters k2 = 0
and k3 = ∞. Given a term set Q, an element x is assigned
the score
score(x) = Σ_{t∈Q} w^(1) · q_t · (k1 + 1)·x_t / (K + x_t)        (1)
where
w^(1) = log( (D − Dt + 0.5) / (Dt + 0.5) )
D = number of documents in the corpus
Dt = number of documents containing t
qt = frequency that t occurs in the topic
xt = frequency that t occurs in x
K = k1((1 − b) + b · lx/lavg)
lx = length of x
lavg = average document length
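A direct transcription of equation 1 into Python might look as follows; this is a sketch with our own dictionary-based interfaces, assuming the document-level statistics (over articles) are precomputed, and using the tuned parameter values reported later in this section as defaults.

import math

def bm25_element_score(query_tf, element_tf, element_len,
                       doc_freq, num_docs, avg_doc_len, k1=10.0, b=0.8):
    """Okapi BM25 score of one element (equation 1), with document-level
    statistics (doc_freq, num_docs, avg_doc_len) computed over articles."""
    K = k1 * ((1.0 - b) + b * element_len / avg_doc_len)
    score = 0.0
    for term, q_t in query_tf.items():
        x_t = element_tf.get(term, 0)
        if x_t == 0:
            continue
        D_t = doc_freq.get(term, 0)
        w1 = math.log((num_docs - D_t + 0.5) / (D_t + 0.5))
        score += w1 * q_t * (k1 + 1.0) * x_t / (K + x_t)
    return score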
Figure 3: Impact of k1 on inex-2002 mean average precision with b = 0.75 (INEX 2003 CO topics). Mean average precision is plotted against k1 (0 to 16) for the strict, generalized and sog quantizations.
Prior to INEX 2004, the INEX 2003 topics and judgments
were used to tune the b and k1 parameters, and the impact
of this tuning is discussed later in this section.
For the purposes of computing document-level statistics
(D, Dt and lavg) a document is defined to be an article.
These statistics are used for ranking all element types.
Following the suggestion of Kamps et al. [10], the retrieval
results are filtered to eliminate very short elements, those less
than 25 words in length.
The use of article statistics for all element types might
be questioned. This approach may be justified by
viewing the collection as a set of articles to be searched using
standard document-oriented techniques, where only articles
may be returned. The score computed for an element is
essentially the score it would receive if it were added to the
collection as a new document, ignoring the minor
adjustments needed to the document-level statistics. Nonetheless,
we plan to examine this issue again in the future.
In our experience, the performance of BM25 typically
benefits from tuning the b and k1 parameters to the
collection, whenever training queries are available for this
purpose. Prior to INEX 2004, we trained the MultiText system
using the INEX 2003 queries. As a starting point we used
the values b = 0.75 and k1 = 1.2, which perform well on
TREC adhoc collections and are used as default values in
our system. The results were surprising. Figure 3 shows the
result of varying k1 with b = 0.75 on the MAP values under
three quantization functions. In our experience, optimal
values for k1 are typically in the range 0.0 to 2.0. In this case,
large values are required for good performance. Between
k1 = 1.0 and k1 = 6.0 MAP increases by over 15% under
the strict quantization. Similar improvements are seen
under the generalized and sog quantizations. In contrast, our
default value of b = 0.75 works well under all quantization
functions (figure 4). After tuning over a wide range of
values under several quantization functions, we selected values
of k1 = 10.0 and b = 0.80 for our INEX 2004 experiments,
and these values are used for the experiments reported in
section 7.
[Figure 4: Impact of b on inex-2002 mean average precision with k1 = 10 (INEX 2003 CO topics). The plot shows MAP (y-axis, 0.07-0.15) against b (x-axis, 0.0-1.0) under the strict, generalized and sog quantizations.]
5. CONTROLLING OVERLAP
Starting with an element ranking generated by the
baseline method described in the previous section, elements are
re-ranked to control overlap by iteratively adjusting the scores
of those elements containing or contained in higher ranking
elements. At a conceptual level, re-ranking proceeds as
follows:
1. Report the highest ranking element.
2. Adjust the scores of the unreported elements.
3. Repeat steps 1 and 2 until m elements are reported.
One approach to adjusting the scores of unreported elements
in step 2 might be based on the Okapi BM25 scores of the
involved elements. For example, assume a paragraph with
score p is reported in step 1. In step 2, the section
containing the paragraph might then have its score s lowered
by an amount α · p to reflect the reduced contribution the
paragraph should make to the section's score.
In a related context, Robertson et al. [20] argue strongly
against the linear combination of Okapi scores in this
fashion. That work considers the problem of assigning different
weights to different document fields, such as the title and
body associated with Web pages. A common approach to
this problem scores the title and body separately and
generates a final score as a linear combination of the two.
Robertson et al. discuss the theoretical flaws in this approach and
demonstrate experimentally that it can actually harm
retrieval effectiveness. Instead, they apply the weights at the
term frequency level, with an occurrence of a query term
t in the title making a greater contribution to the score
than an occurrence in the body. In equation 1, xt becomes
α0 · yt + α1 · zt, where yt is the number of times t occurs in
the title and zt is the number of times t occurs in the body.
Translating this approach to our context, the
contribution of terms appearing in elements is dynamically reduced
as they are reported. The next section presents and
analyzes a simple re-ranking algorithm that follows this strategy.
The algorithm is evaluated experimentally in section 7. One
limitation of the algorithm is that the contribution of terms
appearing in reported elements is reduced by the same
factor, regardless of the number of reported elements in which
they appear. In section 8, the algorithm is extended to apply
increasing weights, lowering the score, when a term appears
in more than one reported element.
6. RE-RANKING ALGORITHM
The re-ranking algorithm operates over XML trees, such
as the one appearing in figure 2. Input to the algorithm is
a list of n elements ranked according to their initial BM25
scores. During the initial ranking, the XML tree is
dynamically reconstructed to include only those nodes with
nonzero BM25 scores, so n may be considerably less than |N|.
Output from the algorithm is a list of the top m elements,
ranked according to their adjusted scores.
An element is represented by the node x ∈ N at its root.
Associated with this node are fields storing the length of
the element, term frequencies, and other information required
by the re-ranking algorithm, as follows:
x.f - term frequency vector
x.g - term frequency adjustments
x.l - element length
x.score - current Okapi BM25 score
x.reported - boolean flag, initially false
x.children - set of child nodes
x.parent - parent node, if one exists
These fields are populated during the initial ranking process,
and updated as the algorithm progresses. The vector x.f
contains term frequency information corresponding to each
term in the query. The vector x.g is initially zero and is
updated by the algorithm as elements are reported.
The score field contains the current BM25 score for the
element, which will change as the values in x.g change. The
score is computed using equation 1, with the xt value for
each term determined by a combination of the values in x.f
and x.g. Given a term t ∈ Q, let ft be the component of
x.f corresponding to t, and let gt be the component of x.g
corresponding to t, then:
xt = ft − α · gt (2)
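As a small illustration (assuming a node object that exposes the x.f and x.g vectors as term-to-count mappings, which is a simplification of the fields listed above), the adjusted term frequency of equation 2 can be computed as follows; the resulting value is what would be plugged into equation 1 when the score is recomputed.

```python
def adjusted_term_frequency(node, term, alpha):
    """xt of equation 2: the raw frequency ft reduced by alpha times the
    already-reported occurrences gt recorded in the node's adjustment vector."""
    ft = node.f.get(term, 0)
    gt = node.g.get(term, 0.0)
    return ft - alpha * gt
```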
For processing by the re-ranking algorithm, nodes are
stored in priority queues, ordered by decreasing score. Each
priority queue PQ supports three operations:
PQ.front() - returns the node with greatest score
PQ.add (x) - adds node x to the queue
PQ.remove(x) - removes node x from the queue
When implemented using standard data structures, the front
operation requires O(1) time, and the other operations
require O(log n) time, where n is the size of the queue.
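One way to realize such a priority queue in practice is the standard lazy-deletion pattern over a binary heap, sketched below in Python; this is only one possible realization and is not taken from the paper.

```python
import heapq
import itertools

class MaxPriorityQueue:
    """Max-priority queue supporting front/add/remove via lazy deletion:
    removed entries are marked invalid and discarded when they reach the front."""

    _REMOVED = object()

    def __init__(self):
        self._heap = []          # entries are lists: [-score, tie_breaker, node]
        self._entry = {}         # node id -> entry
        self._tie = itertools.count()

    def add(self, node, score):
        entry = [-score, next(self._tie), node]
        self._entry[id(node)] = entry
        heapq.heappush(self._heap, entry)

    def remove(self, node):
        entry = self._entry.pop(id(node))
        entry[2] = self._REMOVED         # invalidate; physically dropped later

    def front(self):
        while self._heap and self._heap[0][2] is self._REMOVED:
            heapq.heappop(self._heap)
        return self._heap[0][2] if self._heap else None
```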
The core of the re-ranking algorithm is presented in
figure 5. The algorithm takes as input the priority queue S
containing the initial ranking, and produces the top-m
reranked nodes in the priority queue F. After initializing F to
be empty on line 1, the algorithm loops m times over lines
2-15, transferring at least one node from S to F during each
iteration. At the start of each iteration, the unreported node
at the front of S has the greatest adjusted score, and it is
removed and added to F. The algorithm then traverses the
1 F ← ∅
2 for i ← 1 to m do
3 x ← S.front()
4 S.remove(x)
5 x.reported ← true
6 F.add(x)
7
8 foreach y ∈ x.children do
9 Down (y)
10 end do
11
12 if x is not a root node then
13 Up (x, x.parent)
14 end if
15 end do
Figure 5: Re-Ranking Algorithm - As input, the
algorithm takes a priority queue S, containing XML
nodes ranked by their initial scores, and returns
its results in priority queue F, ranked by adjusted
scores.
1 Up(x, y) ≡
2 S.remove(y)
3 y.g ← y.g + x.f − x.g
4 recompute y.score
5 S.add(y)
6 if y is not a root node then
7 Up (x, y.parent)
8 end if
9
10 Down(x) ≡
11 if not x.reported then
12 S.remove(x)
14 x.g ← x.f
15 recompute x.score
16 if x.score > 0 then
17 F.add(x)
18 end if
19 x.reported ← true
20 foreach y ∈ x.children do
21 Down (y)
22 end do
23 end if
Figure 6: Tree traversal routines called by the
reranking algorithm.
[Figure 7: Impact of α on XCG and inex-2002 MAP (INEX 2004 CO topics; assessment set I). The plot shows MAP under the strict, generalized and sog quantizations (left y-axis, 0.0-0.16) and XCG under sog2 (right y-axis, 0.25-0.36) against alpha (x-axis, 0.0-1.0).]
node"s ancestors (lines 8-10) and descendants (lines 12-14)
adjusting the scores of these nodes.
The tree traversal routines, Up and Down are given in
figure 6. The Up routine removes each ancestor node from S,
adjusts its term frequency values, recomputes its score, and
adds it back into S. The adjustment of the term frequency
values (line 3) adds to y.g only the previously unreported
term occurrences in x. Re-computation of the score on line 4
uses equations 1 and 2. The Down routine performs a similar
operation on each descendant. However, since the contents
of each descendant are entirely contained in a reported
element, its final score may be computed, and it is removed
from S and added to F.
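Putting the pieces together, the following Python sketch mirrors the control flow of figures 5 and 6. It is a simplified illustration: the priority queue S is replaced by a plain dictionary scanned for its maximum, and the score recomputation (equations 1 and 2) is passed in as a rescore callable; both are assumptions rather than details taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Node:
    f: Dict[str, int]                                   # x.f, term frequencies
    g: Dict[str, float] = field(default_factory=dict)   # x.g, adjustments
    length: int = 0                                      # x.l, element length
    score: float = 0.0
    reported: bool = False
    children: List["Node"] = field(default_factory=list)
    parent: Optional["Node"] = None

def rerank(nodes, m, rescore):
    """Return reported nodes, ranked by adjusted score (figures 5 and 6).
    rescore(node) is assumed to recompute node.score from node.f and node.g."""
    S = {id(x): x for x in nodes}
    F = []

    def up(x, y):                        # Up: adjust ancestors of the reported x
        for t in set(x.f) | set(x.g):
            y.g[t] = y.g.get(t, 0.0) + x.f.get(t, 0) - x.g.get(t, 0.0)
        rescore(y)
        if y.parent is not None:
            up(x, y.parent)

    def down(x):                         # Down: report unreported descendants
        if not x.reported:
            S.pop(id(x), None)
            x.g = dict(x.f)              # all of x's term occurrences now reported
            rescore(x)
            if x.score > 0:
                F.append(x)
            x.reported = True
            for y in x.children:
                down(y)

    for _ in range(m):
        if not S:
            break
        x = max(S.values(), key=lambda n: n.score)   # S.front()
        del S[id(x)]
        x.reported = True
        F.append(x)
        for y in x.children:
            down(y)
        if x.parent is not None:
            up(x, x.parent)
    return sorted(F, key=lambda n: n.score, reverse=True)
```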
In order to determine the time complexity of the
algorithm, first note that a node may be an argument to Down
at most once. Thereafter, the reported flag of its parent is
true. During each call to Down a node may be moved from
S to F, requiring O(log n) time. Thus, the total time for all
calls to Down is O(n log n), and we may temporarily ignore
lines 8-10 of figure 5 when considering the time complexity
of the loop over lines 2-15. During each iteration of this loop,
a node and each of its ancestors are removed from a priority
queue and then added back into a priority queue. Since a
node may have at most h ancestors, where h is the maximum
height of any tree in the collection, each of the m iterations
requires O(h log n) time. Combining these observations
produces an overall time complexity of O((n + mh) log n).
In practice, re-ranking an INEX result set requires less
than 200ms on a three-year-old desktop PC.
7. EVALUATION
None of the metrics described in section 3.3 is a close fit
with the view of overlap advocated by this paper.
Nonetheless, when taken together they provide insight into the
behaviour of the re-ranking algorithm. The INEX evaluation
packages (inex_eval and inex_eval_ng) were used to
compute values for the inex-2002 and inex-2003 metrics. Values
for the XCG metrics were computed using software supplied
by its inventors [11].
Figure 7 plots the three variants of inex-2002 MAP metric
together with the XCG metric. Values for these metrics
[Figure 8: Impact of α on inex-2003 MAP (INEX 2004 CO topics; assessment set I). The plot shows MAP (y-axis, 0.0-0.20) against alpha (x-axis, 0.0-1.0) under the strict and generalized quantizations, each with and without overlap considered.]
are plotted for values of α between 0.0 and 1.0. Recalling
that the XCG metric is designed to penalize overlap, while
the inex-2002 metric ignores overlap, the conflict between
the metrics is obvious. The MAP values at one extreme
(α = 0.0) and the XCG value at the other extreme (α =
1.0) represent retrieval performance comparable to the best
systems at INEX 2004 [8,12].
Figure 8 plots values of the inex-2003 MAP metric for two
quantizations, with and without consideration of overlap.
Once again, conflict is apparent, with the influence of α
substantially lessened when overlap is considered.
8. EXTENDED ALGORITHM
One limitation of the re-ranking algorithm is that a single
weight α is used to adjust the scores of both the ancestors
and descendants of reported elements. An obvious extension
is to use different weights in these two cases. Furthermore,
the same weight is used regardless of the number of times
an element is contained in a reported element. For example,
a paragraph may form part of a reported section and then
form part of a reported article. Since the user may now
have seen this paragraph twice, its score should be further
lowered by increasing the value of the weight.
Motivated by these observations, the re-ranking algorithm
may be extended with a series of weights
1 = β0 ≥ β1 ≥ β2 ≥ ... ≥ βM ≥ 0,
where βj is the weight applied to a node that has been a
descendant of a reported node j times. Note that an upper
bound on M is h, the maximum height of any XML tree
in the collection. However, in practice M is likely to be
relatively small (perhaps 3 or 4).
Figure 9 presents replacements for the Up and Down
routines of figure 6, incorporating this series of weights. One
extra field is required in each node, as follows:
x.j - down count
The value of x.j is initially set to zero in all nodes and is
incremented each time Down is called with x as its argument.
When computing the score of a node, the value of x.j selects
1 Up(x, y) ≡
2 if not y.reported then
3 S.remove(y)
4 y.g ← y.g + x.f − x.g
5 recompute y.score
6 S.add(y)
8 if y is not a root node then
9 Up (x, y.parent)
10 end if
11 end if
12
13 Down(x) ≡
14 if x.j < M then
15 x.j ← x.j + 1
16 if not x.reported then
17 S.remove(x)
18 recompute x.score
19 S.add(x)
20 end if
21 foreach y ∈ x.children do
22 Down (y)
23 end do
24 end if
Figure 9: Extended tree traversal routines.
the weight to be applied to the node by adjusting the value
of xt in equation 1, as follows:
xt = βx.j · (ft − α · gt) (3)
where ft and gt are the components of x.f and x.g
corresponding to term t.
A few additional changes are required to extend Up and
Down. The Up routine returns immediately (line 2) if its
argument has already been reported, since term frequencies
have already been adjusted in its ancestors. The Down
routine does not report its argument, but instead recomputes
its score and adds it back into S.
A node cannot be an argument to Down more than M +1
times, which in turn implies an overall time complexity of
O((nM + mh) log n). Since M ≤ h and m ≤ n, the time
complexity is also O(nh log n).
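A sketch of how the extension changes the scoring and the Down routine is shown below (again an illustration, assuming the Node of the earlier sketch gains a down-count field j initialized to zero, and that rescore applies equation 3).

```python
def extended_xt(node, term, alpha, betas):
    """Adjusted frequency of equation 3: xt = beta_{x.j} * (ft - alpha * gt).
    betas is the non-increasing series 1 = beta_0 >= ... >= beta_M >= 0."""
    ft = node.f.get(term, 0)
    gt = node.g.get(term, 0.0)
    return betas[min(node.j, len(betas) - 1)] * (ft - alpha * gt)

def extended_down(x, rescore, M):
    """Extended Down (figure 9): increment the down count and re-score the
    node instead of reporting it; stop once the count reaches M."""
    if x.j < M:
        x.j += 1
        if not x.reported:
            rescore(x)   # in a heap-based S the node is removed and re-added here
        for y in x.children:
            extended_down(y, rescore, M)
```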
9. CONCLUDING DISCUSSION
When generating retrieval results over an XML collection,
some overlap in the results should be tolerated, and may be
beneficial. For example, when a highly exhaustive and fairly
specific (3,2) element contains a much smaller (2,3) element,
both should be reported to the user, and retrieval algorithms
and evaluation metrics should respect this relationship. The
algorithm presented in this paper controls overlap by
weighting the terms occurring in reported elements to reflect their
reduced importance.
Other approaches may also help to control overlap. For
example, when XML retrieval results are presented to users
it may be desirable to cluster structurally related elements
together, visually illustrating the relationships between them.
While this style of user interface may help a user cope with
overlap, the strategy presented in this paper continues to be
applicable, by determining the best elements to include in
each cluster.
At Waterloo, we continue to develop and test our ideas
for INEX 2005. In particular, we are investigating methods
for learning the α and βj weights. We are also re-evaluating
our approach to document statistics and examining
appropriate adjustments to the k1 parameter as term weights
change [20].
10. ACKNOWLEDGMENTS
Thanks to Gabriella Kazai and Arjen de Vries for
providing an early version of their software for computing the XCG
metric, and thanks to Phil Tilker and Stefan Büttcher for
their help with the experimental evaluation. In part,
funding for this project was provided by IBM Canada through
the National Institute for Software Research.
11. REFERENCES
[1] N. Bruno, N. Koudas, and D. Srivastava. Holistic twig
joins: Optimal XML pattern matching. In Proceedings
of the 2002 ACM SIGMOD International Conference
on the Management of Data, pages 310-321, Madison,
Wisconsin, June 2002.
[2] D. Carmel, Y. S. Maarek, M. Mandelbrod, Y. Mass,
and A. Soffer. Searching XML documents via XML
fragments. In Proceedings of the 26th Annual
International ACM SIGIR Conference on Research
and Development in Information Retrieval, pages
151-158, Toronto, Canada, 2003.
[3] C. L. A. Clarke and P. L. Tilker. MultiText
experiments for INEX 2004. In INEX 2004 Workshop
Proceedings, 2004. Published in LNCS 3493 [8].
[4] A. P. de Vries, G. Kazai, and M. Lalmas. Tolerance to
irrelevance: A user-effort oriented evaluation of
retrieval systems without predefined retrieval unit. In
RIAO 2004 Conference Proceedings, pages 463-473,
Avignon, France, April 2004.
[5] D. DeHaan, D. Toman, M. P. Consens, and M. T.
Özsu. A comprehensive XQuery to SQL translation
using dynamic interval encoding. In Proceedings of the
2003 ACM SIGMOD International Conference on the
Management of Data, San Diego, June 2003.
[6] N. Fuhr and K. Großjohann. XIRQL: A query
language for information retrieval in XML documents.
In Proceedings of the 24th Annual International ACM
SIGIR Conference on Research and Development in
Information Retrieval, pages 172-180, New Orleans,
September 2001.
[7] N. Fuhr, M. Lalmas, and S. Malik, editors. Initiative
for the Evaluation of XML Retrieval. Proceedings of
the Second Workshop (INEX 2003), Dagstuhl,
Germany, December 2003.
[8] N. Fuhr, M. Lalmas, S. Malik, and Zoltán Szlávik,
editors. Initiative for the Evaluation of XML
Retrieval. Proceedings of the Third Workshop (INEX
2004), Dagstuhl, Germany, December 2004. Published
as Advances in XML Information Retrieval, Lecture
Notes in Computer Science, volume 3493, Springer,
2005.
[9] K. Järvelin and J. Kekäläinen. Cumulated gain-based
evaluation of IR techniques. ACM Transactions on
Information Systems, 20(4):422-446, 2002.
[10] J. Kamps, M. de Rijke, and B. Sigurbjörnsson. Length
normalization in XML retrieval. In Proceedings of the
27th Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval,
pages 80-87, Sheffield, UK, July 2004.
[11] G. Kazai, M. Lalmas, and A. P. de Vries. The overlap
problem in content-oriented XML retrieval evaluation.
In Proceedings of the 27th Annual International ACM
SIGIR Conference on Research and Development in
Information Retrieval, pages 72-79, Sheffield, UK,
July 2004.
[12] G. Kazai, M. Lalmas, and A. P. de Vries. Reliability
tests for the XCG and inex-2002 metrics. In INEX
2004 Workshop Proceedings, 2004. Published in LNCS
3493 [8].
[13] J. Kekäläinen, M. Junkkari, P. Arvola, and T. Aalto.
TRIX 2004 - Struggling with the overlap. In INEX
2004 Workshop Proceedings, 2004. Published in LNCS
3493 [8].
[14] S. Liu, Q. Zou, and W. W. Chu. Configurable
indexing and ranking for XML information retrieval.
In Proceedings of the 27th Annual International ACM
SIGIR Conference on Research and Development in
Information Retrieval, pages 88-95, Sheffield, UK,
July 2004.
[15] Y. Mass and M. Mandelbrod. Retrieving the most
relevant XML components. In INEX 2003 Workshop
Proceedings, Dagstuhl, Germany, December 2003.
[16] Y. Mass and M. Mandelbrod. Component ranking and
automatic query refinement for XML retrieval. In
INEX 2004 Workshop Proceedings, 2004. Published in
LNCS 3493 [8].
[17] P. Ogilvie and J. Callan. Hierarchical language models
for XML component retrieval. In INEX 2004
Workshop Proceedings, 2004. Published in LNCS
3493 [8].
[18] J. Pehcevski, J. A. Thom, and A. Vercoustre. Hybrid
XML retrieval re-visited. In INEX 2004 Workshop
Proceedings, 2004. Published in LNCS 3493 [8].
[19] B. Piwowarski and M. Lalmas. Providing consistent
and exhaustive relevance assessments for XML
retrieval evaluation. In Proceedings of the 13th ACM
Conference on Information and Knowledge
Management, pages 361-370, Washington, DC,
November 2004.
[20] S. Robertson, H. Zaragoza, and M. Taylor. Simple
BM25 extension to multiple weighted fields. In
Proceedings of the 13th ACM Conference on
Information and Knowledge Management, pages
42-50, Washington, DC, November 2004.
[21] S. E. Robertson, S. Walker, and M. Beaulieu. Okapi at
TREC-7: Automatic ad-hoc, filtering, VLC and
interactive track. In Proceedings of the Seventh Text
REtrieval Conference, Gaithersburg, MD, November
1998.
[22] A. Trotman and B. Sigurbjörnsson. NEXI, now and
next. In INEX 2004 Workshop Proceedings, 2004.
Published in LNCS 3493 [8].
[23] J. Vittaut, B. Piwowarski, and P. Gallinari. An
algebra for structured queries in Bayesian networks. In
INEX 2004 Workshop Proceedings, 2004. Published in
LNCS 3493 [8]. | baseline retrieval;inex;extended tree traversal routine;ideal gain vector;xml ir;xml cumulated gain metric;priority queue;cumulated gain;multitext system;term frequency vector;rank;sog quantization;xml;information retrieval;time complexity;xcg metric reward retrieval;re-ranking algorithm |
train_H-90 | Context-Sensitive Information Retrieval Using Implicit Feedback | A major limitation of most existing retrieval models and systems is that the retrieval decision is made based solely on the query and document collection; information about the actual user and search context is largely ignored. In this paper, we study how to exploit implicit feedback information, including previous queries and clickthrough information, to improve retrieval accuracy in an interactive information retrieval setting. We propose several context-sensitive retrieval algorithms based on statistical language models to combine the preceding queries and clicked document summaries with the current query for better ranking of documents. We use the TREC AP data to create a test collection with search context information, and quantitatively evaluate our models using this test set. Experiment results show that using implicit feedback, especially the clicked document summaries, can improve retrieval performance substantially. | 1. INTRODUCTION
In most existing information retrieval models, the retrieval
problem is treated as involving a single query and a set of documents.
From a single query, however, the retrieval system can obtain only
very limited clues about the user's information need. An optimal
retrieval system thus should try to exploit as much additional context
information as possible to improve retrieval accuracy, whenever it
is available. Indeed, context-sensitive retrieval has been identified
as a major challenge in information retrieval research [2].
There are many kinds of context that we can exploit. Relevance
feedback [14] can be considered as a way for a user to provide
more context of search and is known to be effective for
improving retrieval accuracy. However, relevance feedback requires that
a user explicitly provides feedback information, such as specifying
the category of the information need or marking a subset of
retrieved documents as relevant documents. Since it forces the user
to engage additional activities while the benefits are not always
obvious to the user, a user is often reluctant to provide such feedback
information. Thus the effectiveness of relevance feedback may be
limited in real applications.
For this reason, implicit feedback has attracted much attention
recently [11, 13, 18, 17, 12]. In general, the retrieval results using the
user"s initial query may not be satisfactory; often, the user would
need to revise the query to improve the retrieval/ranking accuracy
[8]. For a complex or difficult information need, the user may need
to modify his/her query and view ranked documents with many
iterations before the information need is completely satisfied. In such
an interactive retrieval scenario, the information naturally available
to the retrieval system is more than just the current user query and
the document collection - in general, all the interaction history can
be available to the retrieval system, including past queries,
information about which documents the user has chosen to view, and even
how a user has read a document (e.g., which part of a document the
user spends a lot of time in reading). We define implicit feedback
broadly as exploiting all such naturally available interaction history
to improve retrieval results.
A major advantage of implicit feedback is that we can improve
the retrieval accuracy without requiring any user effort. For
example, if the current query is java, without knowing any extra
information, it would be impossible to know whether it is intended
to mean the Java programming language or the Java island in
Indonesia. As a result, the retrieved documents will likely have both
kinds of documents - some may be about the programming
language and some may be about the island. However, any particular
user is unlikely to be searching for both types of documents. Such an
ambiguity can be resolved by exploiting history information. For
example, if we know that the previous query from the user is cgi
programming, it would strongly suggest that it is the programming
language that the user is searching for.
Implicit feedback was studied in several previous works. In [11],
Joachims explored how to capture and exploit the clickthrough
information and demonstrated that such implicit feedback
information can indeed improve the search accuracy for a group of
people. In [18], a simulation study of the effectiveness of different
implicit feedback algorithms was conducted, and several retrieval
models designed for exploiting clickthrough information were
proposed and evaluated. In [17], some existing retrieval algorithms are
adapted to improve search results based on the browsing history of
a user. Other related work on using context includes personalized
search [1, 3, 4, 7, 10], query log analysis [5], context factors [12],
and implicit queries [6].
While the previous work has mostly focused on using
clickthrough information, in this paper, we use both clickthrough
information and preceding queries, and focus on developing new
context-sensitive language models for retrieval. Specifically, we
develop models for using implicit feedback information such as
query and clickthrough history of the current search session to
improve retrieval accuracy. We use the KL-divergence retrieval model
[19] as the basis and propose to treat context-sensitive retrieval as
estimating a query language model based on the current query and
any search context information. We propose several statistical
language models to incorporate query and clickthrough history into
the KL-divergence model.
One challenge in studying implicit feedback models is that there
does not exist any suitable test collection for evaluation. We thus
use the TREC AP data to create a test collection with implicit
feedback information, which can be used to quantitatively evaluate
implicit feedback models. To the best of our knowledge, this is the
first test set for implicit feedback. We evaluate the proposed
models using this data set. The experimental results show that using
implicit feedback information, especially the clickthrough data, can
substantially improve retrieval performance without requiring
additional effort from the user.
The remaining sections are organized as follows. In Section 2,
we attempt to define the problem of implicit feedback and introduce
some terms that we will use later. In Section 3, we propose several
implicit feedback models based on statistical language models. In
Section 4, we describe how we create the data set for implicit
feedback experiments. In Section 5, we evaluate different implicit
feedback models on the created data set. Section 6 is our conclusions
and future work.
2. PROBLEM DEFINITION
There are two kinds of context information we can use for
implicit feedback. One is short-term context, which is the immediate
surrounding information which throws light on a user's current
information need in a single session. A session can be considered as a
period consisting of all interactions for the same information need.
The category of a user"s information need (e.g., kids or sports),
previous queries, and recently viewed documents are all examples of
short-term context. Such information is most directly related to the
current information need of the user and thus can be expected to be
most useful for improving the current search. In general, short-term
context is most useful for improving search in the current session,
but may not be so helpful for search activities in a different
session. The other kind of context is long-term context, which refers
to information such as a user's education level and general interest,
accumulated user query history and past user clickthrough
information; such information is generally stable for a long time and is
often accumulated over time. Long-term context can be applicable
to all sessions, but may not be as effective as the short-term context
in improving search accuracy for a particular session. In this paper,
we focus on the short-term context, though some of our methods
can also be used to naturally incorporate some long-term context.
In a single search session, a user may interact with the search
system several times. During interactions, the user would
continuously modify the query. Therefore, for the current query Qk
(except for the first query of a search session), there is a query history
HQ = (Q1, ..., Qk−1) associated with it, which consists of the
preceding queries given by the same user in the current session. Note
that we assume that the session boundaries are known in this paper.
In practice, we need techniques to automatically discover session
boundaries, which have been studied in [9, 16]. Traditionally, the
retrieval system only uses the current query Qk to do retrieval. But
the short-term query history clearly may provide useful clues about
the user"s current information need as seen in the java example
given in the previous section. Indeed, our previous work [15] has
shown that the short-term query history is useful for improving
retrieval accuracy.
In addition to the query history, there may be other short-term
context information available. For example, a user will presumably
click on some documents to view them. We refer to the data
associated with these actions as the clickthrough history. The
clickthrough data may include the title, summary, and perhaps also the
content and location (e.g., the URL) of the clicked document.
Although it is not clear whether a viewed document is actually
relevant to the user's information need, we may safely assume that
the displayed summary/title information about the document is
attractive to the user and thus conveys information about the user's
information need. If we concatenate all the displayed text
information about a document (usually title and summary) together, we
obtain a clicked summary Ci in each round of retrieval. In
general, we may have a history of clicked summaries C1, ..., Ck−1.
We will also exploit such clickthrough history HC = (C1, ..., Ck−1)
to improve our search accuracy for the current query Qk. Previous
work has also shown positive results using similar clickthrough
information [11, 17].
Both query history and clickthrough history are implicit
feedback information, which naturally exists in interactive information
retrieval, thus no additional user effort is needed to collect them. In
this paper, we study how to exploit such information (HQ and HC ),
develop models to incorporate the query history and clickthrough
history into a retrieval ranking function, and quantitatively evaluate
these models.
3. LANGUAGE MODELS FOR
CONTEXTSENSITIVEINFORMATIONRETRIEVAL
Intuitively, the query history HQ and clickthrough history HC
are both useful for improving search accuracy for the current query
Qk. An important research question is how we can exploit such
information effectively. We propose to use statistical language
models to model a user's information need and develop four specific
context-sensitive language models to incorporate context
information into a basic retrieval model.
3.1 Basic retrieval model
We use the Kullback-Leibler (KL) divergence method [19] as
our basic retrieval method. According to this model, the retrieval
task involves computing a query language model θQ for a given
query and a document language model θD for a document and then
computing their KL divergence D(θQ||θD), which serves as the
score of the document.
One advantage of this approach is that we can naturally
incorporate the search context as additional evidence to improve our
estimate of the query language model.
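As a brief illustration (a sketch, not the exact implementation used in the paper), ranking by the negative KL divergence −D(θQ||θD) can be computed as follows, assuming both models are word-to-probability dictionaries and that the document model has already been smoothed.

```python
import math

def kl_score(query_model, doc_model, floor=1e-10):
    """Score a document by -D(theta_Q || theta_D); larger is better."""
    score = 0.0
    for w, p_q in query_model.items():
        if p_q > 0.0:
            score += p_q * math.log(doc_model.get(w, floor) / p_q)
    return score
```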
Formally, let HQ = (Q1, ..., Qk−1) be the query history and
the current query be Qk. Let HC = (C1, ..., Ck−1) be the
clickthrough history. Note that Ci is the concatenation of all clicked
documents" summaries in the i-th round of retrieval since we may
reasonably treat all these summaries equally. Our task is to estimate
a context query model, which we denote by p(w|θk), based on the
current query Qk, as well as the query history HQ and clickthrough
history HC . We now describe several different language models for
exploiting HQ and HC to estimate p(w|θk). We will use c(w, X)
to denote the count of word w in text X, which could be either a
query or a clicked document's summary or any other text. We will
use |X| to denote the length of text X or the total number of words
in X.
3.2 Fixed Coefficient Interpolation (FixInt)
Our first idea is to summarize the query history HQ with a
unigram language model p(w|HQ) and the clickthrough history HC
with another unigram language model p(w|HC ). Then we linearly
interpolate these two history models to obtain the history model
p(w|H). Finally, we interpolate the history model p(w|H) with
the current query model p(w|Qk). These models are defined as
follows.
p(w|Qi) = c(w, Qi) / |Qi|
p(w|HQ) = (1 / (k − 1)) Σ_{i=1}^{k−1} p(w|Qi)
p(w|Ci) = c(w, Ci) / |Ci|
p(w|HC) = (1 / (k − 1)) Σ_{i=1}^{k−1} p(w|Ci)
p(w|H) = β p(w|HC) + (1 − β) p(w|HQ)
p(w|θk) = α p(w|Qk) + (1 − α) p(w|H)
where β ∈ [0, 1] is a parameter to control the weight on each
history model, and where α ∈ [0, 1] is a parameter to control the
weight on the current query and the history information.
If we combine these equations, we see that
p(w|θk) = αp(w|Qk) + (1 − α)[βp(w|HC ) + (1 − β)p(w|HQ)]
That is, the estimated context query model is just a fixed coefficient
interpolation of three models p(w|Qk), p(w|HQ), and p(w|HC ).
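A minimal Python sketch of the FixInt estimator is given below; tokenization and the handling of an empty history are simplifying assumptions, not details from the paper.

```python
from collections import Counter

def unigram(tokens):
    """Maximum-likelihood unigram model p(w|X) = c(w, X) / |X|."""
    counts = Counter(tokens)
    return {w: c / len(tokens) for w, c in counts.items()}

def average(models):
    """Uniformly average a list of unigram models (p(w|HQ) or p(w|HC))."""
    avg = {}
    for m in models:
        for w, p in m.items():
            avg[w] = avg.get(w, 0.0) + p / len(models)
    return avg

def fixint(current_query, past_queries, clicked_summaries, alpha, beta):
    """p(w|theta_k) = alpha p(w|Qk) + (1 - alpha)[beta p(w|HC) + (1 - beta) p(w|HQ)]."""
    p_q = unigram(current_query)
    p_hq = average([unigram(q) for q in past_queries]) if past_queries else {}
    p_hc = average([unigram(c) for c in clicked_summaries]) if clicked_summaries else {}
    theta = {}
    for w in set(p_q) | set(p_hq) | set(p_hc):
        history = beta * p_hc.get(w, 0.0) + (1 - beta) * p_hq.get(w, 0.0)
        theta[w] = alpha * p_q.get(w, 0.0) + (1 - alpha) * history
    return theta
```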
3.3 Bayesian Interpolation (BayesInt)
One possible problem with the FixInt approach is that the
coefficients, especially α, are fixed across all the queries. But intuitively,
if our current query Qk is very long, we should trust the current
query more, whereas if Qk has just one word, it may be beneficial
to put more weight on the history. To capture this intuition, we treat
p(w|HQ) and p(w|HC ) as Dirichlet priors and Qk as the observed
data to estimate a context query model using Bayesian estimator.
The estimated model is given by
p(w|θk) = (c(w, Qk) + µ p(w|HQ) + ν p(w|HC)) / (|Qk| + µ + ν)
        = (|Qk| / (|Qk| + µ + ν)) p(w|Qk) + ((µ + ν) / (|Qk| + µ + ν)) [ (µ / (µ + ν)) p(w|HQ) + (ν / (µ + ν)) p(w|HC) ]
where µ is the prior sample size for p(w|HQ) and ν is the prior
sample size for p(w|HC ). We see that the only difference between
BayesInt and FixInt is the interpolation coefficients are now
adaptive to the query length. Indeed, when viewing BayesInt as FixInt,
we see that α = |Qk| / (|Qk| + µ + ν) and β = ν / (ν + µ); thus, with fixed µ and ν,
we will have a query-dependent α. Later we will show that such an
adaptive α empirically performs better than a fixed α.
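For comparison, a sketch of the BayesInt estimator; the history models p(w|HQ) and p(w|HC) are assumed to be precomputed, for example with the averaging helpers of the previous sketch.

```python
from collections import Counter

def bayesint(current_query, p_hq, p_hc, mu, nu):
    """p(w|theta_k) = (c(w,Qk) + mu p(w|HQ) + nu p(w|HC)) / (|Qk| + mu + nu)."""
    counts = Counter(current_query)
    denom = len(current_query) + mu + nu
    theta = {}
    for w in set(counts) | set(p_hq) | set(p_hc):
        theta[w] = (counts.get(w, 0)
                    + mu * p_hq.get(w, 0.0)
                    + nu * p_hc.get(w, 0.0)) / denom
    return theta
```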
3.4 Online Bayesian Updating (OnlineUp)
Both FixInt and BayesInt summarize the history information by
averaging the unigram language models estimated based on
previous queries or clicked summaries. This means that all previous
queries are treated equally and so are all clicked summaries.
However, as the user interacts with the system and acquires more
knowledge about the information in the collection, presumably, the
reformulated queries will become better and better. Thus assigning
decaying weights to the previous queries so as to trust a recent query
more than an earlier query appears to be reasonable. Interestingly,
if we incrementally update our belief about the user's information
need after seeing each query, we could naturally obtain decaying
weights on the previous queries. Since such an incremental online
updating strategy can be used to exploit any evidence in an
interactive retrieval system, we present it in a more general way.
In a typical retrieval system, the retrieval system responds to
every new query entered by the user by presenting a ranked list
of documents. In order to rank documents, the system must have
some model for the user"s information need. In the KL divergence
retrieval model, this means that the system must compute a query
model whenever a user enters a (new) query. A principled way of
updating the query model is to use Bayesian estimation, which we
discuss below.
3.4.1 Bayesian updating
We first discuss how we apply Bayesian estimation to update a
query model in general. Let p(w|φ) be our current query model
and T be a new piece of text evidence observed (e.g., T can be a
query or a clicked summary). To update the query model based on
T, we use φ to define a Dirichlet prior parameterized as
Dir(µT p(w1|φ), ..., µT p(wN |φ))
where µT is the equivalent sample size of the prior. We use
Dirichlet prior because it is a conjugate prior for multinomial
distributions. With such a conjugate prior, the predictive distribution of the
updated model φ′ (or equivalently, the mean of its posterior distribution) is given by
p(w|φ′) = (c(w, T) + µT p(w|φ)) / (|T| + µT)    (1)
where c(w, T) is the count of w in T and |T| is the length of T.
Parameter µT indicates our confidence in the prior expressed in
terms of an equivalent text sample comparable with T. For
example, µT = 1 indicates that the influence of the prior is equivalent to
adding one extra word to T.
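The update of equation 1 can be written directly as a small helper; this is a sketch under the assumption that models are word-to-probability dictionaries and that the text evidence is a token list.

```python
from collections import Counter

def bayes_update(prior_model, text_tokens, mu_T):
    """p(w|phi') = (c(w, T) + mu_T p(w|phi)) / (|T| + mu_T)  (equation 1)."""
    counts = Counter(text_tokens)
    denom = len(text_tokens) + mu_T
    updated = {}
    for w in set(counts) | set(prior_model):
        updated[w] = (counts.get(w, 0) + mu_T * prior_model.get(w, 0.0)) / denom
    return updated
```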
3.4.2 Sequential query model updating
We now discuss how we can update our query model over time
during an interactive retrieval process using Bayesian estimation.
In general, we assume that the retrieval system maintains a current
query model φi at any moment. As soon as we obtain some implicit
feedback evidence in the form of a piece of text Ti, we will update
the query model.
Initially, before we see any user query, we may already have
some information about the user. For example, we may have some
information about what documents the user has viewed in the past.
We use such information to define a prior on the query model,
which is denoted by φ0. After we observe the first query Q1, we
can update the query model based on the new observed data Q1.
The updated query model φ1 can then be used for ranking
documents in response to Q1. As the user views some documents, the
displayed summary text for such documents C1 (i.e., clicked
summaries) can serve as some new data for us to further update the
query model to obtain φ′1. As we obtain the second query Q2 from
the user, we can update φ′1 to obtain a new model φ2. In general,
we may repeat such an updating process to iteratively update the
query model.
Clearly, we see two types of updating: (1) updating based on a
new query Qi; (2) updating based on a new clicked summary Ci. In
both cases, we can treat the current model as a prior of the context
query model and treat the new observed query or clicked summary
as observed data. Thus we have the following updating equations:
p(w|φi) = (c(w, Qi) + µi p(w|φ′i−1)) / (|Qi| + µi)
p(w|φ′i) = (c(w, Ci) + νi p(w|φi)) / (|Ci| + νi)
where µi is the equivalent sample size for the prior when updating
the model based on a query, while νi is the equivalent sample size
for the prior when updating the model based on a clicked summary.
If we set µi = 0 (or νi = 0) we essentially ignore the prior model,
thus would start a completely new query model based on the query
Qi (or the clicked summary Ci). On the other hand, if we set µi =
+∞ (or νi = +∞) we essentially ignore the observed query (or
the clicked summary) and do not update our model. Thus the model
remains the same as if we do not observe any new text evidence. In
general, the parameters µi and νi may have different values for
different i. For example, at the very beginning, we may have very
sparse query history, thus we could use a smaller µi, but later as the
query history is richer, we can consider using a larger µi. But in
our experiments, unless otherwise stated, we set them to the same
constants, i.e., ∀i, j, µi = µj, νi = νj.
Note that we can take either p(w|φi) or p(w|φ′i) as our context
query model for ranking documents. This suggests that we do not
have to wait until a user enters a new query to initiate a new round
of retrieval; instead, as soon as we collect the clicked summary Ci, we
can update the query model and use p(w|φ′i) to immediately rerank
any documents that a user has not yet seen.
To score documents after seeing query Qk, we use p(w|φk), i.e.,
p(w|θk) = p(w|φk)
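Using the bayes_update helper sketched above, OnlineUp amounts to the following loop; for simplicity the prior φ0 is omitted, so φ1 is just the maximum-likelihood model of Q1 (an assumption for this sketch, not part of the paper).

```python
from collections import Counter

def mle(tokens):
    counts = Counter(tokens)
    return {w: c / len(tokens) for w, c in counts.items()}

def online_up(queries, clicked_summaries, mu, nu):
    """OnlineUp: queries = [Q1, ..., Qk], clicked_summaries = [C1, ..., C_{k-1}].
    Returns p(w|phi_k), the model used to rank documents after Qk."""
    phi = mle(queries[0])                                  # phi_1
    for i, c in enumerate(clicked_summaries):
        phi = bayes_update(phi, c, nu)                     # phi'_{i+1}
        phi = bayes_update(phi, queries[i + 1], mu)        # phi_{i+2}
    return phi
```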
3.5 Batch Bayesian updating (BatchUp)
If we set the equivalent sample size parameters to a fixed
constant, the OnlineUp algorithm would introduce a decaying factor
- repeated interpolation would cause the early data to have a low
weight. This may be appropriate for the query history as it is
reasonable to believe that the user becomes better and better at query
formulation as time goes on, but it is not necessarily appropriate for
the clickthrough information, especially because we use the
displayed summary, rather than the actual content of a clicked
document. One way to avoid applying a decaying interpolation to
the clickthrough data is to do OnlineUp only for the query history
Q = (Q1, ..., Qi−1), but not for the clickthrough data C. We first
buffer all the clickthrough data together and use the whole chunk
of clickthrough data to update the model generated through
running OnlineUp on previous queries. The updating equations are as
follows.
p(w|φi) = (c(w, Qi) + µi p(w|φi−1)) / (|Qi| + µi)
p(w|ψi) = (Σ_{j=1}^{i−1} c(w, Cj) + νi p(w|φi)) / (Σ_{j=1}^{i−1} |Cj| + νi)
where µi has the same interpretation as in OnlineUp, but νi now
indicates to what extent we want to trust the clicked summaries. As
in OnlineUp, we set all µi's and νi's to the same value. And to rank
documents after seeing the current query Qk, we use
p(w|θk) = p(w|ψk)
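BatchUp differs only in how the clicked summaries enter the model: the query history is updated sequentially, and all clicked summaries are folded in as a single batch at the end (again reusing the helpers sketched above; the same simplifying assumptions apply).

```python
def batch_up(queries, clicked_summaries, mu, nu):
    """BatchUp: returns p(w|psi_k), used to rank documents after the current Qk."""
    phi = mle(queries[0])                          # phi_1
    for q in queries[1:]:
        phi = bayes_update(phi, q, mu)             # phi_2, ..., phi_k
    all_clicks = [w for c in clicked_summaries for w in c]
    if not all_clicks:
        return phi
    return bayes_update(phi, all_clicks, nu)       # psi_k
```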
4. DATA COLLECTION
In order to quantitatively evaluate our models, we need a data set
which includes not only a text database and testing topics, but also
query history and clickthrough history for each topic. Since there
is no such data set available to us, we have to create one. There
are two choices. One is to extract topics and any associated query
history and clickthrough history for each topic from the log of a
retrieval system (e.g., search engine). But the problem is that we
have no relevance judgments on such data. The other choice is to
use a TREC data set, which has a text database, topic description
and relevance judgment file. Unfortunately, there are no query
history and clickthrough history data. We decide to augment a TREC
data set by collecting query history and clickthrough history data.
We select TREC AP88, AP89 and AP90 data as our text database,
because AP data has been used in several TREC tasks and has
relatively complete judgments. There are altogether 242918 news
articles and the average document length is 416 words. Most articles
have titles. If not, we select the first sentence of the text as the
title. For the preprocessing, we only do case folding and do not do
stopword removal or stemming.
We select 30 relatively difficult topics from TREC topics 1-150.
These 30 topics have the worst average precision performance among
TREC topics 1-150 according to some baseline experiments using
the KL-Divergence model with Bayesian prior smoothing [20]. The
reason why we select difficult topics is that the user then would
have to have several interactions with the retrieval system in order
to get satisfactory results so that we can expect to collect a
relatively richer query history and clickthrough history data from the
user. In real applications, we may also expect our models to be
most useful for such difficult topics, so our data collection strategy
reflects the real world applications well.
We index the TREC AP data set and set up a search engine and
web interface for TREC AP news articles. We use 3 subjects to do
experiments to collect query history and clickthrough history data.
Each subject is assigned 10 topics and given the topic descriptions
provided by TREC. For each topic, the first query is the title of
the topic given in the original TREC topic description. After the
subject submits the query, the search engine will do retrieval and
return a ranked list of search results to the subject. The subject will
browse the results and maybe click one or more results to browse
the full text of article(s). The subject may also modify the query to
do another search. For each topic, the subject composes at least 4
queries. In our experiment, only the first 4 queries for each topic
are used. The user needs to select the topic number from a
selection menu before submitting the query to the search engine so that
we can easily detect the session boundary, which is not the focus of
our study. We use a relational database to store user interactions,
including the submitted queries and clicked documents. For each
query, we store the query terms and the associated result pages.
And for each clicked document, we store the summary as shown
on the search result page. The summary of the article is query
dependent and is computed online using fixed-length passage retrieval
(KL divergence model with Bayesian prior smoothing).
Among 120 (4 for each of 30 topics) queries which we study in
the experiment, the average query length is 3.71 words. Altogether
there are 91 documents clicked to view. So on average, there are
around 3 clicks per topic. The average length of clicked summary
FixInt BayesInt OnlineUp BatchUp
Query (α = 0.1, β = 1.0) (µ = 0.2, ν = 5.0) (µ = 5.0, ν = 15.0) (µ = 2.0, ν = 15.0)
MAP pr@20docs MAP pr@20docs MAP pr@20docs MAP pr@20docs
q1 0.0095 0.0317 0.0095 0.0317 0.0095 0.0317 0.0095 0.0317
q2 0.0312 0.1150 0.0312 0.1150 0.0312 0.1150 0.0312 0.1150
q2 + HQ + HC 0.0324 0.1117 0.0345 0.1117 0.0215 0.0733 0.0342 0.1100
Improve. 3.8% -2.9% 10.6% -2.9% -31.1% -36.3% 9.6% -4.3%
q3 0.0421 0.1483 0.0421 0.1483 0.0421 0.1483 0.0421 0.1483
q3 + HQ + HC 0.0726 0.1967 0.0816 0.2067 0.0706 0.1783 0.0810 0.2067
Improve 72.4% 32.6% 93.8% 39.4% 67.7% 20.2% 92.4% 39.4%
q4 0.0536 0.1933 0.0536 0.1933 0.0536 0.1933 0.0536 0.1933
q4 + HQ + HC 0.0891 0.2233 0.0955 0.2317 0.0792 0.2067 0.0950 0.2250
Improve 66.2% 15.5% 78.2% 19.9% 47.8% 6.9% 77.2% 16.4%
Table 1: Effect of using query history and clickthrough data for document ranking.
is 34.4 words. Among 91 clicked documents, 29 documents are
judged relevant according to the TREC judgment file. This data set is
publicly available at http://sifaka.cs.uiuc.edu/ir/ucair/QCHistory.zip.
5. EXPERIMENTS
5.1 Experiment design
Our major hypothesis is that using search context (i.e., query
history and clickthrough information) can help improve search
accuracy. In particular, the search context can provide extra information
to help us estimate a better query model than using just the current
query. So most of our experiments involve comparing the retrieval
performance using the current query only (thus ignoring any
context) with that using the current query as well as the search context.
Since we collected four versions of queries for each topic, we
make such comparisons for each version of queries. We use two
performance measures: (1) Mean Average Precision (MAP): This
is the standard non-interpolated average precision and serves as a
good measure of the overall ranking accuracy. (2) Precision at 20
documents (pr@20docs): This measure does not average well, but
it is more meaningful than MAP and reflects the utility for users
who only read the top 20 documents. In all cases, the reported
figure is the average over all of the 30 topics.
We evaluate the four models for exploiting search context (i.e.,
FixInt, BayesInt, OnlineUp, and BatchUp). Each model has
precisely two parameters (α and β for FixInt; µ and ν for others).
Note that µ and ν may need to be interpreted differently for
different methods. We vary these parameters and identify the optimal
performance for each method. We also vary the parameters to study
the sensitivity of our algorithms to the setting of the parameters.
5.2 Result analysis
5.2.1 Overall effect of search context
We compare the optimal performances of four models with those
using the current query only in Table 1. A row labeled with qi is
the baseline performance and a row labeled with qi + HQ + HC
is the performance of using search context. We can make several
observations from this table:
1. Comparing the baseline performances indicates that on average
reformulated queries are better than the previous queries with the
performance of q4 being the best. Users generally formulate better
and better queries.
2. Using search context generally has positive effect, especially
when the context is rich. This can be seen from the fact that the
improvement for q4 and q3 is generally more substantial compared
with q2. Actually, in many cases with q2, using the context may
hurt the performance, probably because the history at that point is
sparse. When the search context is rich, the performance
improvement can be quite substantial. For example, BatchUp achieves
92.4% improvement in the mean average precision over q3 and
77.2% improvement over q4. (The generally low precisions also
make the relative improvement deceptively high, though.)
3. Among the four models using search context, the performances
of FixInt and OnlineUp are clearly worse than those of BayesInt
and BatchUp. Since BayesInt performs better than FixInt and the
main difference between BayesInt and FixInt is that the former uses
an adaptive coefficient for interpolation, the results suggest that
using adaptive coefficient is quite beneficial and a Bayesian style
interpolation makes sense. The main difference between OnlineUp
and BatchUp is that OnlineUp uses decaying coefficients to
combine the multiple clicked summaries, while BatchUp simply
concatenates all clicked summaries. Therefore the fact that BatchUp
is consistently better than OnlineUp indicates that the weights for
combining the clicked summaries indeed should not be decaying.
While OnlineUp is theoretically appealing, its performance is
inferior to BayesInt and BatchUp, likely because of the decaying
coefficient. Overall, BatchUp appears to be the best method when we
vary the parameter settings.
We have two different kinds of search context - query history
and clickthrough data. We now look into the contribution of each
kind of context.
5.2.2 Using query history only
In each of four models, we can turn off the clickthrough
history data by setting parameters appropriately. This allows us to
evaluate the effect of using query history alone. We use the same
parameter setting for query history as in Table 1. The results are
shown in Table 2. Here we see that in general, the benefit of using
query history is very limited with mixed results. This is different
from what is reported in a previous study [15], where using query
history is consistently helpful. Another observation is that the
context runs perform poorly at q2, but generally perform (slightly)
better than the baselines for q3 and q4. This is again likely because
at the beginning the initial query, which is the title in the original
TREC topic description, may not be a good query; indeed, on
average, performances of these first-generation queries are clearly
poorer than those of all other user-formulated queries in the later
generations. Yet another observation is that when using query
history only, the BayesInt model appears to be better than other
models. Since the clickthrough data is ignored, OnlineUp and BatchUp
FixInt BayesInt OnlineUp BatchUp
Query (α = 0.1, β = 0) (µ = 0.2,ν = 0) (µ = 5.0,ν = +∞) (µ = 2.0, ν = +∞)
MAP pr@20docs MAP pr@20docs MAP pr@20docs MAP pr@20docs
q2 0.0312 0.1150 0.0312 0.1150 0.0312 0.1150 0.0312 0.1150
q2 + HQ 0.0097 0.0317 0.0311 0.1200 0.0213 0.0783 0.0287 0.0967
Improve. -68.9% -72.4% -0.3% 4.3% -31.7% -31.9% -8.0% -15.9%
q3 0.0421 0.1483 0.0421 0.1483 0.0421 0.1483 0.0421 0.1483
q3 + HQ 0.0261 0.0917 0.0451 0.1517 0.0444 0.1333 0.0455 0.1450
Improve -38.2% -38.2% 7.1% 2.3% 5.5% -10.1% 8.1% -2.2%
q4 0.0536 0.1933 0.0536 0.1933 0.0536 0.1933 0.0536 0.1933
q4 + HQ 0.0428 0.1467 0.0537 0.1917 0.0550 0.1733 0.0552 0.1917
Improve -20.1% -24.1% 0.2% -0.8% 3.0% -10.3% 3.0% -0.8%
Table 2: Effect of using query history only for document ranking.
µ 0 0.5 1 2 3 4 5 6 7 8 9
q2 + HQ MAP 0.0312 0.0313 0.0308 0.0287 0.0257 0.0231 0.0213 0.0194 0.0183 0.0182 0.0164
q3 + HQ MAP 0.0421 0.0442 0.0441 0.0455 0.0457 0.0458 0.0444 0.0439 0.0430 0.0390 0.0335
q4 + HQ MAP 0.0536 0.0546 0.0547 0.0552 0.0544 0.0548 0.0550 0.0541 0.0534 0.0525 0.0513
Table 3: Average Precision of BatchUp using query history only
are essentially the same algorithm. The displayed results thus
reflect the variation caused by parameter µ. A smaller setting of 2.0
is seen better than a larger value of 5.0. A more complete picture
of the influence of the setting of µ can be seen from Table 3, where
we show the performance figures for a wider range of values of µ.
The value of µ can be interpreted as the number of words the query history
is considered to be worth. A larger value thus puts more weight
on the history and is seen to hurt the performance more when the
history information is not rich. Thus while for q4 the best
performance tends to be achieved for µ ∈ [2, 5], only when µ = 0.5 we
see some small benefit for q2. As we would expect, an excessively
large µ would hurt the performance in general, but q2 is hurt most
and q4 is barely hurt, indicating that as we accumulate more and
more query history information, we can put more and more weight
on the history information. This also suggests that a better strategy
should probably dynamically adjust parameters according to how
much history information we have.
The mixed query history results suggest that the positive effect
of using implicit feedback information may have largely come from
the use of clickthrough history, which is indeed true as we discuss
in the next subsection.
5.2.3 Using clickthrough history only
We now turn off the query history and only use the clicked
summaries plus the current query. The results are shown in Table 4. We
see that the benefit of using clickthrough information is much more
significant than that of using query history. We see an overall
positive effect, often with significant improvement over the baseline. It
is also clear that the richer the context data is, the more
improvement using clicked summaries can achieve. Other than some
occasional degradation of precision at 20 documents, the improvement
is fairly consistent and often quite substantial.
These results show that the clicked summary text is in general
quite useful for inferring a user"s information need. Intuitively,
using the summary text, rather than the actual content of the
document, makes more sense, as it is quite possible that the document
behind a seemingly relevant summary is actually non-relevant.
Of the 91 clicked documents, 29 are relevant. Updating the
query model based on such summaries would bring up the ranks
of these relevant documents, causing performance improvement.
However, such improvement is really not beneficial for the user as
the user has already seen these relevant documents. To see how
much improvement we have achieved on improving the ranks of
the unseen relevant documents, we exclude these 29 relevant
documents from our judgment file and recompute the performance of
BayesInt and the baseline using the new judgment file. The results
are shown in Table 5. Note that the performance of the baseline
method is lower due to the removal of the 29 relevant documents,
which would have been generally ranked high in the results. From
Table 5, we see clearly that using clicked summaries also helps
improve the ranks of unseen relevant documents significantly.
Query BayesInt(µ = 0, ν = 5.0)
MAP pr@20docs
q2 0.0263 0.100
q2 + HC 0.0314 0.100
Improve. 19.4% 0%
q3 0.0331 0.125
q3 + HC 0.0661 0.178
Improve 99.7% 42.4%
q4 0.0442 0.165
q4 + HC 0.0739 0.188
Improve 67.2% 13.9%
Table 5: BayesInt evaluated on unseen relevant documents
One remaining question is whether the clickthrough data is still
helpful if none of the clicked documents is relevant. To answer
this question, we took out the 29 relevant summaries from our
clickthrough history data HC to obtain a smaller set of clicked
summaries HC′, and re-evaluated the performance of the BayesInt
method using HC′ with the same setting of parameters as in
Table 4. The results are shown in Table 6. We see that although the
improvement is not as substantial as in Table 4, the average
precision is improved across all generations of queries. These results
should be interpreted as very encouraging as they are based on only
62 non-relevant clickthroughs. In reality, a user would more likely
click some relevant summaries, which would help bring up more
relevant documents as we have seen in Table 4 and Table 5.
FixInt BayesInt OnlineUp BatchUp
Query (α = 0.1, β = 1) (µ = 0, ν = 5.0) (µk = 5.0, ν = 15, ∀i < k, µi = +∞) (µ = 0, ν = 15)
MAP pr@20docs MAP pr@20docs MAP pr@20docs MAP pr@20docs
q2 0.0312 0.1150 0.0312 0.1150 0.0312 0.1150 0.0312 0.1150
q2 + HC 0.0324 0.1117 0.0338 0.1133 0.0358 0.1300 0.0344 0.1167
Improve. 3.8% -2.9% 8.3% -1.5% 14.7% 13.0% 10.3% 1.5%
q3 0.0421 0.1483 0.0421 0.1483 0.04210 0.1483 0.0420 0.1483
q3 + HC 0.0726 0.1967 0.0766 0.2033 0.0622 0.1767 0.0513 0.1650
Improve 72.4% 32.6% 81.9% 37.1% 47.7% 19.2% 21.9% 11.3%
q4 0.0536 0.1930 0.0536 0.1930 0.0536 0.1930 0.0536 0.1930
q4 + HC 0.0891 0.2233 0.0925 0.2283 0.0772 0.2217 0.0623 0.2050
Improve 66.2% 15.5% 72.6% 18.1% 44.0% 14.7% 16.2% 6.1%
Table 4: Effect of using clickthrough data only for document ranking.
Query BayesInt(µ = 0, ν = 5.0)
MAP pr@20docs
q2 0.0312 0.1150
q2 + HC 0.0313 0.0950
Improve. 0.3% -17.4%
q3 0.0421 0.1483
q3 + HC 0.0521 0.1820
Improve 23.8% 23.0%
q4 0.0536 0.1930
q4 + HC 0.0620 0.1850
Improve 15.7% -4.1%
Table 6: Effect of using only non-relevant clickthrough data
5.2.4 Additive effect of context information
By comparing the results across Table 1, Table 2 and Table 4,
we can see that the benefit of the query history information and
that of clickthrough information are mostly additive, i.e.,
combining them can achieve better performance than using each alone,
but most improvement has clearly come from the clickthrough
information. In Table 7, we show this effect for the BatchUp method.
5.2.5 Parameter sensitivity
All four models have two parameters to control the relative weights
of HQ, HC , and Qk, though the parameterization is different from
model to model. In this subsection, we study the parameter
sensitivity for BatchUp, which appears to perform better than the
other methods. BatchUp has two parameters µ and ν.
We first look at µ. When µ is set to 0, the query history is not
used at all, and we essentially just use the clickthrough data
combined with the current query. If we increase µ, we will gradually
incorporate more information from the previous queries. In Table 8,
we show how the average precision of BatchUp changes as we vary
µ with ν fixed to 15.0, where the best performance of BatchUp is
achieved. We see that the performance is mostly insensitive to the
change of µ for q3 and q4, but is decreasing as µ increases for q2.
The pattern is also similar when we set ν to other values.
In addition to the fact that q1 is generally worse than q2, q3, and
q4, another possible reason why the sensitivity is lower for q3 and
q4 may be that we generally have more clickthrough data
available for q3 and q4 than for q2, and the dominating influence of the
clickthrough data has made the small differences caused by µ less
visible for q3 and q4.
The best performance is generally achieved when µ is around
2.0, which means that the past query information is as useful as
about 2 words in the current query. Except for q2, there is clearly
some tradeoff between the current query and the previous queries
Query MAP pr@20docs
q2 0.0312 0.1150
q2 + HQ 0.0287 0.0967
Improve. -8.0% -15.9%
q2 + HC 0.0344 0.1167
Improve. 10.3% 1.5%
q2 + HQ + HC 0.0342 0.1100
Improve. 9.6% -4.3%
q3 0.0421 0.1483
q3 + HQ 0.0455 0.1450
Improve 8.1% -2.2%
q3 + HC 0.0513 0.1650
Improve 21.9% 11.3%
q3 + HQ + HC 0.0810 0.2067
Improve 92.4% 39.4%
q4 0.0536 0.1930
q4 + HQ 0.0552 0.1917
Improve 3.0% -0.8%
q4 + HC 0.0623 0.2050
Improve 16.2% 6.1%
q4 + HQ + HC 0.0950 0.2250
Improve 77.2% 16.4%
Table 7: Additive benefit of context information
and using a balanced combination of them achieves better
performance than using each of them alone.
We now turn to the other parameter ν. When ν is set to 0, we
only use the clickthrough data; when ν is set to +∞, we only use
the query history and the current query. With µ set to 2.0, where
the best performance of BatchUp is achieved, we vary ν and show
the results in Table 9. We see that the performance is also not very
sensitive when ν ≤ 30, with the best performance often achieved
at ν = 15. This means that the combined information of query
history and the current query is as useful as about 15 words in the
clickthrough data, indicating that the clickthrough information is
highly valuable.
Overall, these sensitivity results show that BatchUp not only
performs better than other methods, but also is quite robust.
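The estimation details of BatchUp are given earlier in the paper; purely as an illustration of the kind of two-parameter combination being tuned in this subsection, the sketch below mixes a current-query model, a query-history model, and a clickthrough model with weights µ and ν. The simple linear mixing, the dictionary representation of the language models, and the function name are assumptions made for this example, not the authors' exact estimation formulas.

```python
def mix_models(current_query_lm, history_lm, click_lm, mu, nu):
    """Schematic two-parameter combination of contextual evidence.

    All three inputs are unigram language models given as {word: probability}.
    mu controls how much the query-history model is mixed into the current
    query; nu controls how strongly that query side is weighted against the
    clickthrough model.  mu = 0 ignores the query history, nu = 0 keeps only
    the clickthrough model, and a very large nu keeps only the query side,
    mirroring the limiting cases described in the text.
    """
    vocab = set(current_query_lm) | set(history_lm) | set(click_lm)

    # Query side: current query smoothed with the query-history model.
    query_side = {
        w: (current_query_lm.get(w, 0.0) + mu * history_lm.get(w, 0.0)) / (1.0 + mu)
        for w in vocab
    }
    # Final model: query side combined with the clickthrough model.
    return {
        w: (nu * query_side[w] + click_lm.get(w, 0.0)) / (nu + 1.0)
        for w in vocab
    }


q = {"java": 0.5, "map": 0.5}
history = {"hashmap": 0.6, "java": 0.4}
clicks = {"hashmap": 0.3, "class": 0.3, "java": 0.4}
print(mix_models(q, history, clicks, mu=2.0, nu=15.0))
```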
6. CONCLUSIONS AND FUTURE WORK
In this paper, we have explored how to exploit implicit
feedback information, including query history and clickthrough history
within the same search session, to improve information retrieval
performance. Using the KL-divergence retrieval model as the
basis, we proposed and studied four statistical language models for
context-sensitive information retrieval, i.e., FixInt, BayesInt,
OnlineUp and BatchUp. We use TREC AP Data to create a test set
µ 0 1 2 3 4 5 6 7 8 9 10
MAP 0.0386 0.0366 0.0342 0.0315 0.0290 0.0267 0.0250 0.0236 0.0229 0.0223 0.0219
q2 + HQ + HC pr@20 0.1333 0.1233 0.1100 0.1033 0.1017 0.0933 0.0833 0.0767 0.0783 0.0767 0.0750
MAP 0.0805 0.0807 0.0811 0.0814 0.0813 0.0808 0.0804 0.0799 0.0795 0.0790 0.0788
q3 + HQ + HC pr@20 0.210 0.2150 0.2067 0.205 0.2067 0.205 0.2067 0.2067 0.2050 0.2017 0.2000
MAP 0.0929 0.0947 0.0950 0.0940 0.0941 0.0940 0.0942 0.0937 0.0936 0.0932 0.0929
q4 + HQ + HC pr@20 0.2183 0.2217 0.2250 0.2217 0.2233 0.2267 0.2283 0.2333 0.2333 0.2350 0.2333
Table 8: Sensitivity of µ in BatchUp
ν 0 1 2 5 10 15 30 100 300 500
MAP 0.0278 0.0287 0.0296 0.0315 0.0334 0.0342 0.0328 0.0311 0.0296 0.0290
q2 + HQ + HC pr@20 0.0933 0.0950 0.0950 0.1000 0.1050 0.1100 0.1150 0.0983 0.0967 0.0967
MAP 0.0728 0.0739 0.0751 0.0786 0.0809 0.0811 0.0770 0.0634 0.0511 0.0491
q3 + HQ + HC pr@20 0.1917 0.1933 0.1950 0.2100 0.2000 0.2067 0.2017 0.1783 0.1600 0.1550
MAP 0.0895 0.0903 0.0914 0.0932 0.0944 0.0950 0.0919 0.0761 0.0664 0.0625
q4 + HQ + HC pr@20 0.2267 0.2233 0.2283 0.2317 0.2233 0.2250 0.2283 0.2200 0.2067 0.2033
Table 9: Sensitivity of ν in BatchUp
for evaluating implicit feedback models. Experiment results show
that using implicit feedback, especially clickthrough history, can
substantially improve retrieval performance without requiring any
additional user effort.
The current work can be extended in several ways: First, we
have only explored some very simple language models for
incorporating implicit feedback information. It would be interesting to
develop more sophisticated models to better exploit query history
and clickthrough history. For example, we may treat a clicked
summary differently depending on whether the current query is a
generalization or refinement of the previous query. Second, the
proposed models can be implemented in any practical system. We are
currently developing a client-side personalized search agent, which
will incorporate some of the proposed algorithms. We will also
conduct a user study to evaluate the effectiveness of these models in
real web search. Finally, we should further study a general retrieval
framework for sequential decision making in interactive
information retrieval and study how to optimize some of the parameters in
the context-sensitive retrieval models.
7. ACKNOWLEDGMENTS
This material is based in part upon work supported by the
National Science Foundation under award numbers IIS-0347933 and
IIS-0428472. We thank the anonymous reviewers for their useful
comments.
| kl-divergence retrieval model;context-sensitive language;long-term context;interactive retrieval;trec datum set;query expansion;mean average precision;fixed coefficient interpolation;query history information;implicit feedback information;current query;query history;context;retrieval accuracy;relevance feedback;clickthrough information;short-term context;bayesian estimation |
train_H-92 | Improving Web Search Ranking by Incorporating User Behavior Information | We show that incorporating user behavior data can significantly improve the ordering of top results in a real web search setting. We examine alternatives for incorporating feedback into the ranking process and explore the contributions of user feedback compared to other common web search features. We report results of a large scale evaluation over 3,000 queries and 12 million user interactions with a popular web search engine. We show that incorporating implicit feedback can augment other features, improving the accuracy of a competitive web search ranking algorithm by as much as 31% relative to the original performance. | 1. INTRODUCTION
Millions of users interact with search engines daily. They issue
queries, follow some of the links in the results, click on ads, spend
time on pages, reformulate their queries, and perform other
actions. These interactions can serve as a valuable source of
information for tuning and improving web search result ranking
and can complement more costly explicit judgments.
Implicit relevance feedback for ranking and personalization has
become an active area of research. Recent work by Joachims and
others exploring implicit feedback in controlled environments
has shown the value of incorporating implicit feedback into the
ranking process. Our motivation for this work is to understand
how implicit feedback can be used in a large-scale operational
environment to improve retrieval. How does it compare to and
complement evidence from page content, anchor text, or link-based
features such as inlinks or PageRank? While it is intuitive that
user interactions with the web search engine should reveal at least
some information that could be used for ranking, estimating user
preferences in real web search settings is a challenging problem,
since real user interactions tend to be more noisy than
commonly assumed in the controlled settings of previous studies.
Our paper explores whether implicit feedback can be helpful in
realistic environments, where user feedback can be noisy (or
adversarial) and a web search engine already uses hundreds of
features and is heavily tuned. To this end, we explore different
approaches for ranking web search results using real user behavior
obtained as part of normal interactions with the web search
engine.
The specific contributions of this paper include:
• Analysis of alternatives for incorporating user behavior
into web search ranking (Section 3).
• An application of a robust implicit feedback model
derived from mining millions of user interactions with a
major web search engine (Section 4).
• A large scale evaluation over real user queries and search
results, showing significant improvements derived from
incorporating user feedback (Section 6).
We summarize our findings and discuss extensions to the current
work in Section 7, which concludes the paper.
2. BACKGROUND AND RELATED WORK
Ranking search results is a fundamental problem in information
retrieval. Most common approaches primarily focus on similarity
of query and a page, as well as the overall page quality [3,4,24].
However, with increasing popularity of search engines, implicit
feedback (i.e., the actions users take when interacting with the
search engine) can be used to improve the rankings.
Implicit relevance measures have been studied by several research
groups. An overview of implicit measures is compiled in Kelly and
Teevan [14]. This research, while developing valuable insights
into implicit relevance measures, was not applied to improve the
ranking of web search results in realistic settings.
Closely related to our work, Joachims [11] collected implicit
measures in place of explicit measures, introducing a technique
based entirely on clickthrough data to learn ranking functions. Fox
et al. [8] explored the relationship between implicit and explicit
measures in Web search, and developed Bayesian models to
correlate implicit measures and explicit relevance judgments for
both individual queries and search sessions. This work considered
a wide range of user behaviors (e.g., dwell time, scroll time,
reformulation patterns) in addition to the popular clickthrough
behavior. However, the modeling effort was aimed at predicting
explicit relevance judgments from implicit user actions and not
specifically at learning ranking functions. Other studies of user
behavior in web search include Pharo and Järvelin [19], but their
findings were not directly applied to improve ranking.
More recently, Joachims et al. [12] presented an empirical
evaluation of interpreting clickthrough evidence. By performing
eye tracking studies and correlating predictions of their strategies
with explicit ratings, the authors showed that it is possible to
accurately interpret clickthroughs in a controlled, laboratory
setting. Unfortunately, the extent to which previous research
applies to real-world web search is unclear. At the same time,
while recent work (e.g., [26]) on using clickthrough information
for improving web search ranking is promising, it captures only
one aspect of the user interactions with web search engines.
We build on existing research to develop robust user behavior
interpretation techniques for the real web search setting. Instead of
treating each user as a reliable expert, we aggregate information
from multiple, unreliable, user search session traces, as we
describe in the next two sections.
3. INCORPORATING IMPLICIT
FEEDBACK
We consider two complementary approaches to ranking with
implicit feedback: (1) treating implicit feedback as independent
evidence for ranking results, and (2) integrating implicit feedback
features directly into the ranking algorithm. We describe the two
general ranking approaches next. The specific implicit feedback
features are described in Section 4, and the algorithms for
interpreting and incorporating implicit feedback are described in
Section 5.
3.1 Implicit Feedback as Independent
Evidence
The general approach is to re-rank the results obtained by a web
search engine according to observed clickthrough and other user
interactions for the query in previous search sessions. Each result
is assigned a score according to expected relevance/user
satisfaction based on previous interactions, resulting in some
preference ordering based on user interactions alone.
While there has been significant work on merging multiple
rankings, we adopt a simple and robust approach of ignoring the
original rankers' scores, and instead simply merge the rank orders.
The main reason for ignoring the original scores is that since the
feature spaces and learning algorithms are different, the scores are
not directly comparable, and re-normalization tends to remove the
benefit of incorporating classifier scores.
We experimented with a variety of merging functions on the
development set of queries (and using a set of interactions from a
different time period from final evaluation sets). We found that a
simple rank merging heuristic combination works well, and is
robust to variations in score values from original rankers. For a
given query q, the implicit score ISd is computed for each result d
from available user interaction features, resulting in the implicit
rank Id for each result. We compute a merged score SM(d) for d by
combining the ranks obtained from implicit feedback, Id with the
original rank of d, Od:
SM(d, Id, Od, wI) = wI / (Id + 1) + 1 / (Od + 1),   if implicit feedback exists for d
SM(d, Id, Od, wI) = 1 / (Od + 1),                    otherwise
where the weight wI is a heuristically tuned scaling factor
representing the relative importance of the implicit feedback.
The query results are ordered by decreasing values of SM to
produce the final ranking. One special case of this model arises
when setting wI to a very large value, effectively forcing clicked
results to be ranked higher than un-clicked results - an intuitive
and effective heuristic that we will use as a baseline. Applying
more sophisticated classifier and ranker combination algorithms
may result in additional improvements, and is a promising
direction for future work.
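A minimal sketch of this rank-merging heuristic, following the formula above; the helper names and the toy example are illustrative, and the default weight of 3 simply reflects the value used later for the feedback reranker, with a very large weight reproducing the clicked-above-unclicked baseline.

```python
def merged_score(original_rank, implicit_rank=None, w_implicit=3.0):
    """Merged score SM for one result.

    Ranks are 0-based (0 = top result); implicit_rank is None when no
    implicit feedback exists for the result, which leaves only the original
    ranker's contribution.
    """
    if implicit_rank is None:
        return 1.0 / (original_rank + 1)
    return w_implicit / (implicit_rank + 1) + 1.0 / (original_rank + 1)


def rerank(results, implicit_ranks, w_implicit=3.0):
    """Order results by decreasing merged score.

    results: doc ids in the original ranker's order;
    implicit_ranks: {doc_id: implicit rank} for results with user interactions.
    """
    scored = [
        (merged_score(rank, implicit_ranks.get(doc), w_implicit), doc)
        for rank, doc in enumerate(results)
    ]
    return [doc for _, doc in sorted(scored, key=lambda pair: -pair[0])]


# A very large w_implicit reproduces the clicked-above-unclicked baseline.
print(rerank(["d1", "d2", "d3"], {"d3": 0}, w_implicit=1000))
```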
The approach above assumes that there are no interactions
between the underlying features producing the original web search
ranking and the implicit feedback features. We now relax this
assumption by integrating implicit feedback features directly into
the ranking process.
3.2 Ranking with Implicit Feedback Features
Modern web search engines rank results based on a large number
of features, including content-based features (i.e., how closely a
query matches the text or title or anchor text of the document), and
query-independent page quality features (e.g., PageRank of the
document or the domain). In most cases, automatic (or
semi-automatic) methods are developed for tuning the specific ranking
function that combines these feature values.
Hence, a natural approach is to incorporate implicit feedback
features directly as features for the ranking algorithm. During
training or tuning, the ranker can be tuned as before but with
additional features. At runtime, the search engine would fetch the
implicit feedback features associated with each query-result URL
pair. This model requires a ranking algorithm to be robust to
missing values: more than 50% of queries to web search engines
are unique, with no previous implicit feedback available. We now
describe such a ranker that we used to learn over the combined
feature sets including implicit feedback.
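To illustrate what robustness to missing values can look like in practice, the sketch below assembles a combined feature vector for a query-URL pair, falling back to a sentinel value when no interaction data exists for that pair. The sentinel choice, the store layout, and the example values are assumptions, and only a subset of the Table 4.1 features is listed.

```python
IMPLICIT_FEATURES = ["ClickFrequency", "ClickProbability", "ClickDeviation",
                     "TimeOnPage", "DwellTimeDeviation"]


def combined_features(ranker_features, implicit_store, query, url, missing=-1.0):
    """Append implicit-feedback features to the ranker's existing features.

    ranker_features: content/link features already used by the ranking
    algorithm for this query-URL pair;
    implicit_store: {(query, url): {feature_name: value}} aggregated from the
    interaction logs.  Unique queries with no logged interactions simply get
    the sentinel value, so the learner must be robust to it.
    """
    implicit = implicit_store.get((query, url), {})
    features = dict(ranker_features)
    for name in IMPLICIT_FEATURES:
        features[name] = implicit.get(name, missing)
    return features


store = {("seattle ferry", "http://example.org/a"): {"ClickFrequency": 12.0,
                                                     "ClickProbability": 0.4}}
print(combined_features({"BM25F": 7.1}, store, "seattle ferry", "http://example.org/a"))
print(combined_features({"BM25F": 3.2}, store, "rare unique query", "http://example.org/b"))
```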
3.3 Learning to Rank Web Search Results
A key aspect of our approach is exploiting recent advances in
machine learning, namely trainable ranking algorithms for web
search and information retrieval (e.g., [5, 11] and classical results
reviewed in [3]). In our setting, explicit human relevance
judgments (labels) are available for a set of web search queries
and results. Hence, an attractive choice is to use a supervised
machine learning technique to learn a ranking function that best
predicts relevance judgments.
RankNet is one such algorithm. It is a neural net tuning algorithm
that optimizes feature weights to best match explicitly provided
pairwise user preferences. While the specific training algorithms
used by RankNet are beyond the scope of this paper, it is
described in detail in [5] and includes extensive evaluation and
comparison with other ranking methods. An attractive feature of
RankNet is both train- and run-time efficiency - runtime ranking
can be quickly computed and can scale to the web, and training
can be done over thousands of queries and associated judged
results.
We use a 2-layer implementation of RankNet in order to model
non-linear relationships between features. Furthermore, RankNet
can learn with many (differentiable) cost functions, and hence can
automatically learn a ranking function from human-provided
labels, an attractive alternative to heuristic feature combination
techniques. Hence, we will also use RankNet as a generic ranker
to explore the contribution of implicit feedback for different
ranking alternatives.
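RankNet itself is described in [5]; as a reminder of the kind of objective such pairwise rankers optimize, the fragment below evaluates the standard pairwise cross-entropy cost for a pair of scores where the first document is known to be preferred. This is a simplified, generic illustration, not the implementation used in the experiments.

```python
import math


def pairwise_cost(score_preferred, score_other):
    """Pairwise logistic cost log(1 + exp(-(s_i - s_j))) for a pair in which the
    first result is preferred, written in a numerically stable form; the cost is
    small when the preferred result already scores higher."""
    margin = score_preferred - score_other
    return math.log1p(math.exp(-abs(margin))) + max(0.0, -margin)


print(pairwise_cost(2.0, 0.5))  # correctly ordered pair -> low cost
print(pairwise_cost(0.5, 2.0))  # mis-ordered pair -> high cost
```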
4. IMPLICIT USER FEEDBACK MODEL
Our goal is to accurately interpret noisy user feedback obtained by
tracing user interactions with the search engine. Interpreting
implicit feedback in a real web search setting is not an easy task.
We characterize this problem in detail in [1], where we motivate
and evaluate a wide variety of models of implicit user activities.
The general approach is to represent user actions for each search
result as a vector of features, and then train a ranker on these
features to discover feature values indicative of relevant (and
nonrelevant) search results. We first briefly summarize our features
and model, and the learning approach (Section 4.2) in order to
provide sufficient information to replicate our ranking methods
and the subsequent experiments.
4.1 Representing User Actions as Features
We model observed web search behaviors as a combination of a
"background" component (i.e., query- and relevance-independent
noise in user behavior, including positional biases with result
interactions), and a "relevance" component (i.e., query-specific
behavior indicative of relevance of a result to a query). We design
our features to take advantage of aggregated user behavior. The
feature set is comprised of directly observed features (computed
directly from observations for each query), as well as
query-specific derived features, computed as the deviation from the
overall query-independent distribution of values for the
corresponding directly observed feature values.
The features used to represent user interactions with web search
results are summarized in Table 4.1. This information was
obtained via opt-in client-side instrumentation from users of a
major web search engine.
We include the traditional implicit feedback features such as
clickthrough counts for the results, as well as our novel derived
features such as the deviation of the observed clickthrough number
for a given query-URL pair from the expected number of clicks on
a result in the given position. We also model the browsing
behavior after a result was clicked - e.g., the average page dwell
time for a given query-URL pair, as well as its deviation from the
expected (average) dwell time. Furthermore, the feature set was
designed to provide essential information about the user
experience to make feedback interpretation robust. For example,
web search users can often determine whether a result is relevant
by looking at the result title, URL, and summary - in many cases,
looking at the original document is not necessary. To model this
aspect of user experience we include features such as overlap in
words in title and words in query (TitleOverlap) and the fraction
of words shared by the query and the result summary.
Clickthrough features
Position Position of the URL in Current ranking
ClickFrequency Number of clicks for this query, URL pair
ClickProbability Probability of a click for this query and URL
ClickDeviation Deviation from expected click probability
IsNextClicked 1 if clicked on next position, 0 otherwise
IsPreviousClicked 1 if clicked on previous position, 0 otherwise
IsClickAbove 1 if there is a click above, 0 otherwise
IsClickBelow 1 if there is click below, 0 otherwise
Browsing features
TimeOnPage Page dwell time
CumulativeTimeOnPage
Cumulative time for all subsequent pages after
search
TimeOnDomain Cumulative dwell time for this domain
TimeOnShortUrl Cumulative time on URL prefix, no parameters
IsFollowedLink 1 if followed link to result, 0 otherwise
IsExactUrlMatch 0 if aggressive normalization used, 1 otherwise
IsRedirected 1 if initial URL same as final URL, 0 otherwise
IsPathFromSearch 1 if only followed links after query, 0 otherwise
ClicksFromSearch Number of hops to reach page from query
AverageDwellTime Average time on page for this query
DwellTimeDeviation Deviation from average dwell time on page
CumulativeDeviation Deviation from average cumulative dwell time
DomainDeviation Deviation from average dwell time on domain
Query-text features
TitleOverlap Words shared between query and title
SummaryOverlap Words shared between query and snippet
QueryURLOverlap Words shared between query and URL
QueryDomainOverlap Words shared between query and URL domain
QueryLength Number of tokens in query
QueryNextOverlap Fraction of words shared with next query
Table 4.1: Some features used to represent post-search
navigation history for a given query and search result URL.
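As a concrete (and deliberately simplified) illustration of two of the features in Table 4.1, the sketch below computes TitleOverlap and ClickDeviation; the whitespace tokenization and the toy position prior are assumptions made for the example, not the instrumented pipeline described in the text.

```python
def title_overlap(query, title):
    """TitleOverlap: number of words shared between the query and the result title."""
    return len(set(query.lower().split()) & set(title.lower().split()))


def click_deviation(observed_click_prob, position, position_prior):
    """ClickDeviation: observed click probability for a query-URL pair minus the
    query-independent expectation for that result position (the background part)."""
    return observed_click_prob - position_prior.get(position, 0.0)


# Toy background model of expected click probability by result position;
# in the real pipeline this would be estimated from the aggregated opt-in logs.
position_prior = {1: 0.42, 2: 0.17, 3: 0.10, 4: 0.07, 5: 0.05}

print(title_overlap("seattle ferry schedules", "Washington State Ferry Schedules"))
print(click_deviation(observed_click_prob=0.30, position=3, position_prior=position_prior))
```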
Having described our feature set, we briefly review our general
method for deriving a user behavior model.
4.2 Deriving a User Feedback Model
To learn to interpret the observed user behavior, we correlate user
actions (i.e., the features in Table 4.1 representing the actions)
with the explicit user judgments for a set of training queries. We
find all the instances in our session logs where these queries were
submitted to the search engine, and aggregate the user behavior
features for all search sessions involving these queries.
Each observed query-URL pair is represented by the features in
Table 4.1, with values averaged over all search sessions, and
assigned one of six possible relevance labels, ranging from
Perfect to Bad, as assigned by explicit relevance judgments.
These labeled feature vectors are used as input to the RankNet
training algorithm (Section 3.3) which produces a trained user
behavior model. This approach is particularly attractive as it does
not require heuristics beyond feature engineering. The resulting
user behavior model is used to help rank web search
results, either directly or in combination with other features, as described
below.
5. EXPERIMENTAL SETUP
The ultimate goal of incorporating implicit feedback into ranking
is to improve the relevance of the returned web search results.
Hence, we compare the ranking methods over a large set of judged
queries with explicit relevance labels provided by human judges.
In order for the evaluation to be realistic we obtained a random
sample of queries from web search logs of a major search engine,
with associated results and traces for user actions. We describe
this dataset in detail next. Our metrics are described in Section 5.2
that we use to evaluate the ranking alternatives, listed in Section
5.3 in the experiments of Section 6.
5.1 Datasets
We compared our ranking methods over a random sample of 3,000
queries from the search engine query logs. The queries were
drawn from the logs uniformly at random by token without
replacement, resulting in a query sample representative of the
overall query distribution. On average, 30 results were explicitly
labeled by human judges using a six point scale ranging from
Perfect down to Bad. Overall, there were over 83,000 results
with explicit relevance judgments. In order to compute various
statistics, documents with label Good or better will be
considered relevant, and with lower labels to be non-relevant.
Note that the experiments were performed over the results already
highly ranked by a web search engine, which corresponds to a
typical user experience which is limited to the small number of the
highly ranked results for a typical web search query.
The user interactions were collected over a period of 8 weeks
using voluntary opt-in information. In total, over 1.2 million
unique queries were instrumented, resulting in over 12 million
individual interactions with the search engine. The data consisted
of user interactions with the web search engine (e.g., clicking on a
result link, going back to search results, etc.) performed after a
query was submitted. These actions were aggregated across users
and search sessions and converted to features in Table 4.1.
To create the training, validation, and test query sets, we created
three different random splits of 1,500 training, 500 validation, and
1000 test queries. The splits were done randomly by query, so that
there was no overlap in training, validation, and test queries.
5.2 Evaluation Metrics
We evaluate the ranking algorithms over a range of accepted
information retrieval metrics, namely Precision at K (P(K)),
Normalized Discounted Cumulative Gain (NDCG), and Mean
Average Precision (MAP). Each metric focuses on a different
aspect of system performance, as we describe below (a short
computational sketch follows the list of metrics).
• Precision at K: As the most intuitive metric, P(K) reports the
fraction of documents ranked in the top K results that are
labeled as relevant. In our setting, we require a relevant
document to be labeled Good or higher. The position of
relevant documents within the top K is irrelevant, and hence
this metric measures overall user satisfaction with the top K
results.
• NDCG at K: NDCG is a retrieval measure devised specifically
for web search evaluation [10]. For a given query q, the ranked
results are examined from the top ranked down, and the NDCG
computed as:
Nq = Mq * Σ_{j=1..K} (2^r(j) - 1) / log(1 + j)
Where Mq is a normalization constant calculated so that a
perfect ordering would obtain NDCG of 1; and each r(j) is an
integer relevance label (0=Bad and 5=Perfect) of result
returned at position j. Note that unlabeled and Bad documents
do not contribute to the sum, but will reduce NDCG for the
query pushing down the relevant labeled documents, reducing
their contributions. NDCG is well suited to web search
evaluation, as it rewards relevant documents in the top ranked
results more heavily than those ranked lower.
• MAP: Average precision for each query is defined as the mean
of the precision at K values computed after each relevant
document was retrieved. The final MAP value is defined as the
mean of average precisions of all queries in the test set. This
metric is the most commonly used single-value summary of a
run over a set of queries.
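The metrics above can be computed roughly as in the following sketch. It assumes integer labels 0 (Bad) to 5 (Perfect), a threshold of 3 for "Good or better" (the exact label-to-integer mapping is not stated in the text), and a base-2 logarithm in the NDCG discount (the base is also unspecified); these choices are illustrative conventions rather than details from the evaluation setup.

```python
import math


def precision_at_k(labels, k, rel_threshold=3):
    """P(K): fraction of the top K results labeled relevant (Good or better)."""
    return sum(1 for r in labels[:k] if r >= rel_threshold) / k


def ndcg_at_k(labels, k):
    """NDCG at K with gain (2^r - 1) and discount log(1 + j), as in the formula above.
    The normalization constant is taken from the ideal reordering of these labels."""
    def dcg(ordered):
        return sum((2 ** r - 1) / math.log2(1 + j)
                   for j, r in enumerate(ordered[:k], start=1))
    ideal = dcg(sorted(labels, reverse=True))
    return dcg(labels) / ideal if ideal > 0 else 0.0


def mean_average_precision(label_lists, rel_threshold=3):
    """MAP: mean over queries of the precision values taken at each relevant result."""
    def ap(labels):
        hits, precisions = 0, []
        for j, r in enumerate(labels, start=1):
            if r >= rel_threshold:
                hits += 1
                precisions.append(hits / j)
        return sum(precisions) / hits if hits else 0.0
    return sum(ap(labels) for labels in label_lists) / len(label_lists)


labels = [5, 0, 3, 1, 4]                     # labels of one query's ranked results
print(precision_at_k(labels, 3))             # 2/3 of the top 3 are Good or better
print(round(ndcg_at_k(labels, 5), 3))
print(round(mean_average_precision([labels]), 3))
```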
5.3 Ranking Methods Compared
Recall that our goal is to quantify the effectiveness of implicit
behavior for real web search. One dimension is to compare the
utility of implicit feedback with other information available to a
web search engine. Specifically, we compare effectiveness of
implicit user behaviors with content-based matching, static page
quality features, and combinations of all features.
• BM25F: As a strong web search baseline we used the BM25F
scoring, which was used in one of the best performing systems
in the TREC 2004 Web track [23,27]. BM25F and its variants
have been extensively described and evaluated in IR literature,
and hence serve as a strong, reproducible baseline. The BM25F
variant we used for our experiments computes separate match
scores for each field for a result document (e.g., body text,
title, and anchor text), and incorporates query-independent
link-based information (e.g., PageRank, ClickDistance, and URL
depth). The scoring function and field-specific tuning is
described in detail in [23]. Note that BM25F does not directly
consider explicit or implicit feedback for tuning.
• RN: The ranking produced by a neural net ranker (RankNet,
described in Section 3.3) that learns to rank web search results
by incorporating BM25F and a large number of additional static
and dynamic features describing each search result. This system
automatically learns weights for all features (including the
BM25F score for a document) based on explicit human labels
for a large set of queries. A system incorporating an
implementation of RankNet is currently in use by a major
search engine and can be considered representative of the state
of the art in web search.
• BM25F-RerankCT: The ranking produced by incorporating
clickthrough statistics to reorder web search results ranked by
BM25F above. Clickthrough is a particularly important special
case of implicit feedback, and has been shown to correlate with
result relevance. This is a special case of the ranking method in
Section 3.1, with the weight wI set to 1000 and the implicit ranking Id
determined by the number of clicks on the result corresponding to d.
In effect, this ranking brings to the top all returned web search
results with at least one click (and orders them in decreasing
order by number of clicks). The relative ranking of the
remainder of results is unchanged and they are inserted below
all clicked results. This method serves as our baseline implicit
feedback reranking method.
• BM25F-RerankAll: The ranking produced by reordering the
BM25F results using all user behavior features (Section 4).
This method learns a model of user preferences by correlating
feature values with explicit relevance labels using the RankNet
neural net algorithm (Section 4.2). At runtime, for a given
query the implicit score Ir is computed for each result r with
available user interaction features, and the implicit ranking is
produced. The merged ranking is computed as described in
Section 3.1. Based on the experiments over the development set
we fix the value of wI to 3 (the effect of the wI parameter for
this ranker turned out to be negligible).
• BM25F+All: Ranking derived by training the RankNet
(Section 3.3) learner over the feature set of the BM25F score
as well as all implicit feedback features (Section 3.2). We used
the 2-layer implementation of RankNet [5] trained on the
queries and labels in the training and validation sets.
• RN+All: Ranking derived by training the 2-layer RankNet
ranking algorithm (Section 3.3) over the union of all content,
dynamic, and implicit feedback features (i.e., all of the features
described above as well as all of the new implicit feedback
features we introduced).
The ranking methods above span the range of the information used
for ranking, from not using the implicit or explicit feedback at all
(i.e., BM25F) to a modern web search engine using hundreds of
features and tuned on explicit judgments (RN). As we will show
next, incorporating user behavior into these ranking systems
dramatically improves the relevance of the returned documents.
6. EXPERIMENTAL RESULTS
Implicit feedback for web search ranking can be exploited in a
number of ways. We compare alternative methods of exploiting
implicit feedback, both by re-ranking the top results (i.e., the
BM25F-RerankCT and BM25F-RerankAll methods that reorder
BM25F results), as well as by integrating the implicit features
directly into the ranking process (i.e., the RN+All and
BM25F+All methods which learn to rank results over the implicit
feedback and other features). We compare our methods over strong
baselines (BM25F and RN) over the NDCG, Precision at K, and
MAP measures defined in Section 5.2. The results were averaged
over three random splits of the overall dataset. Each split
contained 1500 training, 500 validation, and 1000 test queries, all
query sets disjoint. We first present the results over all 1000 test
queries (i.e., including queries for which there are no implicit
measures so we use the original web rankings). We then drill
down to examine the effects on reranking for the attempted
queries in more detail, analyzing where implicit feedback proved
most beneficial.
We first experimented with different methods of re-ranking the
output of the BM25F search results. Figures 6.1 and 6.2 report
NDCG and Precision for BM25F, as well as for the strategies
reranking results with user feedback (Section 3.1). Incorporating
all user feedback (either in reranking framework or as features to
the learner directly) results in significant improvements (using
two-tailed t-test with p=0.01) over both the original BM25F
ranking as well as over reranking with clickthrough alone. The
improvement is consistent across the top 10 results and largest for
the top result: NDCG at 1 for BM25F+All is 0.622 compared to
0.518 of the original results, and precision at 1 similarly increases
from 0.5 to 0.63. Based on these results we will use the direct
feature combination (i.e., BM25F+All) ranker for subsequent
comparisons involving implicit feedback.
Figure 6.1: NDCG at K for BM25F, BM25F-RerankCT,
BM25F-Rerank-All, and BM25F+All for varying K
Figure 6.2: Precision at K for BM25F, BM25F-RerankCT,
BM25F-Rerank-All, and BM25F+All for varying K
Interestingly, using clickthrough alone, while giving significant
benefit over the original BM25F ranking, is not as effective as
considering the full set of features in Table 4.1. While we analyze
user behavior (and most effective component features) in a
separate paper [1], it is worthwhile to give a concrete example of
the kind of noise inherent in real user feedback in web search
setting.
Figure 6.3: Relative clickthrough frequency for queries with
varying Position of Top Relevant result (PTR).
If users considered only the relevance of a result to their query,
they would click on the topmost relevant results. Unfortunately, as
Joachims and others have shown, presentation also influences
which results users click on quite dramatically. Users often click
on results above the relevant one presumably because the short
summaries do not provide enough information to make accurate
relevance assessments and they have learned that on average
top-ranked items are relevant. Figure 6.3 shows relative clickthrough
frequencies for queries with known relevant items at positions
other than the first position; the position of the top relevant result
(PTR) ranges from 2-10 in the figure. For example, for queries
with first relevant result at position 5 (PTR=5), there are more
clicks on the non-relevant results in higher ranked positions than
on the first relevant result at position 5. As we will see, learning
over a richer behavior feature set results in substantial accuracy
improvement over clickthrough alone.
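The position bias just described is what the deviation features in Table 4.1 are designed to factor out; the sketch below shows one simple way of normalizing raw clickthrough by a query-independent position prior, so that a heavily clicked low-ranked result stands out. All numbers here are made up for illustration.

```python
def position_corrected_ctr(clicks_by_position, impressions_by_position, background_ctr):
    """Observed clickthrough rate divided by the query-independent rate expected
    at the same position; values above 1 mean more clicks than position alone
    would predict, which is the signal the deviation features try to isolate."""
    corrected = {}
    for pos, clicks in clicks_by_position.items():
        impressions = impressions_by_position.get(pos, 0)
        expected = background_ctr.get(pos, 0.0)
        if impressions and expected:
            corrected[pos] = (clicks / impressions) / expected
    return corrected


# Illustrative numbers only: a query whose first relevant result sits at position 5.
background_ctr = {1: 0.42, 2: 0.17, 3: 0.10, 4: 0.07, 5: 0.05}
clicks = {1: 30, 2: 12, 3: 4, 4: 2, 5: 18}
impressions = {pos: 100 for pos in range(1, 6)}
print(position_corrected_ctr(clicks, impressions, background_ctr))
```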
We now consider incorporating user behavior into a much richer
feature set, RN (Section 5.3) used by a major web search engine.
RN incorporates BM25F, link-based features, and hundreds of
other features. Figure 6.4 reports NDCG at K and Figure 6.5
reports Precision at K. Interestingly, while the original RN
rankings are significantly more accurate than BM25F alone,
incorporating implicit feedback features (BM25F+All) results in a
ranking that significantly outperforms the original RN rankings. In
other words, implicit feedback incorporates sufficient information
to replace the hundreds of other features available to the RankNet
learner trained on the RN feature set.
Figure 6.4: NDCG at K for BM25F, BM25F+All, RN, and
RN+All for varying K
Furthermore, enriching the RN features with the implicit feedback set
exhibits significant gain on all measures, allowing RN+All to
outperform all other methods. This demonstrates the
complementary nature of implicit feedback with other features
available to a state of the art web search engine.
Figure 6.5: Precision at K for BM25F, BM25F+All, RN, and
RN+All for varying K
We summarize the performance of the different ranking methods
in Table 6.1. We report the Mean Average Precision (MAP) score
for each system. While not intuitive to interpret, MAP allows
quantitative comparison on a single metric. The gains marked with
* are significant at the p=0.01 level using a two-tailed t-test.
Method             MAP    Gain    P(1)   Gain
BM25F              0.184  -       0.503  -
BM25F-Rerank-CT    0.215  0.031*  0.577  0.073*
BM25F-Rerank-All   0.218  0.003   0.605  0.028*
BM25F+All          0.222  0.004   0.620  0.015*
RN                 0.215  -       0.597  -
RN+All             0.248  0.033*  0.629  0.032*
Table 6.1: Mean Average Precision (MAP) and Precision at 1 (P(1)) for all strategies.
So far we reported results averaged across all queries in the test
set. Unfortunately, less than half had sufficient interactions to
attempt reranking. Out of the 1000 queries in the test set, between 46%
and 49%, depending on the train-test split, had sufficient
interaction information to make predictions (i.e., there was at least
1 search session in which at least 1 result URL was clicked on by
the user). This is not surprising: web search is heavy-tailed, and
there are many unique queries. We now consider the performance
on the queries for which user interactions were available. Figure
6.6 reports NDCG for the subset of the test queries with the
implicit feedback features. The gains at top 1 are dramatic. The
NDCG at 1 of BM25F+All increases from 0.6 to 0.75 (a 31%
relative gain), achieving performance comparable to RN+All
operating over a much richer feature set.
Figure 6.6: NDCG at K for BM25F, BM25F+All, RN, and
RN+All on test queries with user interactions
Similarly, gains on precision at top 1 are substantial (Figure 6.7),
and are likely to be apparent to web search users. When implicit
feedback is available, the BM25F+All system returns a relevant
document at top 1 almost 70% of the time, compared to 53% of the
time when implicit feedback is not considered by the original
BM25F system.
Figure 6.7: Precision at K for BM25F, BM25F+All, RN, and
RN+All on test queries with user interactions
We summarize the results on the MAP measure for attempted
queries in Table 6.2. MAP improvements are both substantial and
significant, with improvements over the BM25F ranker most
pronounced.
Method      MAP    Gain          P(1)   Gain
RN          0.269  -             0.632  -
RN+All      0.321  0.051 (19%)   0.693  0.061 (10%)
BM25F       0.236  -             0.525  -
BM25F+All   0.292  0.056 (24%)   0.687  0.162 (31%)
Table 6.2: Mean Average Precision (MAP) on attempted
queries for best performing methods
We now analyze the cases where implicit feedback was shown
most helpful. Figure 6.8 reports the MAP improvements over the
baseline BM25F run for each query with MAP under 0.6. Note
that most of the improvement is for poorly performing queries
(i.e., MAP < 0.1). Interestingly, incorporating user behavior
information degrades accuracy for queries with high original MAP
score. One possible explanation is that these easy queries tend
to be navigational (i.e., having a single, highly-ranked most
appropriate answer), and user interactions with lower-ranked
results may indicate divergent information needs that are better
served by the less popular results (with correspondingly poor
overall relevance ratings).
Figure 6.8: Gain of BM25F+All over original BM25F ranking
To summarize our experimental results, incorporating implicit
feedback in a real web search setting resulted in significant
improvements over the original rankings, using both BM25F and
RN baselines. Our rich set of implicit features, such as time on
page and deviations from the average behavior, provides
advantages over using clickthrough alone as an indicator of
interest. Furthermore, incorporating implicit feedback features
directly into the learned ranking function is more effective than
using implicit feedback for reranking. The improvements observed
over large test sets of queries (1,000 total, between 466 and 495
with implicit feedback available) are both substantial and
statistically significant.
7. CONCLUSIONS AND FUTURE WORK
In this paper we explored the utility of incorporating noisy implicit
feedback obtained in a real web search setting to improve web
search ranking. We performed a large-scale evaluation over 3,000
queries and more than 12 million user interactions with a major
search engine, establishing the utility of incorporating noisy
implicit feedback to improve web search relevance.
We compared two alternatives of incorporating implicit feedback
into the search process, namely reranking with implicit feedback
and incorporating implicit feedback features directly into the
trained ranking function. Our experiments showed significant
improvement over methods that do not consider implicit feedback.
The gains are particularly dramatic for the top K=1 result in the
final ranking, with precision improvements as high as 31%, and
the gains are substantial for all values of K. Our experiments
showed that implicit user feedback can further improve web
search performance, when incorporated directly with popular
content- and link-based features.
Interestingly, implicit feedback is particularly valuable for queries
with poor original ranking of results (e.g., MAP lower than 0.1).
One promising direction for future work is to apply recent research
on automatically predicting query difficulty, and only attempt to
incorporate implicit feedback for the difficult queries. As
another research direction we are exploring methods for extending
our predictions to the previously unseen queries (e.g., query
clustering), which should further improve the web search
experience of users.
ACKNOWLEDGMENTS
We thank Chris Burges and Matt Richardson for an
implementation of RankNet for our experiments. We also thank
Robert Ragno for his valuable suggestions and many discussions.
8. REFERENCES
[1] E. Agichtein, E. Brill, S. Dumais, and R.Ragno, Learning
User Interaction Models for Predicting Web Search Result
Preferences. In Proceedings of the ACM Conference on
Research and Development on Information Retrieval (SIGIR),
2006
[2] J. Allan, HARD Track Overview in TREC 2003, High
Accuracy Retrieval from Documents, 2003
[3] R. Baeza-Yates and B. Ribeiro-Neto, Modern Information
Retrieval, Addison-Wesley, 1999.
[4] S. Brin and L. Page, The Anatomy of a Large-scale
Hypertextual Web Search Engine, in Proceedings of WWW,
1997
[5] C.J.C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds,
N. Hamilton, G. Hullender, Learning to Rank using Gradient
Descent, in Proceedings of the International Conference on
Machine Learning, 2005
[6] D.M. Chickering, The WinMine Toolkit, Microsoft Technical
Report MSR-TR-2002-103, 2002
[7] M. Claypool, D. Brown, P. Lee and M. Waseda. Inferring
user interest. IEEE Internet Computing. 2001
[8] S. Fox, K. Karnawat, M. Mydland, S. T. Dumais and T.
White. Evaluating implicit measures to improve the search
experience. In ACM Transactions on Information Systems,
2005
[9] J. Goecks and J. Shavlick. Learning users" interests by
unobtrusively observing their normal behavior. In
Proceedings of the IJCAI Workshop on Machine Learning for
Information Filtering. 1999.
[10] K Jarvelin and J. Kekalainen. IR evaluation methods for
retrieving highly relevant documents. In Proceedings of the
ACM Conference on Research and Development on
Information Retrieval (SIGIR), 2000
[11] T. Joachims, Optimizing Search Engines Using Clickthrough
Data. In Proceedings of the ACM Conference on Knowledge
Discovery and Datamining (SIGKDD), 2002
[12] T. Joachims, L. Granka, B. Pang, H. Hembrooke, and G. Gay,
Accurately Interpreting Clickthrough Data as Implicit
Feedback, Proceedings of the ACM Conference on Research
and Development on Information Retrieval (SIGIR), 2005
[13] T. Joachims, Making Large-Scale SVM Learning Practical.
Advances in Kernel Methods, in Support Vector Learning,
MIT Press, 1999
[14] D. Kelly and J. Teevan, Implicit feedback for inferring user
preference: A bibliography. In SIGIR Forum, 2003
[15] J. Konstan, B. Miller, D. Maltz, J. Herlocker, L. Gordon, and
J. Riedl. GroupLens: Applying collaborative filtering to
usenet news. In Communications of ACM, 1997.
[16] M. Morita, and Y. Shinoda, Information filtering based on
user behavior analysis and best match text retrieval.
Proceedings of the ACM Conference on Research and
Development on Information Retrieval (SIGIR), 1994
[17] D. Oard and J. Kim. Implicit feedback for recommender
systems. In Proceedings of the AAAI Workshop on
Recommender Systems. 1998
[18] D. Oard and J. Kim. Modeling information content using
observable behavior. In Proceedings of the 64th Annual
Meeting of the American Society for Information Science and
Technology. 2001
[19] N. Pharo and K. Järvelin. The SST method: a tool for
analyzing web information search processes. In Information
Processing & Management, 2004
[20] P. Pirolli, The Use of Proximal Information Scent to Forage
for Distal Content on the World Wide Web. In Working with
Technology in Mind: Brunswikian. Resources for Cognitive
Science and Engineering, Oxford University Press, 2004
[21] F. Radlinski and T. Joachims, Query Chains: Learning to
Rank from Implicit Feedback. In Proceedings of the ACM
Conference on Knowledge Discovery and Data Mining
(SIGKDD), 2005.
[22] F. Radlinski and T. Joachims, Evaluating the Robustness of
Learning from Implicit Feedback, in Proceedings of the ICML
Workshop on Learning in Web Search, 2005
[23] S. E. Robertson, H. Zaragoza, and M. Taylor, Simple BM25
extension to multiple weighted fields, in Proceedings of the
Conference on Information and Knowledge Management
(CIKM), 2004
[24] G. Salton & M. McGill. Introduction to modern information
retrieval. McGraw-Hill, 1983
[25] E.M. Voorhees, D. Harman, Overview of TREC, 2001
[26] G.R. Xue, H.J. Zeng, Z. Chen, Y. Yu, W.Y. Ma, W.S. Xi,
and W.G. Fan, Optimizing web search using web
clickthrough data, in Proceedings of the Conference on
Information and Knowledge Management (CIKM), 2004
[27] H. Zaragoza, N. Craswell, M. Taylor, S. Saria, and S.
Robertson. Microsoft Cambridge at TREC 13: Web and Hard
Tracks. In Proceedings of TREC 2004 | feedback;implicit relevance feedback;document;web search rank;web search;ranking;score;user interaction;information;information retrieval;user behavior;relevance feedback;result |
train_H-95 | Handling Locations in Search Engine Queries | This paper proposes simple techniques for handling place references in search engine queries, an important aspect of geographical information retrieval. We address not only the detection, but also the disambiguation of place references, by matching them explicitly with concepts at an ontology. Moreover, when a query does not reference any locations, we propose to use information from documents matching the query, exploiting geographic scopes previously assigned to these documents. Evaluation experiments, using topics from CLEF campaigns and logs from real search engine queries, show the effectiveness of the proposed approaches. | 1. INTRODUCTION
Search engine queries are often associated with geographical
locations, either explicitly (i.e. a location reference is given as part of
the query) or implicitly (i.e. the location reference is not present in
the query string, but the query clearly has a local intent [17]). One
of the concerns of geographical information retrieval (GIR) lies in
appropriately handling such queries, bringing better targeted search
results and improving user satisfaction.
Nowadays, GIR is getting increasing attention. Systems that
access resources on the basis of geographic context are starting to
appear, both in the academic and commercial domains [4, 7].
Accurately and effectively detecting location references in search
engine queries is a crucial aspect of these systems, as they are
generally based on interpreting geographical terms differently from the
others. Detecting locations in queries is also important for
general-purpose search engines, as this information can be used to improve
ranking algorithms. Queries with a local intent are best answered
with localized pages, while queries without any geographical
references are best answered with broad pages [5].
Text mining methods have been successfully used in GIR to
detect and disambiguate geographical references in text [9], or even to
infer geographic scopes for documents [1, 13]. However, this body
of research has been focused on processing Web pages and full-text
documents. Search engine queries are more difficult to handle, in
the sense that they are very short and with implicit and subjective
user intents. Moreover, the data is also noisier and more versatile
in form, and we have to deal with misspellings, multilingualism
and acronyms. How to automatically understand what the user
intended, given a search query, without putting the burden on the user
himself, remains an open text mining problem.
Key challenges in handling locations over search engine queries
include their detection and disambiguation, the ranking of possible
candidates, the detection of false positives (i.e. not all location names
contained in a query refer to geographical locations), and the detection of
implied locations by the context of the query (i.e. when the query
does not explicitly contain a place reference but it is nonetheless
geographical). Simple named entity recognition (NER) algorithms,
based on dictionary look-ups for geographical names, may introduce
a high rate of false positives for queries whose location names do not
constitute place references. For example, the query Denzel
Washington contains the place name Washington, but the query is not
geographical. Queries can also be geographic without containing
any explicit reference to locations at the dictionary. In these cases,
place name extraction and disambiguation does not give any results,
and we need to access other sources of information.
This paper proposes simple and yet effective techniques for
handling place references over queries. Each query is split into a triple
< what,relation,where >, where what specifies the non-geographic
aspect of the information need, where specifies the geographic
areas of interest, and relation specifies a spatial relationship
connecting what and where. When this is not possible, i.e. the query does
not contain any place references, we try using information from
documents matching the query, exploiting geographic scopes
previously assigned to these documents.
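As a rough, self-contained illustration of this decomposition (not the actual procedure developed later in the paper), the sketch below splits a query into a <what, relation, where> triple using a handful of spatial-relation keywords and a two-entry toy gazetteer; the relation list, gazetteer entries, and identifiers are all invented for the example.

```python
SPATIAL_RELATIONS = {"near", "in", "at", "around"}     # illustrative subset

GAZETTEER = {                                          # toy gazetteer standing in for the ontology
    "lisbon": "geo:Lisbon,Portugal",
    "washington": "geo:Washington,USA",
}


def split_query(query):
    """Split a query into a (what, relation, where) triple when possible."""
    tokens = query.lower().split()
    for i, token in enumerate(tokens):
        if token in SPATIAL_RELATIONS and i + 1 < len(tokens):
            where = " ".join(tokens[i + 1:])
            if where in GAZETTEER:
                return " ".join(tokens[:i]), token, GAZETTEER[where]
    # No explicit relation: try a trailing place name, e.g. "hotels lisbon".
    if tokens and tokens[-1] in GAZETTEER:
        return " ".join(tokens[:-1]), None, GAZETTEER[tokens[-1]]
    return query, None, None


print(split_query("cheap hotels near Lisbon"))
# Naive dictionary lookup wrongly grounds the next query, which is exactly the
# false-positive problem discussed above.
print(split_query("Denzel Washington"))
```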
Disambiguating place references is one of the most important
aspects. We use a search procedure that combines textual patterns
with geographical names defined at an ontology, and we use
heuristics to disambiguate the discovered references (e.g. more important
places are preferred). Disambiguation results in having the where
term, from the triple above, associated with the most likely
corresponding concepts from the ontology. When we cannot detect
any locations, we attempt to use geographical scopes previously
inferred for the documents at the top search results. By doing this,
we assume that the most frequent geographical scope in the results
should correspond to the geographical context implicit in the query.
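A minimal sketch of this fallback as a simple vote over the scopes previously assigned to the top-ranked documents; the cut-off of ten results and the 0.4 agreement threshold are illustrative assumptions, not values from the paper.

```python
from collections import Counter


def implicit_query_scope(result_scopes, top_k=10, min_share=0.4):
    """Guess the geographic scope implicit in a query from its result set.

    result_scopes: geographic scopes previously assigned to the ranked results
    (None for documents without a scope).  Returns the most frequent scope among
    the top_k results if it accounts for at least min_share of the scoped
    results, otherwise None.
    """
    scopes = [s for s in result_scopes[:top_k] if s is not None]
    if not scopes:
        return None
    scope, count = Counter(scopes).most_common(1)[0]
    return scope if count / len(scopes) >= min_share else None


print(implicit_query_scope(["geo:Lisbon", "geo:Lisbon", None, "geo:Porto", "geo:Lisbon"]))
```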
Experiments with CLEF topics [4] and sample queries from a
Web search engine show that the proposed methods are accurate,
and may have applications in improving search results.
The rest of this paper is organized as follows. We first formalize
the problem and describe related work to our research. Next, we
describe our approach for handling place names in queries, starting
with the general approach for disambiguating place references over
textual strings, then presenting the method for splitting a query into
a < what,relation,where > triple, and finally discussing the
technique for exploiting geographic scopes previously assigned to
documents in the result set. Section 4 presents evaluation results.
Finally, we give some conclusions and directions for future research.
2. CONCEPTS AND RELATED WORK
Search engine performance depends on the ability to capture the
most likely meaning of a query as intended by the user [16].
Previous studies showed that a significant portion of the queries
submitted to search engines are geographic [8, 14]. A recent enhancement
to search engine technology is the addition of geographic
reasoning, combining geographic information systems and information
retrieval in order to build search engines that find information
associated with given locations. The ability to recognize and reason
about the geographical terminology, given in the text documents
and user queries, is a crucial aspect of these geographical
information retrieval (GIR) systems [4, 7].
Extracting and distinguishing different types of entities in text is
usually referred to as Named Entity Recognition (NER). For at least
a decade, this has been an important text mining task, and a key
feature of the Message Understanding Conferences (MUC) [3]. NER
has been successfully automated with near-human performance,
but the specific problem of recognizing geographical references
presents additional challenges [9]. When handling named
entities with a high level of detail, ambiguity problems arise more
frequently. Ambiguity in geographical references is bi-directional [15].
The same name can be used for more than one location (referent
ambiguity), and the same location can have more than one name
(reference ambiguity). The former has another twist, since the same
name can be used for locations as well as for other classes of
entities, such as persons or company names (referent class ambiguity).
Besides the recognition of geographical expressions, GIR also
requires that the recognized expressions be classified and grounded
to unique identifiers [11]. Grounding the recognized expressions
(e.g. associating them to coordinates or concepts at an ontology)
assures that they can be used in more advanced GIR tasks.
Previous works have addressed the tagging and grounding of
locations in Web pages, as well as the assignment of geographic
scopes to these documents [1, 7, 13]. This is a complementary
aspect to the techniques described in this paper, since if we have the
Web pages tagged with location information, a search engine can
conveniently return pages with a geographical scope related to the
scope of the query. The task of handling geographical references
over documents is however considerably different from that of
handling geographical references over queries. In our case, queries are
usually short and often do not constitute proper sentences. Text
mining techniques that make use of context information are
difficult to apply with high accuracy.
Previous studies have also addressed the use of text mining and
automated classification techniques over search engine queries [16,
10]. However, most of these works did not consider place
references or geographical categories. Again, these previously proposed
methods are difficult to apply to the geographic domain.
Gravano et al. studied the classification of Web queries into two
types, namely local and global [5]. They defined a query as local if
its best matches on a Web search engine are likely to be local pages,
such as houses for sale. A number of classification algorithms
have been evaluated using search engine queries. However, their
experimental results showed that only a rather low precision and
recall could be achieved. The problem addressed in this paper is
also slightly different, since we are trying not only to detect local
queries but also to disambiguate the location of interest.
Wang et al. proposed to go further than detecting local queries,
by also disambiguating the implicit location of interest [17]. The
proposed approach works for both queries containing place references
and queries not containing them, by looking for dominant
geographic references over query logs and text from search results.
In comparison, we propose simpler techniques based on matching
names from a geographic ontology. Our approach looks for spatial
relationships at the query string, and it also associates the place
references to ontology concepts. In the case of queries not containing
explicit place references, we use geographical scopes previously
assigned to the documents, whereas Wang et al. proposed to
extract locations from the text of the top search results.
There are nowadays many geocoding, reverse-geocoding, and
mapping services on the Web that can be easily integrated with
other applications. Geocoding is the process of locating points on
the surface of the Earth from alphanumeric addressing data. Taking
a string with an address, a geocoder queries a geographical
information system and returns interpolated coordinate values for the
given location. Instead of computing coordinates for a given place
reference, the technique described in this paper aims at assigning
references to the corresponding ontology concepts. However, if
each concept at the ontology contains associated coordinate
information, the approach described here could also be used to build a
geocoding service. Most of such existing services are commercial
in nature, and there are no technical publications describing them.
A number of commercial search services have also started to
support location-based searches. Google Local, for instance, initially
required the user to specify a location qualifier separately from the
search query. More recently, it added location look-up
capabilities that extract locations from query strings. For example, in a
search for Pizza Seattle, Google Local returns local results for
pizza near Seattle, WA. However, the internals of their solution
are not published, and their approach also does not handle
location-implicit queries. Moreover, Google Local does not take spatial
relations into account.
In sum, there are already some studies on tagging geographical
references, but Web queries pose additional challenges which have
not been addressed. In this paper, we explain the proposed
solutions for the identified problems.
3. HANDLING QUERIES IN GIR SYSTEMS
Most GIR queries can be parsed into a < what,relation,where >
triple, where the what term is used to specify the general
non-geographical aspect of the information need, the where term is used
to specify the geographical areas of interest, and the relation term
is used to specify a spatial relationship connecting what and where.
While the what term can assume any form, in order to reflect any
information need, the relation and where terms should be part of a
controlled vocabulary. In particular, the relation term should refer
to a well-known geographical relation that the underlying GIR
system can interpret (e.g. near or contained at), and the where
term should be disambiguated into a set of unique identifiers,
corresponding to concepts at the ontology.
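For illustration, such a triple can be represented by a small data structure; the following is a minimal Python sketch, where the class name, relation labels and example identifier are our own illustrative choices rather than part of any specific GIR system.

from dataclasses import dataclass, field
from typing import List

# Controlled vocabulary of spatial relations the GIR system can interpret;
# an illustrative sample rather than an exhaustive list.
RELATIONS = {"CONTAINED-AT", "NEAR", "DEFINITION"}

@dataclass
class GeoQuery:
    what: str                                         # free-form, non-geographical need
    relation: str                                     # one of RELATIONS, or "" if unknown
    where: List[str] = field(default_factory=list)    # ontology concept identifiers

# Example: a query such as "pizza near Seattle" could become
example = GeoQuery(what="pizza", relation="NEAR",
                   where=["<concept-id-for-Seattle>"])   # placeholder identifier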
Different systems can use alternative schemes to take input queries
from the users. Three general strategies can be identified, and GIR
systems often support more than one of the following schemes:
Figure 1: Strategies for processing queries in Geographical Information Retrieval systems.
1. Input to the system is a textual query string. This is the
hardest case, since we need to separate the query into the three
different components, and then we need to disambiguate the
where term into a set of unique identifiers.
2. Input to the system is provided in two separate strings, one
concerning the what term, and the other concerning the where.
The relation term can be either fixed (e.g. always assume the
near relation), specified together with the where string,
or provided separately from the users from a set of
possible choices. Although there is no need for separating query
string into the different components, we still need to
disambiguate the where term into a set of unique identifiers.
3. Input to the system is provided through a query string
together with an unambiguous description of the geographical
area of interest (e.g. a sketch in a map, spatial coordinates
or a selection from a set of possible choices). No
disambiguation is required, and therefore the techniques described
in this paper do not have to be applied.
The first two schemes depend on place name disambiguation.
Figure 1 illustrates how we propose to handle geographic queries
in these first two schemes. A common component is the algorithm
for disambiguating place references into corresponding ontology
concepts, which is described next.
3.1 From Place Names to Ontology Concepts
A required task in handling GIR queries consists of associating
a string containing a geographical reference with the set of
corresponding concepts at the geographic ontology. We propose to do
this according to the pseudo-code listed at Algorithm 1.
The algorithm considers the cases where a second (or even more
than one) location is given to qualify a first (e.g. Paris, France).
It makes recursive calls to match each location, and relies on
hierarchical part-of relations to detect if two locations share a common
hierarchy path. One of the provided locations should be more
general and the other more specific, in the sense that there must exist
a part-of relationship among the associated concepts at the
ontology (either direct or transitive). The most specific location is a
sub-region of the most general, and the algorithm returns the most
specific one (i.e. for Paris, France the algorithm returns the
ontology concept associated with Paris, the capital city of France).
We also consider the cases where a geographical type expression
is used to qualify a given name (e.g. city of Lisbon or state
of New York). For instance the name Lisbon can correspond
to many different concepts at a geographical ontology, and type
Algorithm 1 Matching a place name with ontology concepts
Require: O = A geographic ontology
Require: GN = A string with the geographic name to be matched
1: L = An empty list
2: INDEX = The position in GN for the first occurrence of a comma,
semi-colon or bracket character
3: if INDEX is defined then
4: GN1 = The substring of GN from position 0 to INDEX
5: GN2 = The substring of GN from INDEX +1 to length(GN)
6: L1 = Algorithm1(O,GN1)
7: L2 = Algorithm1(O,GN2)
8: for each C1 in L1 do
9: for each C2 in L2 do
10: if C1 is an ancestor of C2 at O then
11: L = The list L after adding element C2
12: else if C1 is a descendant of C2 at O then
13: L = The list L after adding element C1
14: end if
15: end for
16: end for
17: else
18: GN = The string GN after removing case and diacritics
19: if GN contains a geographic type qualifier then
20: T = The substring of GN containing the type qualifier
21: GN = The substring of GN with the type qualifier removed
22: L = The list of concepts from O with name GN and type T
23: else
24: L = The list of concepts from O with name GN
25: end if
26: end if
27: return The list L
qualifiers can provide useful information for disambiguation. The
considered type qualifiers should also be described in the ontologies
(e.g. each geographic concept should be associated with a type that is
also defined in the ontology, such as country, district or city).
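To make the procedure concrete, the sketch below gives one possible Python rendering of Algorithm 1; the ontology access methods (concepts_with_name, concepts_with_name_and_type, is_ancestor) and the list of type qualifiers are assumptions made for illustration and do not correspond to a particular implementation.

import re
import unicodedata

def normalize(name):
    # Remove case and diacritics, as in the pseudo-code of Algorithm 1.
    name = unicodedata.normalize("NFKD", name)
    return "".join(c for c in name if not unicodedata.combining(c)).lower().strip()

def match_place(ontology, name):
    """Return the candidate ontology concepts for a geographic name."""
    split = re.search(r"[,;(]", name)          # qualified reference, e.g. "Paris, France"
    if split:
        left = match_place(ontology, name[:split.start()])
        right = match_place(ontology, name[split.end():])
        matches = []
        for c1 in left:
            for c2 in right:
                if ontology.is_ancestor(c1, c2):      # keep the most specific concept
                    matches.append(c2)
                elif ontology.is_ancestor(c2, c1):
                    matches.append(c1)
        return matches
    name = normalize(name)
    typed = re.match(r"(city|district|state|country) of (.+)", name)   # type qualifier
    if typed:
        return ontology.concepts_with_name_and_type(typed.group(2), typed.group(1))
    return ontology.concepts_with_name(name)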
Ideally, the geographical reference provided by the user should
be disambiguated into a single ontology concept. However, this is
not always possible, since the user may not provide all the required
information (i.e. a type expression or a second qualifying location).
The output is therefore a list with the possible concepts being
referred to by the user. In a final step, we propose to sort this list,
so that if a single concept is required as output, we can use the one
that is ranked highest. The sorting procedure reflects the likelihood
of each concept being the one actually referred to. We propose to
rank concepts according to the following heuristics (a scoring sketch
is given after the list):
1. The geographical type expression associated with the
ontology concept. For the same name, a country is more likely to
be referenced than a city, and in turn a city more likely to be
referenced than a street.
2. Number of ancestors at the ontology. Places near the top of the
ontology (i.e. with fewer ancestors) tend to be more general, and are
therefore more likely to be referenced in search engine queries.
3. Population count. Highly populated places are better known,
and therefore more likely to be referenced in queries.
4. Population counts from direct ancestors at the ontology.
Subregions of highly populated places are better known, and also
more likely to be referenced in search engine queries.
5. Occurrence frequency over Web documents (e.g. Google
counts) for the geographical names. Places names that occur
more frequently over Web documents are also more likely to
be referenced in search engine queries.
6. Number of descendants at the ontology. Places that have
more sub-regions tend to be more general, and are therefore
more likely to be mentioned in search engine queries.
7. String size for the geographical names. Short names are more
likely to be mentioned in search engine queries.
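The sketch below shows one possible way of turning these heuristics into a single ranking score; the weights and attribute names are purely illustrative, since the paper does not commit to a particular combination scheme.

# Assumed concept attributes (illustrative): type_rank (e.g. country=3, city=2,
# street=1), num_ancestors, population, parent_population, web_frequency,
# num_descendants and name.
def concept_score(c):
    return (3.0 * c.type_rank                          # heuristic 1
            - 1.0 * c.num_ancestors                    # heuristic 2
            + 1.0 * (c.population or 0) ** 0.5         # heuristic 3
            + 0.5 * (c.parent_population or 0) ** 0.5  # heuristic 4
            + 0.5 * (c.web_frequency or 0) ** 0.5      # heuristic 5
            + 0.5 * c.num_descendants                  # heuristic 6
            - 0.1 * len(c.name))                       # heuristic 7

def rank_candidates(candidates):
    # Sort the output of Algorithm 1 so the most likely referent comes first.
    return sorted(candidates, key=concept_score, reverse=True)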
Algorithm 1, plus the ranking procedure, can already handle GIR
queries where the where term is given separately from the what and
relation terms. However, if the query is given in a single string, we
require the identification of the associated < what,relation,where >
triple, before disambiguating the where term into the corresponding
ontology concepts. This is described in the following Section.
3.2 Handling Single Query Strings
Algorithm 2 provides the mechanism for separating a query string
into a < what,relation,where > triple. It uses Algorithm 1 to find
the where term, disambiguating it into a set of ontology concepts.
The algorithm starts by tokenizing the query string into
individual words, also taking care of removing case and diacritics. We
have a simple tokenizer that uses the space character as a word
delimiter, but we could also have a tokenization approach similar to
the proposal of Wang et al., which relies on Web occurrence
statistics to avoid breaking collocations [17]. In the future, we plan on
testing if this different tokenization scheme can improve results.
Next, the algorithm tests different possible splits of the query,
building the what, relation and where terms through
concatenations of the individual tokens. The relation term is matched against
a list of possible values (e.g. near, at, around, or south
of), corresponding to the operators that are supported by the GIR
system. Note that it is also the responsibility of the underlying GIR
system to interpret the actual meaning of the different spatial
relations. Algorithm 1 is used to check whether a where term
constitutes a geographical reference or not. We also check if the last
word in the what term belongs to a list of exceptions, containing for
instance first names of people in different languages. This ensures
that a query like Denzel Washington is appropriately handled.
If the algorithm succeeds in finding valid relation and where
terms, then the corresponding triple is returned. Otherwise, we
return a triple with the what term equaling the query string, and the
relation and where terms set as empty. If the entire query string
constitutes a geographical reference, we return a triple with the
what term set to empty, the where term equaling the query string,
and the relation term set to DEFINITION (i.e. these queries
should be answered with information about the given place
references). The algorithm also handles query strings where more
than one geographical reference is provided, using and or an
equivalent conjunction, together with a recursive call to Algorithm
2. A query like Diamond trade in Angola and South Africa is
Algorithm 2 Get < what,relation,where > from a query string
Require: O = A geographical ontology
Require: Q = A non-empty string with the query
1: Q = The string Q after removing case and diacritics
2: TOKENS[0..N] = An array of strings with the individual words of Q
3: N = The size of the TOKENS array
4: for INDEX = 0 to N do
5: if INDEX > 0 then
6: WHAT = Concatenation of TOKENS[0..INDEX −1]
7: LASTWHAT = TOKENS[INDEX −1]
8: else
9: WHAT = An empty string
10: LASTWHAT = An empty string
11: end if
12: WHERE = Concatenation of TOKENS[INDEX..N]
13: RELATION = An empty string
14: for INDEX2 = INDEX to N −1 do
15: RELATION2 = Concatenation of TOKENS[INDEX..INDEX2]
16: if RELATION2 is a valid geographical relation then
17: WHERE = Concatenation of TOKENS[INDEX2 +1..N]
18: RELATION = RELATION2;
19: end if
20: end for
21: if RELATION = empty AND LASTWHAT in an exception then
22: TESTGEO = FALSE
23: else
24: TESTGEO = TRUE
25: end if
26: if TESTGEO AND Algorithm1(WHERE) <> EMPTY then
27: if WHERE ends with AND SURROUNDINGS then
28: RELATION = The string NEAR;
29: WHERE = The substring of WHERE with AND
SURROUNDINGS removed
30: end if
31: if WHAT ends with AND (or similar) then
32: < WHAT,RELATION,WHERE2 > = Algorithm2(WHAT)
33: WHERE = Concatenation of WHERE with WHERE2
34: end if
35: if RELATION = An empty string then
36: if WHAT = An empty string then
37: RELATION = The string DEFINITION
38: else
39: RELATION = The string CONTAINED-AT
40: end if
41: end if
42: else
43: WHAT = The string Q
44: WHERE = An empty string
45: RELATION = An empty string
46: end if
47: end for
48: return < WHAT,RELATION,WHERE >
therefore appropriately handled. Finally, if the geographical
reference in the query is complemented with an expression similar to
and its surroundings, the spatial relation (which is assumed to be
CONTAINED-AT if none is provided) is changed to NEAR.
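For illustration, the split-and-test loop at the heart of Algorithm 2 can be sketched in Python as follows; the relation lexicon and exception list are small illustrative samples, case folding stands in for the full normalisation, and the sketch returns the first valid split rather than reproducing every detail of the pseudo-code.

RELATION_LEXICON = {"in": "CONTAINED-AT", "near": "NEAR",
                    "around": "NEAR", "south of": "SOUTH-OF"}
FIRST_NAME_EXCEPTIONS = {"denzel", "george", "maria"}      # sample first names

def split_query(query, is_place):
    """is_place(text) -> bool, e.g. a wrapper around the match_place sketch above."""
    tokens = query.lower().split()          # diacritics removal omitted for brevity
    for i in range(len(tokens) + 1):
        what = " ".join(tokens[:i])
        last_what = tokens[i - 1] if i > 0 else ""
        relation, where = "", " ".join(tokens[i:])
        for j in range(i, len(tokens)):     # try to read a spatial relation here
            candidate = " ".join(tokens[i:j + 1])
            if candidate in RELATION_LEXICON:
                relation, where = RELATION_LEXICON[candidate], " ".join(tokens[j + 1:])
        if not relation and last_what in FIRST_NAME_EXCEPTIONS:
            continue                        # e.g. "Denzel Washington" stays non-geographic
        if where and is_place(where):
            if not relation:
                relation = "DEFINITION" if not what else "CONTAINED-AT"
            return what, relation, where
    return query, "", ""                    # no place reference detected

# Example with a toy place test:
# split_query("pizza near seattle", is_place=lambda s: s == "seattle")
# -> ('pizza', 'NEAR', 'seattle')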
3.3 From Search Results to Query Locality
The procedures given so far are appropriate for handling queries
where a place reference is explicitly mentioned. However, the fact
that a query can be associated with a geographical context may
not be directly observable in the query itself, but rather from the
results returned. For instance, queries like recommended hotels
for SIGIR 2006 or SeaFair 2006 lodging can be seen to refer to
the city of Seattle. Although they do not contain an explicit place
reference, we expect results to be about hotels in Seattle.
In the cases where a query does not contain place references, we
start by assuming that the top results from a search engine represent
the most popular and correct context and usage for the query. We
Topic | What | Relation | Where | TGN concepts | ML concepts
Vegetable Exporters of Europe | Vegetable Exporters | CONTAINED-AT | Europe | 1 | 1
Trade Unions in Europe | Trade Unions | CONTAINED-AT | Europe | 1 | 1
Roman cities in the UK and Germany | Roman cities | CONTAINED-AT | UK and Germany | 6 | 2
Cathedrals in Europe | Cathedrals | CONTAINED-AT | Europe | 1 | 1
Car bombings near Madrid | Car bombings | NEAR | Madrid | 14 | 2
Volcanos around Quito | Volcanos | NEAR | Quito | 4 | 1
Cities within 100km of Frankfurt | Cities | NEAR | Frankfurt | 3 | 1
Russian troops in south(ern) Caucasus | Russian troops in south(ern) | CONTAINED-AT | Caucasus | 2 | 1
Cities near active volcanoes | (This topic could not be appropriately handled - the relation and where terms are returned empty)
Japanese rice imports | (This topic could not be appropriately handled - the relation and where terms are returned empty)
Table 1: Example topics from the GeoCLEF evaluation campaigns and the corresponding < what,relation,where > triples.
then propose to use the distributional characteristics of
geographical scopes previously assigned to the documents corresponding to
these top results. In a previous work, we presented a text mining
approach for assigning documents with corresponding
geographical scopes, defined at an ontology, that worked as an offline
preprocessing stage in a GIR system [13]. This pre-processing step is a
fundamental stage of GIR, and it is reasonable to assume that this
kind of information would be available on any system. Similarly to
Wang et al., we could also attempt to process the results on-line,
in order to detect place references in the documents [17]. However,
a GIR system already requires the offline stage.
For the top N documents given at the results, we check the
geographic scopes that were assigned to them. If a significant portion
of the results are assigned to the same scope, then the query can be
seen to be related to the corresponding geographic concept. This
assumption could even be relaxed, for instance by checking if the
documents belong to scopes that are hierarchically related.
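A minimal sketch of this idea is given below; it assumes each top-ranked document already carries a scope label from the offline scope-assignment stage, and the majority threshold is an illustrative choice.

from collections import Counter

def implicit_query_scope(result_scopes, n=20, threshold=0.5):
    """result_scopes: scope identifiers for the ranked result documents
    (None where no scope was assigned). Returns the dominant scope if a
    sufficiently large share of the top n results agree, otherwise None."""
    top = result_scopes[:n]
    assigned = [s for s in top if s is not None]
    if not assigned:
        return None
    scope, count = Counter(assigned).most_common(1)[0]
    return scope if count / len(top) >= threshold else None

# E.g. if more than 75% of the top 20 results share the scope "Porto",
# the call returns "Porto" for a query such as "Estadio do Dragao".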
4. EVALUATION EXPERIMENTS
We used three different ontologies in evaluation experiments,
namely the Getty thesaurus of geographic names (TGN) [6] and
two specific resources developed at our group, here referred to as
the PT and ML ontologies [2]. TGN and ML include global
geographical information in multiple languages (although TGN is
considerably larger), while the PT ontology focuses on the Portuguese
territory in high detail. Place types are also different across
ontologies, as for instance PT includes street names and postal
addresses, whereas ML only goes to the level of cities. The reader
should refer to [2, 6] for a complete description of these resources.
Our initial experiments used Portuguese and English topics from
the GeoCLEF 2005 and 2006 evaluation campaigns. Topics in
GeoCLEF correspond to query strings that can be used as input to a GIR
system [4]. ImageCLEF 2006 also included topics specifying place
references, and participants were encouraged to run their GIR
systems on them. Our experiments also considered this dataset. For
each topic, we measured if Algorithm 2 was able to find the
corresponding < what,relation,where > triple. The ontologies used
in this experiment were the TGN and ML, as topics were given in
multiple languages and covered the whole globe.
Dataset | Number of Queries | Correct Triples (ML) | Correct Triples (TGN)
GeoCLEF05 EN | 25 | 19 | 20
GeoCLEF05 PT | 25 | 20 | 18
GeoCLEF06 EN | 32 | 28 | 19
GeoCLEF06 PT | 25 | 23 | 11
ImgCLEF06 EN | 24 | 16 | 18
Average time per query: 288.1 msec (ML), 334.5 msec (TGN)
Table 2: Summary of results over CLEF topics.
Table 1 illustrates some of the topics, and Table 2 summarizes
the obtained results. The tables show that the proposed technique
adequately handles most of these queries. A manual inspection of
the ontology concepts that were returned for each case also revealed
that the where term was being correctly disambiguated. Note that
the TGN ontology indeed added some ambiguity, as for instance
names like Madrid can correspond to many different places around
the globe. It should also be noted that some of the considered
topics are very hard for an automated system to handle. Some of them
were ambiguous (e.g. in Japanese rice imports, the query can
be said to refer either to rice imports in Japan or to imports of Japanese
rice), and others contained no direct geographical references (e.g.
cities near active volcanoes). Besides these very hard cases, we
also missed some topics due to their usage of place adjectives and
specific regions that are not defined at the ontologies (e.g.
environmental concerns around the Scottish Trossachs).
In a second experiment, we used a sample of around 100,000
real search engine queries. The objective was to see if a
significant number of these queries were geographical in nature, also
checking if the algorithm did not produce many mistakes by
classifying a query as geographical when that was not the case. The
Portuguese ontology was used in this experiment, and queries were
taken from the logs of a Portuguese Web search engine available
at www.tumba.pt. Table 3 summarizes the obtained results. Many
queries were indeed geographical (around 3.4%, although
previous studies reported values above 14% [8]). A manual inspection
showed that the algorithm did not produce many false positives,
and the geographical queries were indeed correctly split into
< what,relation,where > triples. The few mistakes we encountered
were related to place names that were more frequently used in other
contexts (e.g. in Teófilo Braga we have the problem that Braga
is a Portuguese district, and Teófilo Braga was a well known
Portuguese writer and politician). The addition of more names to the
exception list can provide a workaround for most of these cases.
Value
Num. Queries 110,916
Num. Queries without Geographical References 107,159 (96.6%)
Num. Queries with Geographical References 3,757 (3.4%)
Table 3: Results from an experiment with search engine logs.
We also tested the procedure for detecting queries that are
implicitly geographical with a small sample of queries from the logs.
For instance, for the query Estádio do Dragão (e.g. home stadium
for a soccer team from Porto), the correct geographical context can
be discovered from the analysis of the results (more than 75% of
the top 20 results are assigned with the scope Porto). For future
work, we plan on using a larger collection of queries to evaluate
this aspect. Besides queries from the search engine logs, we also
plan on using the names of well-known buildings, monuments and
other landmarks, as they have a strong geographical connotation.
Finally, we also made a comparative experiment with 2 popular
geocoders, Maporama and Microsoft's Mappoint. The objective
was to compare Algorithm 1 with other approaches, in terms of
being able to correctly disambiguate a string with a place reference.
Civil Parishes from Lisbon | Maporama | Mappoint | Ours
Coded refs. (out of 53) | 9 (16.9%) | 30 (56.6%) | 15 (28.3%)
Avg. Time per ref. (msec) | 506.23 | 1235.87 | 143.43
Civil Parishes from Porto | Maporama | Mappoint | Ours
Coded refs. (out of 15) | 0 (0%) | 2 (13.3%) | 5 (33.3%)
Avg. Time per ref. (msec) | 514.45 | 991.88 | 132.14
Table 4: Results from a comparison with geocoding services.
The Portuguese ontology was used in this experiment, taking as
input the names of civil parishes from the Portuguese municipalities
of Lisbon and Porto, and checking if the systems were able to
disambiguate the full name (e.g. Campo Grande, Lisboa or Foz
do Douro, Porto) into the correct geocode. We specifically
measured whether our approach was better at unambiguously returning
geocodes given the place reference (i.e. return the single correct
code), and providing results rapidly. Table 4 shows the obtained
results, and the accuracy of our method seems comparable to the
commercial geocoders. Note that for Maporama and Mappoint, the
times given at Table 4 include fetching results from the Web, but we
have no direct way of accessing the geocoding algorithms (in both
cases, fetching static content from the Web servers takes around
125 milliseconds). Although our approach cannot unambiguously
return the correct geocode in most cases (only 20 out of a total of
68 cases), it nonetheless returns results that a human user can
disambiguate (e.g. for Madalena, Lisboa we return both a street and
a civil parish), as opposed to the other systems that often did not
produce results. Moreover, if we consider the top geocode
according to the ranking procedure described in Section 3.1, or if we use
a type qualifier in the name (e.g. civil parish of Campo Grande,
Lisboa), our algorithm always returns the correct geocode.
5. CONCLUSIONS
This paper presented simple approaches for handling place
references in search engine queries. This is a hard text mining problem,
as queries are often ambiguous or underspecify information needs.
However, our initial experiments indicate that for many queries, the
referenced places can be determined effectively. Unlike the
techniques proposed by Wang et al. [17], we mainly focused on
recognizing spatial relations and associating place names to ontology
concepts. The proposed techniques were employed in the prototype
system that we used for participating in GeoCLEF 2006. In queries
where a geographical reference is not explicitly mentioned, we
propose to use the results for the query, exploiting geographic scopes
previously assigned to these documents. In the future, we plan on
doing a careful evaluation of this last approach. Another idea that
we would like to test involves the integration of a spelling
correction mechanism [12] into Algorithm 1, so that incorrectly spelled
place references can be matched to ontology concepts.
The proposed techniques for handling geographic queries can
have many applications in improving GIR systems or even general
purpose search engines. After place references are appropriately
disambiguated into ontology concepts, a GIR system can use them
to retrieve relevant results, through the use of appropriate index
structures (e.g. indexing the spatial coordinates associated with
ontology concepts) and provided that the documents are also assigned
to scopes corresponding to ontology concepts. A different GIR
strategy can involve query expansion, by taking the where terms
from the query and using the ontology to add names from
neighboring locations. In a general purpose search engine, and if a local
query is detected, we can forward users to a GIR system, which
should be better suited for properly handling the query. The regular
Google search interface already does this, by presenting a link to
Google Local when it detects a geographical query.
6. REFERENCES
[1] E. Amitay, N. Har'El, R. Sivan, and A. Soffer. Web-a-Where:
Geotagging Web content. In Proceedings of SIGIR-04, the
27th Conference on research and development in information
retrieval, 2004.
[2] M. Chaves, M. J. Silva, and B. Martins. A Geographic
Knowledge Base for Semantic Web Applications. In
Proceedings of SBBD-05, the 20th Brazilian Symposium on
Databases, 2005.
[3] N. A. Chinchor. Overview of MUC-7/MET-2. In
Proceedings of MUC-7, the 7th Message Understanding
Conference, 1998.
[4] F. Gey, R. Larson, M. Sanderson, H. Joho, and P. Clough.
GeoCLEF: the CLEF 2005 cross-language geographic
information retrieval track. In Working Notes for the CLEF
2005 Workshop, 2005.
[5] L. Gravano, V. Hatzivassiloglou, and R. Lichtenstein.
Categorizing Web queries according to geographical locality.
In Proceedings of CIKM-03, the 12th Conference on
Information and knowledge management, 2003.
[6] P. Harpring. Proper words in proper places: The thesaurus of
geographic names. MDA Information, 3, 1997.
[7] C. Jones, R. Purves, A. Ruas, M. Sanderson, M. Sester,
M. van Kreveld, and R. Weibel. Spatial information retrieval
and geographical ontologies: An overview of the SPIRIT
project. In Proceedings of SIGIR-02, the 25th Conference on
Research and Development in Information Retrieval, 2002.
[8] J. Kohler. Analyzing search engine queries for the use of
geographic terms, 2003. (MSc Thesis).
[9] A. Kornai and B. Sundheim, editors. Proceedings of the
NAACL-HLT Workshop on the Analysis of Geographic
References, 2003.
[10] Y. Li, Z. Zheng, and H. Dai. KDD CUP-2005 report: Facing
a great challenge. SIGKDD Explorations, 7, 2006.
[11] D. Manov, A. Kiryakov, B. Popov, K. Bontcheva,
D. Maynard, and H. Cunningham. Experiments with
geographic knowledge for information extraction. In
Proceedings of the NAACL-HLT Workshop on the Analysis of
Geographic References, 2003.
[12] B. Martins and M. J. Silva. Spelling correction for search
engine queries. In Proceedings of EsTAL-04, España for
Natural Language Processing, 2004.
[13] B. Martins and M. J. Silva. A graph-ranking algorithm for
geo-referencing documents. In Proceedings of ICDM-05, the
5th IEEE International Conference on Data Mining, 2005.
[14] L. Souza, C. J. Davis, K. Borges, T. Delboni, and
A. Laender. The role of gazetteers in geographic knowledge
discovery on the web. In Proceedings of LA-Web-05, the 3rd
Latin American Web Congress, 2005.
[15] E. F. Tjong Kim Sang and F. De Meulder. Introduction to the
CoNLL-2003 shared task: Language-Independent Named
Entity Recognition. In Proceedings of CoNLL-2003, the 7th
Conference on Natural Language Learning, 2003.
[16] D. Vogel, S. Bickel, P. Haider, R. Schimpfky, P. Siemen,
S. Bridges, and T. Scheffer. Classifying search engine
queries using the Web as background knowledge. SIGKDD
Explorations Newsletter, 7(2):117-122, 2005.
[17] L. Wang, C. Wang, X. Xie, J. Forman, Y. Lu, W.-Y. Ma, and
Y. Li. Detecting dominant locations from search queries. In
Proceedings of SIGIR-05, the 28th Conference on Research
and development in information retrieval, 2005. | tokenization scheme;geographical type expression;geographical information retrieval;search engine query;text mine;disambiguation result;search query;geographic ontology;place reference;spelling correction mechanism;named entity recognition algorithm;web search engine;geographic context;query string;textual string;geographic ir;locationimplicit query;query process;ner |
train_H-96 | A Study of Factors Affecting the Utility of Implicit Relevance Feedback | Implicit relevance feedback (IRF) is the process by which a search system unobtrusively gathers evidence on searcher interests from their interaction with the system. IRF is a new method of gathering information on user interest and, if IRF is to be used in operational IR systems, it is important to establish when it performs well and when it performs poorly. In this paper we investigate how the use and effectiveness of IRF is affected by three factors: search task complexity, the search experience of the user and the stage in the search. Our findings suggest that all three of these factors contribute to the utility of IRF. | 1. INTRODUCTION
Information Retrieval (IR) systems are designed to help searchers
solve problems. In the traditional interaction metaphor employed by
Web search systems such as Yahoo! and MSN Search, the system
generally only supports the retrieval of potentially relevant documents
from the collection. However, it is also possible to offer support to
searchers for different search activities, such as selecting the terms to
present to the system or choosing which search strategy to adopt [3,
8]; both of which can be problematic for searchers.
As the quality of the query submitted to the system directly affects the
quality of search results, the issue of how to improve search queries
has been studied extensively in IR research [6]. Techniques such as
Relevance Feedback (RF) [11] have been proposed as a way in which
the IR system can support the iterative development of a search query
by suggesting alternative terms for query modification. However, in
practice RF techniques have been underutilised as they place an
increased cognitive burden on searchers to directly indicate relevant
results [10].
Implicit Relevance Feedback (IRF) [7] has been proposed as a way in
which search queries can be improved by passively observing
searchers as they interact. IRF has been implemented either through
the use of surrogate measures based on interaction with documents
(such as reading time, scrolling or document retention) [7] or using
interaction with browse-based result interfaces [5]. IRF has been
shown to display mixed effectiveness because the factors that are good
indicators of user interest are often erratic and the inferences drawn
from user interaction are not always valid [7].
In this paper we present a study into the use and effectiveness of IRF
in an online search environment. The study aims to investigate the
factors that affect IRF, in particular three research questions: (i) is the
use of and perceived quality of terms generated by IRF affected by the
search task? (ii) is the use of and perceived quality of terms generated
by IRF affected by the level of search experience of system users? (iii)
is IRF equally used and does it generate terms that are equally useful
at all search stages? This study aims to establish when, and under what
circumstances, IRF performs well in terms of its use and the query
modification terms selected as a result of its use.
The main experiment from which the data are taken was designed to
test techniques for selecting query modification terms and techniques
for displaying retrieval results [13]. In this paper we use data derived
from that experiment to study factors affecting the utility of IRF.
2. STUDY
In this section we describe the user study conducted to address our
research questions.
2.1 Systems
Our study used two systems, both of which suggested new query terms
to the user. One system suggested terms based on the user's
interaction (IRF); the other used Explicit RF (ERF), asking the user to
explicitly indicate relevant material. Both systems used the same term
suggestion algorithm [15] and a common interface.
2.1.1 Interface Overview
In both systems, retrieved documents are represented at the interface
by their full-text and a variety of smaller, query-relevant
representations, created at retrieval time. We used the Web as the test
collection in this study and Google
as the underlying search engine.
Document representations include the document title and a summary
of the document; a list of top-ranking sentences (TRS) extracted from
the top documents retrieved, scored in relation to the query, a sentence
in the document summary, and each summary sentence in the context
it occurs in the document (i.e., with the preceding and following
sentence). Each summary sentence and top-ranking sentence is
regarded as a representation of the document. The default display
contains the list of top-ranking sentences and the list of the first ten
document titles. Interacting with a representation guides searchers to a
different representation from the same document, e.g., moving the
mouse over a document title displays a summary of the document.
This presentation of progressively more information from documents
to aid relevance assessments has been shown to be effective in earlier
work [14, 16]. In Appendix A we show the complete interface to the
IRF system with the document representations marked and in
Appendix B we show a fragment from the ERF interface with the
checkboxes used by searchers to indicate relevant information. Both
systems provide an interactive query expansion feature by suggesting
new query terms to the user. The searcher has the responsibility for
choosing which, if any, of these terms to add to the query. The
searcher can also add or remove terms from the query at will.
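The document representations described above can be thought of as a small per-document collection; the sketch below simply enumerates them (the field names are ours) and makes explicit the count of 14 representations per document that is used later in the analysis.

def document_representations(doc):
    """Enumerate the query-time representations of one retrieved document,
    assuming four top-ranking sentences and four summary sentences."""
    reps = [("title", doc["title"]), ("summary", doc["summary"])]
    reps += [("top_ranking_sentence", s) for s in doc["top_ranking_sentences"][:4]]
    reps += [("summary_sentence", s) for s in doc["summary_sentences"][:4]]
    reps += [("sentence_in_context", s) for s in doc["summary_sentence_contexts"][:4]]
    return reps      # up to 14 representations per document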
2.1.2 Explicit RF system
This version of the system implements explicit RF. Next to each
document representation are checkboxes that allow searchers to mark
individual representations as relevant; marking a representation is an
indication that its contents are relevant. Only the representations
marked relevant by the user are used for suggesting new query terms.
This system was used as a baseline against which the IRF system
could be compared.
2.1.3 Implicit RF system
This system makes inferences about searcher interests based on the
information with which they interact. As described in Section 2.1.1
interacting with a representation highlights a new representation from
the same document. To the searcher this is a way they can find out
more information from a potentially interesting source. To the implicit
RF system each interaction with a representation is interpreted as an
implicit indication of interest in that representation; interacting with a
representation is assumed to be an indication that its contents are
relevant. The query modification terms are selected using the same
algorithm as in the Explicit RF system. Therefore the only difference
between the systems is how relevance is communicated to the system.
The results of the main experiment [13] indicated that these two
systems were comparable in terms of effectiveness.
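To make the distinction concrete, the sketch below shows one way the two feedback modes could be wired to interface events; it illustrates the general idea only, and the class, method and event names are ours rather than those of the actual systems.

class FeedbackStore:
    """Collects the document representations treated as relevance evidence."""
    def __init__(self):
        self.relevant_representations = set()

    # Explicit RF: evidence is added only when the searcher ticks a checkbox.
    def on_checkbox_marked(self, representation_id):
        self.relevant_representations.add(representation_id)

    # Implicit RF: any interaction with a representation (e.g. mousing over a
    # title to reveal the summary) is recorded as an indication of interest.
    def on_representation_viewed(self, representation_id):
        self.relevant_representations.add(representation_id)

# Both stores feed the same term-suggestion algorithm; only the way the
# evidence is gathered differs between the two systems.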
2.2 Tasks
Search tasks were designed to encourage realistic search behaviour by
our subjects. The tasks were phrased in the form of simulated work
task situations [2], i.e., short search scenarios that were designed to
reflect real-life search situations and allow subjects to develop
personal assessments of relevance. We devised six search topics (i.e.,
applying to university, allergies in the workplace, art galleries in
Rome, Third Generation mobile phones, Internet music piracy and
petrol prices) based on pilot testing with a small representative group
of subjects. These subjects were not involved in the main experiment.
For each topic, three versions of each work task situation were
devised, each version differing in their predicted level of task
complexity. As described in [1] task complexity is a variable that
affects subject perceptions of a task and their interactive behaviour,
e.g., subjects perform more filtering activities with highly complex
search tasks. By developing tasks of different complexity we can
assess how the nature of the task affects the subjects" interactive
behaviour and hence the evidence supplied to IRF algorithms. Task
complexity was varied according to the methodology described in [1],
specifically by varying the number of potential information sources
and types of information required to complete a task. In our pilot
tests (and in a posteriori analysis of the main experiment results) we
verified that subjects reporting of individual task complexity matched
our estimation of the complexity of the task.
Subjects attempted three search tasks: one high complexity, one
moderate complexity and one low complexity2
. They were asked to
read the task, place themselves in the situation it described and find
the information they felt was required to complete the task. Figure 1
shows the task statements for three levels of task complexity for one
of the six search topics.
HC Task: High Complexity
Whilst having dinner with an American colleague, they comment on the
high price of petrol in the UK compared to other countries, despite large
volumes coming from the same source. Unaware of any major differences,
you decide to find out how and why petrol prices vary worldwide.
MC Task: Moderate Complexity
Whilst out for dinner one night, one of your friends' guests is complaining
about the price of petrol and the factors that cause it. Throughout the night
they seem to be complaining about everything they can, reducing the
credibility of their earlier statements so you decide to research which
factors actually are important in determining the price of petrol in the UK.
LC Task: Low Complexity
While out for dinner one night, your friend complains about the rising
price of petrol. However, as you have not been driving for long, you are
unaware of any major changes in price. You decide to find out how the
price of petrol has changed in the UK in recent years.
Figure 1. Varying task complexity (Petrol Prices topic).
2.3 Subjects
156 volunteers expressed an interest in participating in our study. 48
subjects were selected from this set with the aim of populating two
groups, each with 24 subjects: inexperienced (infrequent/
inexperienced searchers) and experienced (frequent/ experienced
searchers). Subjects were not chosen and classified into their groups
until they had completed an entry questionnaire that asked them about
their search experience and computer use.
The average age of the subjects was 22.83 years (maximum 51,
minimum 18, σ = 5.23 years) and 75% had a university diploma or a
higher degree. 47.91% of subjects had, or were pursuing, a
qualification in a discipline related to Computer Science. The subjects
were a mixture of students, researchers, academic staff and others,
with different levels of computer and search experience. The subjects
were divided into the two groups depending on their search
experience, how often they searched and the types of searches they
performed. All were familiar with Web searching, and some with
searching in other domains.
2.4 Methodology
The experiment had a factorial design; with 2 levels of search
experience, 3 experimental systems (although we only report on the
findings from the ERF and IRF systems) and 3 levels of search task
complexity. Subjects attempted one task of each complexity,
2
The main experiment from which these results are drawn had a third
comparator system which had a different interface. Each subject
carried out three tasks, one on each system. We only report on the
results from the ERF and IRF systems as these are the only pertinent
ones for this paper.
switched systems after each task and used each system once. The
order in which systems were used and search tasks attempted was
randomised according to a Latin square experimental design.
Questionnaires used Likert scales, semantic differentials and
open-ended questions to elicit subject opinions [4]. System logging was
also used to record subject interaction.
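As an illustration of this kind of rotation, a 3x3 Latin square can be used to counterbalance the order in which the three systems (and, analogously, the three task complexities) are encountered; the sketch below is a generic construction, not the exact assignment used in the study.

SYSTEMS = ["ERF", "IRF", "Third system"]      # placeholder name for the comparator
COMPLEXITIES = ["High", "Moderate", "Low"]

def latin_square(items):
    # Row r is the item list cyclically shifted by r positions, so every item
    # appears exactly once in each row and each column.
    n = len(items)
    return [[items[(r + c) % n] for c in range(n)] for r in range(n)]

# Subject k could be given system order latin_square(SYSTEMS)[k % 3] and,
# rotated independently, complexity order latin_square(COMPLEXITIES)[(k // 3) % 3].
# latin_square(SYSTEMS) ->
#   [['ERF', 'IRF', 'Third system'],
#    ['IRF', 'Third system', 'ERF'],
#    ['Third system', 'ERF', 'IRF']]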
A tutorial carried out prior to the experiment allowed subjects to use a
non-feedback version of the system to attempt a practice task before
using the first experimental system. Experiments lasted between
oneand-a-half and two hours, dependent on variables such as the time
spent completing questionnaires. Subjects were offered a 5 minute
break after the first hour. In each experiment:
i. the subject was welcomed and asked to read an introduction to
the experiments and sign consent forms. This set of instructions
was written to ensure that each subject received precisely the
same information.
ii. the subject was asked to complete an introductory questionnaire.
This contained questions about the subject's education, general
search experience, computer experience and Web search
experience.
iii. the subject was given a tutorial on the interface, followed by a
training topic on a version of the interface with no RF.
iv. the subject was given three task sheets and asked to choose one
task from the six topics on each sheet. No guidelines were given
to subjects when choosing a task other than they could not
choose a task from any topic more than once. Task complexity
was rotated by the experimenter so each subject attempted one
high complexity task, one moderate complexity task and one low
complexity task.
v. the subject was asked to perform the search and was given 15
minutes to search. The subject could terminate a search early if
they were unable to find any more information they felt helped
them complete the task.
vi. after completion of the search, the subject was asked to complete
a post-search questionnaire.
vii. the remaining tasks were attempted by the subject, following
steps v. and vi.
viii. the subject completed a post-experiment questionnaire and
participated in a post-experiment interview.
Subjects were told that their interaction may be used by the IRF
system to help them as they searched. They were not told which
behaviours would be used or how it would be used.
We now describe the findings of our analysis.
3. FINDINGS
In this section we use the data derived from the experiment to answer
our research questions about the effect of search task complexity,
search experience and stage in search on the use and effectiveness of
IRF. We present our findings per research question. Due to the
ordinal nature of much of the data non-parametric statistical testing is
used in this analysis and the level of significance is set to p < .05,
unless otherwise stated. We use the method proposed by [12] to
determine the significance of differences in multiple comparisons and
that of [9] to test for interaction effects between experimental
variables, the occurrence of which we report where appropriate. All
Likert scales and semantic differentials were on a 5-point scale where
a rating closer to 1 signifies more agreement with the attitude
statement. The category labels HC, MC and LC are used to denote the
high, moderate and low complexity tasks respectively. The highest, or
most positive, values in each table are shown in bold. Our analysis
uses data from questionnaires, post-experiment interviews and
background system logging on the ERF and IRF systems.
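For reference, the statistical machinery used in the remainder of this section can be reproduced with standard libraries; the sketch below runs a Kruskal-Wallis test over three task-complexity groups with a Bonferroni-adjusted alpha, using made-up ratings purely for illustration.

from scipy.stats import kruskal

# Hypothetical 5-point differential ratings for one attitude statement,
# grouped by task complexity; real values would come from the questionnaires.
hc = [1, 2, 2, 1, 3, 2]
mc = [2, 3, 2, 3, 2, 3]
lc = [3, 3, 2, 4, 3, 3]

statistic, p_value = kruskal(hc, mc, lc)
alpha = 0.05 / 3        # Bonferroni correction over three differentials
print(f"chi2(2) = {statistic:.2f}, p = {p_value:.3f}, significant: {p_value < alpha}")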
3.1 Search Task
Searchers attempted three search tasks of varying complexity, each on
a different experimental system. In this section we present an analysis
on the use and usefulness of IRF for search tasks of different
complexities. We present our findings in terms of the RF provided by
subjects and the terms recommended by the systems.
3.1.1 Feedback
We use questionnaires and system logs to gather data on subject
perceptions and provision of RF for different search tasks. In the
postsearch questionnaire subjects were asked about how RF was conveyed
using differentials to elicit their opinion on:
1. the value of the feedback technique: How you conveyed relevance
to the system (i.e. ticking boxes or viewing information) was: easy /
difficult, effective/ ineffective, useful"/not useful.
2. the process of providing the feedback: How you conveyed relevance
to the system made you feel: comfortable/uncomfortable, in
control/not in control.
The average obtained differential values are shown in Table 1 for IRF
and each task category. The value corresponding to the differential
All represents the mean of all differentials for a particular attitude
statement. This gives some overall understanding of the subjects"
feelings which can be useful as the subjects may not answer individual
differentials very precisely. The values for ERF are included for
reference in this table and all other tables and figures in the Findings
section. Since the aim of the paper is to investigate situations in which
IRF might perform well, not a direct comparison between IRF and
ERF, we make only limited comparisons between these two types of
feedback.
Table 1. Subject perceptions of RF method (lower = better).
Differential | Explicit RF: HC MC LC | Implicit RF: HC MC LC
Easy | 2.78 2.47 2.12 | 1.86 1.81 1.93
Effective | 2.94 2.68 2.44 | 2.04 2.41 2.66
Useful | 2.76 2.51 2.16 | 1.91 2.37 2.56
All (1) | 2.83 2.55 2.24 | 1.94 2.20 2.38
Comfortable | 2.27 2.28 2.35 | 2.11 2.15 2.16
In control | 2.01 1.97 1.93 | 2.73 2.68 2.61
All (2) | 2.14 2.13 2.14 | 2.42 2.42 2.39
3 Since this analysis involved many differentials, we use a Bonferroni
correction to control the experiment-wise error rate and set the alpha
level (α) to .0167 and .0250 for statements 1. and 2. respectively,
i.e., .05 divided by the number of differentials. This correction
reduces the number of Type I errors, i.e., rejecting null hypotheses
that are true.
Each cell in Table 1 summarises the subject responses for 16
task-system pairs (16 subjects who ran a high complexity (HC) task on
the ERF system, 16 subjects who ran a medium complexity (MC) task on
the ERF system, etc.). Kruskal-Wallis Tests were applied to each
differential for each type of RF3. Subject responses suggested that
IRF was most effective and useful for more complex search tasks4
and that the differences in all pair-wise comparisons between tasks
were significant5
. Subject perceptions of IRF elicited using the other
differentials did not appear to be affected by the complexity of the
search task6
. To determine whether a relationship exists between the
effectiveness and usefulness of the IRF process and task complexity
we applied Spearman's Rank Order Correlation Coefficient to
participant responses. The results of this analysis suggest that the
effectiveness of IRF and usefulness of IRF are both related to task
complexity; as task complexity increases subject preference for IRF
also increases7
.
On the other hand, subjects felt ERF was more effective and useful
for low complexity tasks8
. Their verbal reporting of ERF, where
perceived utility and effectiveness increased as task complexity
decreased, supports this finding. In tasks of lower complexity the
subjects felt they were better able to provide feedback on whether or
not documents were relevant to the task.
We analyse interaction logs generated by both interfaces to investigate
the amount of RF subjects provided. To do this we use a measure of
search precision: the number of document representations that a
searcher assessed, divided by the total number they could assess.
In ERF this is the proportion of all possible
representations that were marked relevant by the searcher, i.e., those
representations explicitly marked relevant. In IRF this is the
proportion of representations viewed by a searcher over all possible
representations that could have been viewed by the searcher. This
proportion measures the searcher's level of interaction with a
document; we take it to measure the user's interest in the document:
the more document representations viewed, the more interested we
assume a user is in the content of the document.
There are a maximum of 14 representations per document: 4
top-ranking sentences, 1 title, 1 summary, 4 summary sentences and 4
summary sentences in document context. Since the interface shows
document representations from the top-30 documents, there are 420
representations that a searcher can assess. Table 2 shows the proportion
of representations provided as RF by subjects.
Table 2. Feedback and documents viewed.
Measure | Explicit RF: HC MC LC | Implicit RF: HC MC LC
Proportion Feedback | 2.14 2.39 2.65 | 21.50 19.36 15.32
Documents Viewed | 10.63 10.43 10.81 | 10.84 12.19 14.81
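The proportion reported in Table 2 can be computed directly from the interaction logs; the sketch below shows the calculation under the figures stated above (30 result documents with 14 representations each), with the function name being our own.

REPRESENTATIONS_PER_DOC = 14      # 4 TRS + title + summary + 4 sentences + 4 in context
DOCS_SHOWN = 30
TOTAL_REPRESENTATIONS = REPRESENTATIONS_PER_DOC * DOCS_SHOWN    # = 420

def feedback_proportion(assessed_representation_ids):
    """Percentage of all possible representations a searcher assessed:
    marked relevant (ERF) or viewed (IRF)."""
    return 100.0 * len(set(assessed_representation_ids)) / TOTAL_REPRESENTATIONS

# e.g. a searcher who viewed 90 distinct representations scores about 21.4,
# which is of the same order as the IRF values for high complexity tasks.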
For IRF there is a clear pattern: as complexity increases the subjects
viewed fewer documents but viewed more representations for each
document. This suggests a pattern where users are investigating
retrieved documents in more depth. It also means that the amount of
feedback varies based on the complexity of the search task.
4 effective: χ2(2) = 11.62, p = .003; useful: χ2(2) = 12.43, p = .002
5 Dunn's post-hoc tests (multiple comparison using rank sums); all Z ≥ 2.88, all p ≤ .002
6 all χ2(2) ≤ 2.85, all p ≥ .24 (Kruskal-Wallis Tests)
7 effective: all r ≥ 0.644, p ≤ .002; useful: all r ≥ 0.541, p ≤ .009
8 effective: χ2(2) = 7.01, p = .03; useful: χ2(2) = 6.59, p = .037 (Kruskal-Wallis Test); all pair-wise differences significant, all Z ≥ 2.34, all p ≤ .01 (Dunn's post-hoc tests)
Since IRF
is based on the interaction of the searcher, the more they interact, the
more feedback they provide. This has no effect on the number of RF
terms chosen, but may affect the quality of the terms selected.
Correlation analysis revealed a strong negative correlation between the
number of documents viewed and the amount of feedback searchers
provide9
; as the number of documents viewed increases the proportion
of feedback falls (searchers view fewer representations of each
document). This may be a natural consequence of there being less
time to view documents in a time-constrained task environment but, as
we will show, as complexity changes the nature of the information
searchers interact with also appears to change. In the next section we
investigate the effect of task complexity on the terms chosen as a
result of IRF.
3.1.2 Terms
The same RF algorithm was used to select query modification terms in
all systems [16]. We use subject opinions of terms recommended by
the systems as a measure of the effectiveness of IRF with respect to
the terms generated for different search tasks. To test this, subjects
were asked to complete two semantic differentials that completed the
statement: The words chosen by the system were:
relevant/irrelevant and useful/not useful. Table 3 presents
average responses grouped by search task.
Table 3. Subject perceptions of system terms (lower = better).
Differential | Explicit RF: HC MC LC | Implicit RF: HC MC LC
Relevant | 2.50 2.46 2.41 | 1.94 2.35 2.68
Useful | 2.61 2.61 2.59 | 2.06 2.54 2.70
Kruskal-Wallis Tests were applied within each type of RF. The
results indicate that the relevance and usefulness of the terms chosen
by IRF is affected by the complexity of the search task; the terms
chosen are more relevant and useful when the search task is more
complex10. Relevant here was explained as being related to their task,
whereas useful was for terms that were seen as being helpful in the
search task. For ERF, the results indicate that the terms generated are
perceived to be more relevant and useful for less complex search
tasks; although differences between tasks were not significant11
. This
suggests that subject perceptions of the terms chosen for query
modification are affected by task complexity. Comparison between
ERF and IRF shows that subject perceptions also vary for different
types of RF12
.
As well as using data on relevance and utility of the terms chosen, we
used data on term acceptance to measure the perceived value of the
terms suggested. Explicit and Implicit RF systems made
recommendations about which terms could be added to the original
search query. In Table 4 we show the proportion of the top six terms13
shown to the searcher that were added to the search query, for each
type of task and each type of RF.
9 r = −0.696, p = .001 (Pearson's Correlation Coefficient)
10 relevant: χ2(2) = 13.82, p = .001; useful: χ2(2) = 11.04, p = .004; α = .025
11 all χ2(2) ≤ 2.28, all p ≥ .32 (Kruskal-Wallis Test)
12 all T(16) ≥ 102, all p ≤ .021 (Wilcoxon Signed-Rank Test)
Table 4. Term Acceptance (percentage of top six terms).
Proportion of terms | Explicit RF: HC MC LC | Implicit RF: HC MC LC
Accepted | 65.31 67.32 68.65 | 67.45 67.24 67.59
The average number of terms accepted from IRF is approximately the
same across all search tasks and generally the same as that of ERF14
.
As Table 2 shows, subjects marked fewer documents relevant for
highly complex tasks. Therefore, when task complexity increases, the
ERF system has fewer examples of relevant documents and the
expansion terms generated may be poorer. This could explain the
difference in the proportion of recommended terms accepted in ERF
as task complexity increases. For IRF there is little difference in how
many of the recommended terms were chosen by subjects for each
level of task complexity15
. Subjects may have perceived IRF terms as
more useful for high complexity tasks but this was not reflected in the
proportion of IRF terms accepted. Differences may reside in the
nature of the terms accepted; future work will investigate this issue.
3.1.3 Summary
In this section we have presented an investigation on the effect of
search task complexity on the utility of IRF. From the results there
appears to be a strong relation between the complexity of the task and
the subject interaction: subjects preferring IRF for highly complex
tasks. Task complexity did not affect the proportion of terms accepted
in either RF method, despite there being a difference in how
relevant and useful subjects perceived the terms to be for different
complexities; complexity may affect term selection in ways other than
the proportion of terms accepted.
3.2 Search Experience
Experienced searchers may interact differently and give different types of evidence to RF than inexperienced searchers. As such, levels of search experience may affect searchers' use and perceptions of IRF. In our experiment subjects were divided into two groups based on their level of search experience, the frequency with which they searched and the types of searches they performed. In this section we use their perceptions and the interaction logging to address the next research question: the relationship between the usefulness and use of IRF and the search experience of experimental subjects. The data are the same as those analysed in the previous section, but here we focus on search experience rather than the search task.
3.2.1 Feedback
We analyse the results from the attitude statements described at the beginning of Section 3.1.1 (i.e., "How you conveyed relevance to the system was..." and "How you conveyed relevance to the system made you feel..."). These differentials elicited opinions from experimental subjects about the RF method used. In Table 5 we show the mean responses for the inexperienced and experienced subject groups on ERF and IRF; there are 24 subjects per cell.
Table 5. Subject perceptions of RF method (lower = better).

Differential     Explicit RF          Implicit RF
                 Inexp.   Exp.        Inexp.   Exp.
Easy             2.46     2.46        1.84     1.98
Effective        2.75     2.63        2.32     2.43
Useful           2.50     2.46        2.28     2.27
All (1)          2.57     2.52        2.14     2.23
Comfortable      2.46     2.14        2.05     2.24
In control       1.96     1.98        2.73     2.64
All (2)          2.21     2.06        2.39     2.44
The results demonstrate a strong preference among inexperienced subjects for IRF; they found it easier and more effective than experienced subjects did (easy: U(24) = 391, p = .016; effective: U(24) = 399, p = .011; α = .0167, Mann-Whitney Tests). The differences for all other IRF differentials were not statistically significant. For all differentials apart from "in control", inexperienced subjects generally preferred IRF over ERF (all T(24) ≥ 231, all p ≤ .001, Wilcoxon Signed-Rank Test). Inexperienced subjects also felt that IRF was more difficult to control than experienced subjects did (U(24) = 390, p = .018; α = .0250, Mann-Whitney Test). As these subjects have less search experience, they may understand RF processes less well and may be more comfortable with the system gathering feedback implicitly from their interaction. Experienced subjects tended to like ERF more than inexperienced subjects did and felt more comfortable with this feedback method (T(24) = 222, p = .020, Wilcoxon Signed-Rank Test). It appears from these results that experienced subjects found ERF more useful and were more at ease with the ERF process.
In a similar way to Section 3.1.1, we analysed the proportion of feedback that searchers provided to the experimental systems. Our analysis suggested that search experience does not affect the amount of feedback subjects provide (ERF: all U(24) ≤ 319, p ≥ .26; IRF: all U(24) ≤ 313, p ≥ .30; Mann-Whitney Tests).
3.2.2 Terms
We used questionnaire responses to gauge subject opinion on the
relevance and usefulness of the terms from the perspective of
experienced and inexperienced subjects. Table 6 shows the average
differential responses obtained from both subject groups.
Table 6. Subject perceptions of system terms (lower = better).

Differential     Explicit RF          Implicit RF
                 Inexp.   Exp.        Inexp.   Exp.
Relevant         2.58     2.44        2.33     2.21
Useful           2.88     2.63        2.33     2.23
The differences between the subject groups were significant (ERF: all U(24) ≥ 388, p ≤ .020; IRF: all U(24) ≥ 384, p ≤ .024). Experienced subjects generally reacted to the query modification terms chosen by the system more positively than inexperienced
subjects. This finding was supported by the proportion of query
modification terms these subjects accepted. In the same way as in
Section 3.1.2, we analysed the number of query modification terms
recommended by the system that were used by experimental subjects.
Table 7 shows the average number of accepted terms per subject
group.
Table 7. Term Acceptance (percentage of top six terms).

Proportion       Explicit RF          Implicit RF
of terms         Inexp.   Exp.        Inexp.   Exp.
Accepted         63.76    70.44       64.43    71.35
Our analysis of the data shows that the differences between subject groups for each type of RF are significant (IRF: U(24) = 403, p = .009; ERF: U(24) = 396, p = .013); experienced subjects accepted more expansion terms regardless of the type of RF. However, the differences between the same groups for different types of RF are not significant; subjects chose roughly the same percentage of the expansion terms offered irrespective of the type of RF.
3.2.3 Summary
In this section we have analysed data gathered from two subject groups - inexperienced searchers and experienced searchers - on how they perceive and use IRF. The results indicate that inexperienced subjects found IRF easier and more effective than experienced subjects did, while experienced subjects found the terms chosen as a result of IRF more relevant and useful. We also showed that inexperienced subjects generally accepted fewer recommended terms than experienced subjects, perhaps because they were less comfortable with RF or generally submitted shorter search queries. Search experience appears to affect how subjects use the terms recommended as a result of the RF process.
3.3 Search Stage
From our observations of experimental subjects as they searched we
conjectured that RF may be used differently at different times during a
search. To test this, our third research question concerned the use and
usefulness of IRF during the course of a search. In this section we
investigate whether the amount of RF provided by searchers or the
proportion of terms accepted are affected by how far through their
search they are. For the purposes of this analysis a search begins when
a subject poses the first query to the system and progresses until they
terminate the search or reach the maximum allowed time for a search
task of 15 minutes. We do not divide tasks based on this limit as
subjects often terminated their search in less than 15 minutes.
In this section we use data gathered from interaction logs and subject
opinions to investigate the extent to which RF was used and the extent
to which it appeared to benefit our experimental subjects at different
stages in their search.
3.3.1 Feedback
The interaction logs for all searches on the Explicit RF and Implicit RF systems were analysed and each search was divided into nine equal-length time slices. This number of slices gave us an equal number of slices per stage and was a sufficient level of granularity to identify trends in the results. Slices 1-3 correspond to the start of the search, 4-6 to the middle of the search and 7-9 to the end. In Figure 2 we plot the measure of precision described in Section 3.1.1 (i.e., the proportion of all possible representations that were provided as RF) at each of the nine slices, per search task, averaged across all subjects; this allows us to see how the provision of RF was distributed during a search. The total amount of feedback for a single RF method/task complexity pairing across all nine slices corresponds to the value recorded in the first row of Table 2 (e.g., the sum of the RF for IRF/HC across all nine slices of Figure 2 is 21.50%). To simplify the statistical analysis and comparison we use the grouping of start, middle and end.
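A minimal sketch of this slicing procedure, assuming a hypothetical list of timestamped RF events per search session (the event format is illustrative, not taken from the study's logging software):

```python
def slice_feedback(events, session_start, session_end, n_slices=9):
    """Count RF events falling into each of n_slices equal-length time slices,
    then group the slices into start (1-3), middle (4-6) and end (7-9)."""
    duration = session_end - session_start
    slice_counts = [0] * n_slices
    for t in events:  # each event is the timestamp of one RF contribution
        # map the timestamp to a slice index, clamping the final instant
        idx = min(int((t - session_start) / duration * n_slices), n_slices - 1)
        slice_counts[idx] += 1
    start = sum(slice_counts[0:3])
    middle = sum(slice_counts[3:6])
    end = sum(slice_counts[6:9])
    return slice_counts, (start, middle, end)

# Example: a 15-minute (900 s) session with RF events at the given seconds.
counts, stages = slice_feedback([30, 200, 310, 450, 500, 520, 880], 0, 900)
```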
[Figure 2. Distribution of RF provision per search task: search "precision" (% of total representations provided as RF) plotted against slice (1-9), with one line per RF method/task complexity pairing - Explicit RF/HC, Explicit RF/MC, Explicit RF/LC, Implicit RF/HC, Implicit RF/MC, Implicit RF/LC.]
Figure 2 appears to show the existence of a relationship between the stage in the search and the amount of relevance information provided to the different types of feedback algorithm. These are essentially differences in the way users are assessing documents. In the case of ERF, subjects provide explicit relevance assessments throughout most of the search, but there is generally a steep increase in the end phase towards the completion of the search (IRF: all Z ≥ 1.87, p ≤ .031; ERF: start vs. end Z = 2.58, p = .005; Dunn's post-hoc tests).
When using the IRF system, the data indicate that at the start of the search subjects provide little relevance information (although it increases toward the end of the start stage), which corresponds to interacting with few document representations. At this stage the subjects are perhaps concentrating more on reading the retrieved results. Implicit relevance information is generally offered extensively in the middle of the search as subjects interact with results, and it then tails off towards the end of the search. This would appear to correspond to stages of initial exploration, detailed analysis of document representations, and storage and presentation of findings.
Figure 2 also shows the proportion of feedback for tasks of different complexity. The results appear to show a difference in how IRF is used that relates to the complexity of the search task, although it is not statistically significant (χ²(2) = 3.54, p = .17, Friedman Rank Sum Test). More specifically, as complexity increases it appears as though subjects take longer to reach their most interactive point. This suggests that task complexity affects how IRF is distributed during the search and that subjects may be spending more time initially interpreting search results for more complex tasks.
3.3.2 Terms
The terms recommended by the system are chosen based on the
frequency of their occurrence in the relevant items. That is,
non-stopword, non-query terms occurring frequently in search results
regarded as relevant are likely to be recommended to the searcher for
query modification. Since there is a direct association between the RF
and the terms selected we use the number of terms accepted by
searchers at different points in the search as an indication of how
effective the RF has been up until the current point in the search. In
this section we analysed the average number of terms from the top six
terms recommended by Explicit RF and Implicit RF over the course of
a search. The average proportion of the top six recommended terms
that were accepted at each stage are shown in Table 8; each cell
contains data from all 48 subjects.
Table 8. Term Acceptance (proportion of top six terms).

Proportion       Explicit RF               Implicit RF
of terms         start   middle   end      start   middle   end
Accepted         66.87   66.98    67.34    61.85   68.54    73.22
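As a rough illustration of the term-recommendation step described above - ranking non-stopword, non-query terms by how often they occur in items treated as relevant - the following sketch uses a hypothetical stopword list and relevant-text collection (the actual systems' weighting and normalisation may differ):

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for"}  # illustrative only

def recommend_terms(relevant_texts, query_terms, top_n=6):
    """Return the top_n non-stopword, non-query terms occurring most
    frequently across the texts the user treated as relevant."""
    counts = Counter()
    query = {t.lower() for t in query_terms}
    for text in relevant_texts:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token not in STOPWORDS and token not in query:
                counts[token] += 1
    return [term for term, _ in counts.most_common(top_n)]

# Example usage with made-up relevant snippets for the query "petrol prices".
terms = recommend_terms(
    ["Crude oil costs drive fuel duty and petrol prices", "Fuel duty rises again"],
    ["petrol", "prices"],
)
```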
The results show an apparent association between the stage in the search and the number of feedback terms subjects accept. Search stage affects term acceptance in IRF but not in ERF (ERF: χ²(2) = 2.22, p = .33; IRF: χ²(2) = 7.73, p = .021, Friedman Rank Sum Tests; for IRF all pair-wise comparisons are significant at Z ≥ 1.77, all p ≤ .038, Dunn's post-hoc tests). The further into a search a searcher progresses, the more likely they are to accept terms recommended via IRF (significantly more than via ERF; all T(48) ≥ 786, all p ≤ .002, Wilcoxon Signed-Rank Test). A correlation analysis between the proportion of terms accepted at each search stage and cumulative RF (i.e., the sum of all precision at each slice in Figure 2 up to and including the end of the search stage) suggests that in both types of RF the quality of system terms improves as more RF is provided (IRF: r = .712, p < .001; ERF: r = .695, p = .001, Pearson Correlation Coefficient).
3.3.3 Summary
The results from this section indicate that the location in a search
affects the amount of feedback given by the user to the system, and
hence the amount of information that the RF mechanism has to decide
which terms to offer the user. Further, trends in the data suggest that
the complexity of the task affects how subjects provide IRF and the
proportion of system terms accepted.
4. DISCUSSION AND IMPLICATIONS
In this section we discuss the implications of the findings presented in
the previous section for each research question.
4.1 Search Task
The results of our study showed that ERF was preferred for less
complex tasks and IRF for more complex tasks. From observations
and subject comments we perceived that when using ERF systems
subjects generally forgot to provide the feedback but also employed
different criteria during the ERF process (i.e., they were assessing
relevance rather than expressing an interest). When the search was
more complex subjects rarely found results they regarded as
completely relevant. Therefore they struggled to find relevant
information and were unable to communicate RF to the search system.
In these situations subjects appeared to prefer IRF as they do not need
to make a relevance decision to obtain the benefits of RF, i.e., term
suggestions, whereas in ERF they do.
The association between RF method and task complexity has
implications for the design of user studies of RF systems and the RF
systems themselves. It implies that in the design of user studies
involving ERF or IRF systems care should be taken to include tasks of
varying complexities, to avoid task bias. Also, since different types of RF may be appropriate for different task complexities, it implies that a search system able to automatically detect complexity could use both ERF and IRF simultaneously to benefit the searcher. For example, on the IRF system we noticed that
as task complexity falls search behaviour shifts from results interface
to retrieved documents. Monitoring such interaction across a number
of studies may lead to a set of criteria that could help IR systems
automatically detect task complexity and tailor support to suit.
4.2 Search Experience
We analysed the effect of search experience on the utility of IRF. Our analysis revealed a general preference across all subjects for IRF over ERF. That is, the average ratings assigned to IRF were generally more positive than those assigned to ERF. IRF was generally liked by both subject groups (perhaps because it removed the burden of providing relevance information), while ERF was preferred by experienced subjects more than by inexperienced subjects (perhaps because it allowed them to specify which results were used by the system when generating term recommendations).
All subjects felt more in control with ERF than with IRF, but for inexperienced subjects this did not appear to affect their overall preferences (this may also be true for experienced subjects, but the data we have are insufficient to draw that conclusion). These subjects may understand the RF process less, but may be more willing to sacrifice control over feedback in favour of IRF, a process that they perceive more positively.
4.3 Search Stage
We also analysed the effects of search stage on the use and usefulness
of IRF. Through analysis of this nature we can build a more complete
picture of how searchers used RF and how this varies based on the RF
method. The results suggest that IRF is used more in the middle of
the search than at the beginning or end, whereas ERF is used more
towards the end. The results also show the effects of task complexity
on the IRF process and how rapidly subjects reach their most
interactive point. Without an analysis of this type it would not have
been possible to establish the existence of such patterns of behaviour.
The findings suggest that searchers interact differently for IRF and
ERF. Since ERF is not traditionally used until toward the end of the
search it may be possible to incorporate both IRF and ERF into the
same IR system, with IRF being used to gather evidence until subjects
decide to use ERF. The development of such a system represents part
of our ongoing work in this area.
5. CONCLUSIONS
In this paper we have presented an investigation of Implicit Relevance Feedback (IRF). We aimed to answer three research questions about factors that may affect the provision and usefulness of IRF. These factors were search task complexity, the subjects' search experience and the stage in the search. Our overall conclusion was that all factors appear to have some effect on the use and effectiveness of IRF, although the interaction effects between factors are not statistically significant.
Our conclusions per each research question are: (i) IRF is generally
more useful for complex search tasks, where searchers want to focus
on the search task and get new ideas for their search from the system,
(ii) IRF is preferred to ERF overall and generally preferred by
inexperienced subjects wanting to reduce the burden of providing RF,
and (iii) within a single search session IRF is affected by temporal
location in a search (i.e., it is used in the middle, not the beginning or
end) and task complexity.
Studies of this nature are important to establish the circumstances in which a promising technique such as IRF is useful and those in which it is not. Only after such studies have been run and analysed in this way can we develop an understanding of IRF that allows it to be successfully implemented in operational IR systems.
[Appendix A. Interface to the Implicit RF system, with labelled components: 1. Top-Ranking Sentence, 2. Title, 3. Summary, 4. Summary Sentence, 5. Sentence in Context.]
[Appendix B. Checkboxes to mark relevant document titles in the Explicit RF system.]
| implicit relevance feedback;top-ranking sentence;query modification term;interactive query expansion feature;varying complexity;explicit rf system;medium complexity;high complexity whilst;proportion feedback;moderate complexity whilst;search task complexity;browse-based result interface;search precision;relevance feedback |
train_H-97 | Feature Representation for Effective Action-Item Detection | E-mail users face an ever-growing challenge in managing their inboxes due to the growing centrality of email in the workplace for task assignment, action requests, and other roles beyond information dissemination. Whereas Information Retrieval and Machine Learning techniques are gaining initial acceptance in spam filtering and automated folder assignment, this paper reports on a new task: automated action-item detection, in order to flag emails that require responses, and to highlight the specific passage(s) indicating the request(s) for action. Unlike standard topic-driven text classification, action-item detection requires inferring the sender"s intent, and as such responds less well to pure bag-of-words classification. However, using enriched feature sets, such as n-grams (up to n=4) with chi-squared feature selection, and contextual cues for action-item location improve performance by up to 10% over unigrams, using in both cases state of the art classifiers such as SVMs with automated model selection via embedded cross-validation. | 1. INTRODUCTION
E-mail users are facing an increasingly difficult task of
managing their inboxes in the face of mounting challenges that result from
rising e-mail usage. This includes prioritizing e-mails over a range
of sources from business partners to family members, filtering and
reducing junk e-mail, and quickly managing requests that demand
From: Henry Hutchins <hhutchins@innovative.company.com>
To: Sara Smith; Joe Johnson; William Woolings
Subject: meeting with prospective customers
Sent: Fri 12/10/2005 8:08 AM
Hi All,
I"d like to remind all of you that the group from GRTY will be visiting us
next Friday at 4:30 p.m. The current schedule looks like this:
+ 9:30 a.m. Informal Breakfast and Discussion in Cafeteria
+ 10:30 a.m. Company Overview
+ 11:00 a.m. Individual Meetings (Continue Over Lunch)
+ 2:00 p.m. Tour of Facilities
+ 3:00 p.m. Sales Pitch
In order to have this go off smoothly, I would like to practice the
presentation well in advance. As a result, I will need each of your parts by
Wednesday.
Keep up the good work!
-Henry
Figure 1: An E-mail with emphasized Action-Item, an explicit
request that requires the recipient"s attention or action.
the receiver"s attention or action. Automated action-item detection
targets the third of these problems by attempting to detect which
e-mails require an action or response with information, and within
those e-mails, attempting to highlight the sentence (or other
passage length) that directly indicates the action request.
Such a detection system can be used as one part of an e-mail
agent which would assist a user in processing important e-mails
quicker than would have been possible without the agent. We view
action-item detection as one necessary component of a successful
e-mail agent which would perform spam detection, action-item
detection, topic classification and priority ranking, among other
functions. The utility of such a detector can manifest as a method of
prioritizing e-mails according to task-oriented criteria other than
the standard ones of topic and sender or as a means of ensuring that
the email user hasn"t dropped the proverbial ball by forgetting to
address an action request.
Action-item detection differs from standard text classification in
two important ways. First, the user is interested both in
detecting whether an email contains action items and in locating exactly
where these action item requests are contained within the email
body. In contrast, standard text categorization merely assigns a
topic label to each text, whether that label corresponds to an e-mail
folder or a controlled indexing vocabulary [12, 15, 22]. Second,
action-item detection attempts to recover the email sender's intent
- whether she means to elicit response or action on the part of the
receiver; note that for this task, classifiers using only unigrams as
features do not perform optimally, as evidenced in our results
below. Instead we find that we need more information-laden features
such as higher-order n-grams. Text categorization by topic, on the
other hand, works very well using just individual words as features
[2, 9, 13, 17]. In fact, genre-classification, which one would think
may require more than a bag-of-words approach, also works quite
well using just unigram features [14]. Topic detection and
tracking (TDT), also works well with unigram feature sets [1, 20]. We
believe that action-item detection is one of the first clear instances
of an IR-related task where we must move beyond bag-of-words
to achieve high performance, albeit not too far, as bag-of-n-grams
seem to suffice.
We first review related work for similar text classification
problems such as e-mail priority ranking and speech act identification.
Then we more formally define the action-item detection problem,
discuss the aspects that distinguish it from more common problems
like topic classification, and highlight the challenges in
constructing systems that can perform well at the sentence and document
level. From there, we move to a discussion of feature
representation and selection techniques appropriate for this problem and how
standard text classification approaches can be adapted to smoothly
move from the sentence-level detection problem to the
documentlevel classification problem. We then conduct an empirical analysis
that helps us determine the effectiveness of our feature extraction
procedures as well as establish baselines for a number of
classification algorithms on this task. Finally, we summarize this paper"s
contributions and consider interesting directions for future work.
2. RELATED WORK
Several other researchers have considered very similar text
classification tasks. Cohen et al. [5] describe an ontology of speech
acts, such as Propose a Meeting, and attempt to predict when an
e-mail contains one of these speech acts. We consider action-items
to be an important specific type of speech act that falls within their
more general classification. While they provide results for
several classification methods, their methods only make use of human
judgments at the document-level. In contrast, we consider whether
accuracy can be increased by using finer-grained human judgments
that mark the specific sentences and phrases of interest.
Corston-Oliver et al. [6] consider detecting items in e-mail to
Put on a To-Do List. This classification task is very similar to
ours except they do not consider simple factual questions to
belong to this category. We include questions, but note that not all
questions are action-items - some are rhetorical or simply social
convention, How are you?. From a learning perspective, while
they make use of judgments at the sentence-level, they do not
explicitly compare what if any benefits finer-grained judgments offer.
Additionally, they do not study alternative choices or approaches to
the classification task. Instead, they simply apply a standard SVM
at the sentence-level and focus primarily on a linguistic analysis of
how the sentence can be logically reformulated before adding it to
the task list. In this study, we examine several alternative
classification methods, compare document-level and sentence-level
approaches and analyze the machine learning issues implicit in these
problems.
Interest in a variety of learning tasks related to e-mail has been
rapidly growing in the recent literature. For example, in a forum
dedicated to e-mail learning tasks, Culotta et al. [7] presented
methods for learning social networks from e-mail. In this work, we do
not focus on peer relationships; however, such methods could
complement those here since peer relationships often influence word
choice when requesting an action.
3. PROBLEM DEFINITION & APPROACH
In contrast to previous work, we explicitly focus on the benefits
that finer-grained, more costly, sentence-level human judgments
offer over coarse-grained document-level judgments. Additionally,
we consider multiple standard text classification approaches and
analyze both the quantitative and qualitative differences that arise
from taking a document-level vs. a sentence-level approach to
classification. Finally, we focus on the representation necessary to
achieve the most competitive performance.
3.1 Problem Definition
In order to provide the most benefit to the user, a system would
not only detect the document, but it would also indicate the specific
sentences in the e-mail which contain the action-items. Therefore,
there are three basic problems:
1. Document detection: Classify a document as to whether or
not it contains an action-item.
2. Document ranking: Rank the documents such that all
documents containing action-items occur as high as possible in
the ranking.
3. Sentence detection: Classify each sentence in a document as
to whether or not it is an action-item.
As in most Information Retrieval tasks, the weight the
evaluation metric should give to precision and recall depends on the
nature of the application. In situations where a user will eventually
read all received messages, ranking (e.g., via precision at recall of
1) may be most important since this will help encourage shorter
delays in communications between users. In contrast, high-precision
detection at low recall will be of increasing importance when the
user is under severe time-pressure and therefore will likely not read
all mail. This can be the case for crisis managers during disaster
management. Finally, sentence detection plays a role in both time-pressure
timepressure situations and simply to alleviate the user"s required time
to gist the message.
3.2 Approach
As mentioned above, the labeled data can come in one of two
forms: a document-labeling provides a yes/no label for each
document as to whether it contains an action-item; a phrase-labeling
provides only a yes label for the specific items of interest. We term
the human judgments a phrase-labeling since the user"s view of the
action-item may not correspond with actual sentence boundaries or
predicted sentence boundaries. Obviously, it is straightforward to
generate a document-labeling consistent with a phrase-labeling by
labeling a document yes if and only if it contains at least one
phrase labeled yes.
To train classifiers for this task, we can take several viewpoints
related to both the basic problems we have enumerated and the form
of the labeled data. The document-level view treats each e-mail as
a learning instance with an associated class-label. Then, the
document can be converted to a feature-value vector and learning
progresses as usual. Applying a document-level classifier to document
detection and ranking is straightforward. In order to apply it to
sentence detection, one must make additional steps. For example,
if the classifier predicts a document contains an action-item, then
areas of the document that contain a high-concentration of words
which the model weights heavily in favor of action-items can be
indicated. The obvious benefit of the document-level approach is
that training set collection costs are lower since the user only has
to specify whether or not an e-mail contains an action-item and not
the specific sentences.
In the sentence-level view, each e-mail is automatically segmented
into sentences, and each sentence is treated as a learning instance
with an associated class-label. Since the phrase-labeling provided
by the user may not coincide with the automatic segmentation, we
must determine what label to assign a partially overlapping
sentence when converting it to a learning instance. Once trained,
applying the resulting classifiers to sentence detection is now
straightforward, but in order to apply the classifiers to document
detection and document ranking, the individual predictions over each
sentence must be aggregated in order to make a document-level
prediction. This approach has the potential to benefit from
more specific labels that enable the learner to focus attention on the key
sentences instead of having to learn based on data that the majority
of the words in the e-mail provide no or little information about
class membership.
3.2.1 Features
Consider some of the phrases that might constitute part of an
action item: would like to know, let me know, as soon as
possible, have you. Each of these phrases consists of common
words that occur in many e-mails. However, when they occur in
the same sentence, they are far more indicative of an action-item.
Additionally, order can be important: consider have you versus
you have. Because of this, we posit that n-grams play a larger
role in this problem than is typical of problems like topic
classification. Therefore, we consider all n-grams up to size 4.
When using n-grams, if we find an n-gram of size 4 in a segment
of text, we can represent the text as just one occurrence of the
n-gram, or as one occurrence of the n-gram and an occurrence of each
smaller n-gram contained by it. We choose the second of these
alternatives since this will allow the algorithm itself to smoothly
back off in terms of recall. Methods such as naïve Bayes may be
hurt by such a representation because of double-counting.
Since sentence-ending punctuation can provide information, we
retain the terminating punctuation token when it is identifiable.
Additionally, we add a beginning-of-sentence and end-of-sentence
token in order to capture patterns that are often indicators at the
beginning or end of a sentence. Assuming proper punctuation, these
extra tokens are unnecessary, but often e-mail lacks proper
punctuation. In addition, for the sentence-level classifiers that use
n-grams, we additionally code for each sentence a binary encoding
of the position of the sentence relative to the document. This
encoding has eight associated features that represent which octile (the
first eighth, second eighth, etc.) contains the sentence.
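A minimal sketch of this feature extraction - n-grams up to length 4 with all contained sub-n-grams, boundary tokens, and an octile position indicator - is shown below. The tokenisation and feature naming are illustrative assumptions, not the exact pipeline used in the experiments:

```python
def sentence_features(tokens, sent_index, n_sents, max_n=4):
    """Bag-of-n-grams (n <= max_n) over a sentence padded with boundary tokens,
    plus a binary octile-position feature for the sentence."""
    padded = ["<s>"] + tokens + ["</s>"]          # beginning/end-of-sentence tokens
    feats = {}
    for n in range(1, max_n + 1):                  # each n-gram also contributes all
        for i in range(len(padded) - n + 1):       # of its smaller n-grams, because
            gram = " ".join(padded[i:i + n])       # every n is emitted separately
            feats[gram] = feats.get(gram, 0) + 1
    octile = min(int(sent_index / max(n_sents, 1) * 8), 7)
    feats[f"POS_OCTILE_{octile}"] = 1              # which eighth of the message holds it
    return feats

# Example: third sentence of a ten-sentence message.
f = sentence_features(["let", "me", "know", "by", "wednesday", "."], 2, 10)
```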
3.2.2 Implementation Details
In order to compare the document-level to the sentence-level
approach, we compare predictions at the document-level. We do not
address how to use a document-level classifier to make predictions
at the sentence-level.
In order to automatically segment the text of the e-mail, we use
the RASP statistical parser [4]. Since the automatically segmented
sentences may not correspond directly with the phrase-level
boundaries, we treat any sentence that contains at least 30% of a marked
action-item segment as an action-item. When evaluating sentence detection for the sentence-level system, we use these class labels as ground truth. Since we are not evaluating multiple segmentation approaches, this does not bias any of the methods. If multiple segmentation systems were under evaluation, one would need to use a metric that matched predicted positive sentences to phrases labeled positive. The metric would need to punish overly long true predictions as well as too-short predictions. Our criterion for converting to labeled instances implicitly captures both requirements. Since the segmentation is fixed, an overly long prediction amounts to predicting yes for many no instances, since presumably the extra length corresponds to additional segmented sentences, none of which contains 30% of an action-item. Likewise, a too-short prediction must correspond to a small sentence included in the action-item but not constituting all of it. Therefore, for a prediction to be considered too short, there will be an additional preceding or following sentence that is an action-item for which we incorrectly predicted no.
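The 30%-overlap labeling rule can be sketched as follows; the span representation (character offsets) is an assumption made for illustration:

```python
def label_sentences(sentence_spans, action_item_spans, min_overlap=0.30):
    """Label each automatically segmented sentence as an action-item (1) if it
    contains at least min_overlap of some human-marked action-item span."""
    labels = []
    for s_start, s_end in sentence_spans:
        positive = 0
        for a_start, a_end in action_item_spans:
            overlap = max(0, min(s_end, a_end) - max(s_start, a_start))
            a_len = max(a_end - a_start, 1)
            if overlap / a_len >= min_overlap:
                positive = 1
                break
        labels.append(positive)
    return labels

# Example: two sentences; the second covers most of the marked segment (40, 90).
labels = label_sentences([(0, 40), (40, 100)], [(40, 90)])  # -> [0, 1]
```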
Once a sentence-level classifier has made a prediction for each
sentence, we must combine these predictions to make both a
document-level prediction and a document-level score. We use the
simple policy of predicting positive when any of the sentences is
predicted positive. In order to produce a document score for
ranking, the confidence that the document contains an action-item is:
\[
\psi(d) =
\begin{cases}
\frac{1}{n(d)} \sum_{s \in d \,:\, \pi(s) = 1} \psi(s) & \text{if } \pi(s) = 1 \text{ for some } s \in d,\\[6pt]
\frac{1}{n(d)} \max_{s \in d} \psi(s) & \text{otherwise,}
\end{cases}
\]
where s is a sentence in document d, π is the classifier's 1/0
prediction, ψ is the score the classifier assigns as its confidence that
π(s) = 1, and n(d) is the greater of 1 and the number of (unigram)
tokens in the document. In other words, when any sentence is
predicted positive, the document score is the length normalized sum of
the sentence scores above threshold. When no sentence is predicted
positive, the document score is the maximum sentence score
normalized by length. As in other text problems, we are more likely to
emit false positives for documents with more words or sentences.
Thus we include a length normalization factor.
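A direct sketch of this scoring rule, assuming each sentence has already been given a 0/1 prediction and a confidence score by the sentence-level classifier (the data layout here is an illustrative assumption):

```python
def document_score(sentences):
    """sentences: list of (prediction, confidence, n_unigram_tokens) triples.
    Returns (document_prediction, document_score) following the rule above."""
    n_d = max(1, sum(tokens for _, _, tokens in sentences))  # length normalizer
    positives = [conf for pred, conf, _ in sentences if pred == 1]
    if positives:
        # any positive sentence -> positive document; sum of above-threshold scores
        return 1, sum(positives) / n_d
    # otherwise fall back to the best (still sub-threshold) sentence score
    best = max((conf for _, conf, _ in sentences), default=0.0)
    return 0, best / n_d

# Example: three sentences, one predicted positive with confidence 0.9.
pred, score = document_score([(0, 0.2, 12), (1, 0.9, 8), (0, 0.1, 20)])
```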
4. EXPERIMENTAL ANALYSIS
4.1 The Data
Our corpus consists of e-mails obtained from volunteers at an
educational institution and cover subjects such as: organizing a
research workshop, arranging for job-candidate interviews,
publishing proceedings, and talk announcements. The messages were
anonymized by replacing the names of each individual and institution with a pseudonym. (An even more highly anonymized version of the corpus can be made available for some outside experimentation; please contact the authors for more information on obtaining this data.) After attempting to identify and eliminate duplicate e-mails, the corpus contains 744 e-mail messages.
After identity anonymization, the corpora has three basic
versions. Quoted material refers to the text of a previous e-mail that
an author often leaves in an e-mail message when responding to the
e-mail. Quoted material can act as noise when learning since it may
include action-items from previous messages that are no longer
relevant. To isolate the effects of quoted material, we have three
versions of the corpora. The raw form contains the basic messages.
The auto-stripped version contains the messages after quoted
material has been automatically removed. The hand-stripped version
contains the messages after quoted material has been removed by
a human. Additionally, the hand-stripped version has had any xml
content and e-mail signatures removed - leaving only the essential
content of the message. The studies reported here are performed
with the hand-stripped version. This allows us to balance the
cognitive load in terms of number of tokens that must be read in the
user-studies we report - including quoted material would
complicate the user studies since some users might skip the material while
others read it. Additionally, ensuring all quoted material is removed prevents tainting the cross-validation, since otherwise a test item could occur as quoted material in a training document.
4.1.1 Data Labeling
Two human annotators labeled each message as to whether or
not it contained an action-item. In addition, they identified each
segment of the e-mail which contained an action-item. A segment
is a contiguous section of text selected by the human annotators
and may span several sentences or a complete phrase contained in
a sentence. They were instructed that an action item is an explicit
request for information that requires the recipient's attention or a
required action and told to highlight the phrases or sentences that
make up the request.
                          Annotator 1
                          No      Yes
Annotator 2      No       391     26
                 Yes      29      298

Table 1: Agreement of Human Annotators at Document Level
Annotator One labeled 324 messages as containing action items.
Annotator Two labeled 327 messages as containing action items.
The agreement of the human annotators is shown in Tables 1 and
2. The annotators are said to agree at the document-level when
both marked the same document as containing no action-items or
both marked at least one action-item regardless of whether the text
segments were the same. At the document-level, the annotators
agreed 93% of the time. The kappa statistic [3, 5] is often used to
evaluate inter-annotator agreement:
\[
\kappa = \frac{A - R}{1 - R}
\]
A is the empirical estimate of the probability of agreement. R
is the empirical estimate of the probability of random agreement
given the empirical class priors. A value close to −1 implies the
annotators agree far less often than would be expected randomly,
while a value close to 1 means they agree more often than randomly
expected.
At the document-level, the kappa statistic for inter-annotator
agreement is 0.85. This value is both strong enough to expect the
problem to be learnable and is comparable with results for similar tasks
[5, 6].
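As a worked check against Tables 1 and 2, the reported kappa values can be recomputed directly from the agreement counts (this is just the standard Cohen's kappa calculation, not code from the paper):

```python
def cohen_kappa(n_no_no, n_no_yes, n_yes_no, n_yes_yes):
    """Cohen's kappa from a 2x2 agreement table (rows: Annotator 2, cols: Annotator 1)."""
    total = n_no_no + n_no_yes + n_yes_no + n_yes_yes
    A = (n_no_no + n_yes_yes) / total                      # observed agreement
    a1_no = (n_no_no + n_yes_no) / total                   # Annotator 1 marginals
    a1_yes = (n_no_yes + n_yes_yes) / total
    a2_no = (n_no_no + n_no_yes) / total                   # Annotator 2 marginals
    a2_yes = (n_yes_no + n_yes_yes) / total
    R = a1_no * a2_no + a1_yes * a2_yes                    # chance agreement
    return (A - R) / (1 - R)

print(round(cohen_kappa(391, 26, 29, 298), 2))   # document level -> 0.85
print(round(cohen_kappa(5810, 65, 74, 352), 2))  # sentence level -> 0.82
```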
In order to determine the sentence-level agreement, we use each
judgment to create a sentence-corpus with labels as described in
Section 3.2.2, then consider the agreement over these sentences.
This allows us to compare agreement over "no" judgments as well. We
perform this comparison over the hand-stripped corpus since that
eliminates spurious no judgments that would come from
including quoted material, etc. Both annotators were free to label the
subject as an action-item, but since neither did, we omit the subject
line of the message as well. This only reduces the number of no
agreements. This leaves 6301 automatically segmented sentences.
At the sentence-level, the annotators agreed 98% of the time, and
the kappa statistic for inter-annotator agreement is 0.82.
                          Annotator 1
                          No      Yes
Annotator 2      No       5810    65
                 Yes      74      352

Table 2: Agreement of Human Annotators at Sentence Level

In order to produce one single set of judgments, the human annotators went through each annotation where there was disagreement and came to a consensus opinion. The annotators did not collect statistics during this process but anecdotally reported that the majority of disagreements were either cases of clear annotator oversight or different interpretations of conditional statements. For example, "If you would like to keep your job, come to tomorrow's meeting" implies a required action, whereas "If you would like to join the football betting pool, come to tomorrow's meeting" does not.
The first would be an action-item in most contexts while the
second would not. Of course, many conditional statements are not so
clearly interpretable. After reconciling the judgments there are 416 e-mails with no action-items and 328 e-mails containing action-items. Of the 328 e-mails containing action-items, 259 messages have one action-item segment; 55 messages have two action-item segments; 11 messages have three action-item segments. Two messages have four action-item segments, and one message has six action-item segments. Computing the sentence-level agreement using the reconciled gold-standard judgments with each of the annotators' individual judgments gives a kappa of 0.89 for Annotator One and a kappa of 0.92 for Annotator Two.
In terms of message characteristics, there were on average 132
content tokens in the body after stripping. For action-item
messages, there were 115. However, by examining Figure 2 we see
the length distributions are nearly identical. As would be expected
for e-mail, it is a long-tailed distribution with about half the
messages having more than 60 tokens in the body (this paragraph has
65 tokens).
4.2 Classifiers
For this experiment, we have selected a variety of standard text
classification algorithms. In selecting algorithms, we have chosen
algorithms that are not only known to work well but which differ
along such lines as discriminative vs. generative and lazy vs.
eager. We have done this in order to provide both a competitive and
thorough sampling of learning methods for the task at hand. This
is important since it is easy to improve a strawman classifier by
introducing a new representation. By thoroughly sampling
alternative classifier choices we demonstrate that representation
improvements over bag-of-words are not due to using the information in the
bag-of-words poorly.
4.2.1 kNN
We employ a standard variant of the k-nearest neighbor
algorithm used in text classification, kNN with s-cut score
thresholding [19]. We use a tfidf-weighting of the terms with a distance-weighted vote of the neighbors to compute the score before thresholding it. In order to choose the value of s for thresholding, we perform leave-one-out cross-validation over the training set. The value of k is set to $2(\lfloor \log_2 N \rfloor + 1)$, where N is the number of
training points. This rule for choosing k is theoretically motivated
by results which show such a rule converges to the optimal
classifier as the number of training points increases [8]. In practice,
we have also found it to be a computational convenience that
frequently leads to comparable results with numerically optimizing k
via a cross-validation procedure.
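A rough sketch of the kNN scoring rule described above; cosine similarity over tfidf vectors is assumed as the distance weighting, and the exact weighting and s-cut search used in the experiments may differ:

```python
import math

def knn_score(query_vec, training, k=None):
    """training: list of (tfidf_vector_as_dict, label_in_{0,1}) pairs.
    Returns the similarity-weighted positive vote among the k nearest neighbors."""
    if k is None:
        k = 2 * (int(math.floor(math.log2(len(training)))) + 1)  # k = 2(floor(log2 N) + 1)

    def cosine(u, v):
        dot = sum(u[t] * v.get(t, 0.0) for t in u)
        nu = math.sqrt(sum(x * x for x in u.values())) or 1.0
        nv = math.sqrt(sum(x * x for x in v.values())) or 1.0
        return dot / (nu * nv)

    sims = sorted(((cosine(query_vec, v), y) for v, y in training), reverse=True)[:k]
    return sum(sim * y for sim, y in sims)  # compare against the s-cut threshold

# Predict positive when knn_score(x, training) exceeds a threshold s chosen by
# leave-one-out cross-validation over the training set.
```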
4.2.2 Naïve Bayes
We use a standard multinomial naïve Bayes classifier [16]. In using this classifier, we smoothed word and class probabilities using a Bayesian estimate (with the word prior) and a Laplace m-estimate, respectively.
[Figure 2: The Histogram (left) and Distribution (right) of Message Length - number of messages and percentage of messages plotted against number of tokens, for all messages and for action-item messages. A bin size of 20 words was used. Only tokens in the body after hand-stripping were counted. After stripping, the majority of words left are usually actual message content.]
F1
Classifier           Document Unigram     Document Ngram       Sentence Unigram     Sentence Ngram
kNN                  0.6670 ± 0.0288      0.7108 ± 0.0699      0.7615 ± 0.0504      0.7790 ± 0.0460 *
naïve Bayes          0.6572 ± 0.0749      0.6484 ± 0.0513      0.7715 ± 0.0597      0.7777 ± 0.0426 *
SVM                  0.6904 ± 0.0347      0.7428 ± 0.0422      0.7282 ± 0.0698      0.7682 ± 0.0451 *
Voted Perceptron     0.6288 ± 0.0395      0.6774 ± 0.0422      0.6511 ± 0.0506      0.6798 ± 0.0913 *

Accuracy
Classifier           Document Unigram     Document Ngram       Sentence Unigram     Sentence Ngram
kNN                  0.7029 ± 0.0659      0.7486 ± 0.0505      0.7972 ± 0.0435      0.8092 ± 0.0352 *
naïve Bayes          0.6074 ± 0.0651      0.5816 ± 0.1075      0.7863 ± 0.0553      0.8145 ± 0.0268 *
SVM                  0.7595 ± 0.0309      0.7904 ± 0.0349      0.7958 ± 0.0551      0.8173 ± 0.0258 *
Voted Perceptron     0.6531 ± 0.0390      0.7164 ± 0.0376 *    0.6413 ± 0.0833      0.7082 ± 0.1032

Table 3: Average Document-Detection Performance during Cross-Validation for Each Method, with the Sample Standard Deviation (S_{n-1}). The best performance for each classifier is marked with *.
4.2.3 SVM
We have used a linear SVM with a tfidf feature representation
and L2-norm as implemented in the SVMlight package v6.01 [11].
All default settings were used.
4.2.4 Voted Perceptron
Like the SVM, the Voted Perceptron is a kernel-based
learning method. We use the same feature representation and kernel
as we have for the SVM, a linear kernel with tfidf-weighting and
an L2-norm. The voted perceptron is an online-learning method
that keeps a history of past perceptrons used, as well as a weight
signifying how often that perceptron was correct. With each new
training example, a correct classification increases the weight on
the current perceptron and an incorrect classification updates the
perceptron. The output of the classifier uses the weights on the stored perceptrons to make a final voted classification. When used in an offline manner, multiple passes can be made through the training
data. Both the voted perceptron and the SVM give a solution from
the same hypothesis space - in this case, a linear classifier.
Furthermore, it is well-known that the Voted Perceptron increases the
margin of the solution after each pass through the training data [10].
Since Cohen et al. [5] obtain worse results using an SVM than a
Voted Perceptron with one training iteration, they conclude that the
best solution for detecting speech acts may not lie in an area with
a large margin. Because their tasks are highly similar to ours, we
employ both classifiers to ensure we are not overlooking a
competitive alternative classifier to the SVM for the basic bag-of-words
representation.
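A compact sketch of the voted perceptron described above, in its basic form; the kernel here is a plain dot product standing in for the tfidf linear kernel, and the training loop is simplified for illustration:

```python
def train_voted_perceptron(data, epochs=1):
    """data: list of (feature_vector_as_list, label_in_{-1,+1}).
    Returns a list of (weight_vector, survival_count) pairs."""
    dim = len(data[0][0])
    w = [0.0] * dim
    perceptrons = []          # history of (weights, how long each survived)
    count = 1
    for _ in range(epochs):
        for x, y in data:
            score = sum(wi * xi for wi, xi in zip(w, x))
            if y * score <= 0:                       # mistake: store old perceptron,
                perceptrons.append((w[:], count))    # update weights, reset its count
                w = [wi + y * xi for wi, xi in zip(w, x)]
                count = 1
            else:
                count += 1                           # correct: current perceptron gains weight
    perceptrons.append((w, count))
    return perceptrons

def predict_voted(perceptrons, x):
    vote = sum(c * (1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1)
               for w, c in perceptrons)
    return 1 if vote > 0 else -1
```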
4.3 Performance Measures
To compare the performance of the classification methods, we
look at two standard performance measures, F1 and accuracy. The
F1 measure [18, 21] is the harmonic mean of precision and recall, where $\mathrm{Precision} = \frac{\text{Correct Positives}}{\text{Predicted Positives}}$ and $\mathrm{Recall} = \frac{\text{Correct Positives}}{\text{Actual Positives}}$.
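For concreteness, the two measures reduce to the following small computation over a set of binary predictions (a generic sketch, not tied to the paper's evaluation scripts):

```python
def f1_and_accuracy(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0   # correct positives / predicted positives
    recall = tp / (tp + fn) if tp + fn else 0.0      # correct positives / actual positives
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    return f1, accuracy
```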
4.4 Experimental Methodology
We perform standard 10-fold cross-validation on the set of
documents. For the sentence-level approach, all sentences in a
document are either entirely in the training set or entirely in the test set
for each fold. For significance tests, we use a two-tailed t-test [21]
to compare the values obtained during each cross-validation fold
with a p-value of 0.05.
Feature selection was performed using the chi-squared statistic. Different levels of feature selection were considered for each classifier. Each of the following numbers of features was tried: 10, 25, 50, 100, 250, 750, 1000, 2000, 4000. There are approximately 4700 unigram tokens without feature selection. In order to choose
the number of features to use for each classifier, we perform nested
cross-validation and choose the settings that yield the optimal
document-level F1 for that classifier. For this study, only the body of
each e-mail message was used. Feature selection is always applied
to all candidate features. That is, for the n-gram representation, the
n-grams and position features are also subject to removal by the
feature selection method.
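A rough scikit-learn analogue of this setup - chi-squared feature selection with the number of features chosen by nested cross-validation to maximise F1 - is sketched below. The paper's experiments used SVMlight and custom tooling; this is only an illustrative equivalent, and the corpus loading function is a placeholder:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV, cross_val_score

texts, labels = load_email_corpus()   # placeholder: returns message bodies and 0/1 labels

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 4))),   # unigrams through 4-grams
    ("select", SelectKBest(chi2)),                    # chi-squared feature selection
    ("svm", LinearSVC()),                             # linear SVM, default settings
])

# Inner (nested) cross-validation picks the number of features that maximises F1.
search = GridSearchCV(
    pipeline,
    param_grid={"select__k": [10, 25, 50, 100, 250, 750, 1000, 2000, 4000]},
    scoring="f1",
    cv=10,
)

# Outer 10-fold cross-validation estimates document-level performance.
scores = cross_val_score(search, texts, labels, scoring="f1", cv=10)
```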
4.5 Results
The results for document-level classification are given in Table
3. The primary hypothesis we are concerned with is that n-grams
are critical for this task; if this is true, we expect to see a significant
gap in performance between the document-level classifiers that use
n-grams (denoted Document Ngram) and those using only unigram
features (denoted Document Unigram). Examining Table 3, we observe that this is indeed the case for every classifier except naïve Bayes. This difference in performance produced by the n-gram representation is statistically significant for each classifier except for naïve Bayes and the accuracy metric for kNN (see Table 4). Naïve Bayes's poor performance with the n-gram representation is not surprising, since the bag-of-n-grams causes excessive double-counting as mentioned in Section 3.2.1; however, naïve Bayes is not hurt at the sentence-level because the sparse examples provide few chances for agglomerative effects of double-counting. In either case, when a language-modeling approach is desired, modeling the n-grams directly would be preferable to naïve Bayes. More importantly for the n-gram hypothesis, the n-grams lead to the best document-level classifier performance as well.
As would be expected, the difference between the sentence-level
n-gram representation and unigram representation is small. This
is because the window of text is so small that the unigram
representation, when done at the sentence-level, implicitly picks up
on the power of the n-grams. Further improvement would signify that the order of the words matters even when considering only a small, sentence-sized window. Therefore, the finer-grained sentence-level judgments allow a unigram representation to succeed, but only when applied within a small window - behaving as an n-gram representation for all practical purposes.
                    Document Winner    Sentence Winner
kNN                 Ngram              Ngram
naïve Bayes         Unigram            Ngram
SVM                 Ngram†             Ngram
Voted Perceptron    Ngram†             Ngram

Table 4: Significance results for n-grams versus unigrams for document detection using document-level and sentence-level classifiers. When the F1 result is statistically significant, it is shown in bold. When the accuracy result is significant, it is shown with a †.
                    F1 Winner    Accuracy Winner
kNN                 Sentence     Sentence
naïve Bayes         Sentence     Sentence
SVM                 Sentence     Sentence
Voted Perceptron    Sentence     Document

Table 5: Significance results for sentence-level classifiers vs. document-level classifiers for the document detection problem. When the result is statistically significant, it is shown in bold.
Further highlighting the improvement from finer-grained
judgments and n-grams, Figure 3 graphically depicts the edge the SVM
sentence-level classifier has over the standard bag-of-words approach
with a precision-recall curve. In the high precision area of the
graph, the consistent edge of the sentence-level classifier is rather
impressive - continuing at precision 1 out to 0.1 recall. This
would mean that a tenth of the user's action-items would be placed
at the top of their action-item sorted inbox. Additionally, the large
separation at the top right of the curves corresponds to the area
where the optimal F1 occurs for each classifier, agreeing with the
large improvement from 0.6904 to 0.7682 in F1 score. Considering
the relative unexplored nature of classification at the sentence-level,
this gives great hope for further increases in performance.
                    Accuracy              F1
                    Unigram    Ngram      Unigram    Ngram
kNN                 0.9519     0.9536     0.6540     0.6686
naïve Bayes         0.9419     0.9550     0.6176     0.6676
SVM                 0.9559     0.9579     0.6271     0.6672
Voted Perceptron    0.8895     0.9247     0.3744     0.5164

Table 6: Performance of the Sentence-Level Classifiers at Sentence Detection
Although Cohen et al. [5] observed that the Voted Perceptron
with a single training iteration outperformed SVM in a set of
similar tasks, we see no such behavior here. This further strengthens the
evidence that an alternate classifier with the bag-of-words
representation could not reach the same level of performance. The Voted
Perceptron classifier does improve when the number of training
iterations are increased, but it is still lower than the SVM classifier.
Sentence detection results are presented in Table 6. With regard
to the sentence detection problem, we note that the F1 measure
gives a better feel for the remaining room for improvement in this
difficult problem. That is, unlike document detection, where action-item documents are fairly common, action-item sentences are very rare. Thus, as in other text problems, the accuracy numbers are deceptively high simply because of the default accuracy attainable by always predicting no. Although the results here are significantly above random, it is unclear what level of performance is necessary
for sentence detection to be useful in and of itself and not simply
as a means to document ranking and classification.
[Figure 4: Users find action-items quicker when assisted by a classification system.]
Finally, when considering a new type of classification task, one
of the most basic questions is whether an accurate classifier built
for the task can have an impact on the end-user. In order to
demonstrate the impact this task can have on e-mail users, we conducted
a user study using an earlier less-accurate version of the sentence
classifier - where instead of using just a single sentence, a three-sentence windowed approach was used. There were three distinct
sets of e-mail in which users had to find action-items. These sets
were either presented in a random order (Unordered), ordered by
the classifier (Ordered), or ordered by the classifier and with the center sentence in the highest-confidence window highlighted (Order+help).

[Figure 3: Precision-recall curves for action-item detection SVM performance (post model selection), comparing Document Unigram and Sentence Ngram; both n-grams and a small prediction window lead to consistent improvements over the standard approach.]

In order to perform fair comparisons between
conditions, the overall number of tokens in each message set should be
approximately equal; that is, the cognitive reading load should be
approximately the same before the classifier's reordering.
Additionally, users typically show practice effects by improving at the
overall task and thus performing better at later message sets. This
is typically handled by varying the ordering of the sets across users
so that the means are comparable. While omitting further detail,
we note the sets were balanced for the total number of tokens and
a Latin square design was used to balance practice effects.
Figure 4 shows that at intervals of 5, 10, and 15 minutes, users
consistently found significantly more action-items when assisted
by the classifier, but were most critically aided in the first five
minutes. Although the classifier consistently aids the users, we did not
gain an additional end-user impact by highlighting. As mentioned
above, this might be a result of the large room for improvement that
still exists for sentence detection, but anecdotal evidence suggests
this might also be a result of how the information is presented to the
user rather than the accuracy of sentence detection. For example,
highlighting the wrong sentence near an actual action-item hurts
the user"s trust, but if a vague indicator (e.g., an arrow) points to the
approximate area the user is not aware of the near-miss. Since the
user studies used a three sentence window, we believe this played a
role as well as sentence detection accuracy.
4.6 Discussion
In contrast to problems where n-grams have yielded little
difference, we believe their power here stems from the fact that many of
the meaningful n-grams for action-items consist of common words,
e.g., let me know. Therefore, the document-level unigram
approach cannot gain much leverage, even when modeling their joint
probability correctly, since these words will often co-occur in the
document but not necessarily in a phrase. Additionally, action-item
detection is distinct from many text classification tasks in that a
single sentence can change the class label of the document. As a
result, good classifiers cannot rely on aggregating evidence from a
large number of weak indicators across the entire document.
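As an illustration of why phrases such as "let me know" become usable evidence only with n-gram features, here is a minimal, hypothetical sketch (not the authors' pipeline) using scikit-learn's CountVectorizer:

```python
# Minimal sketch: extracting unigram-through-trigram features so that
# phrases such as "let me know" become single features.
from sklearn.feature_extraction.text import CountVectorizer

sentences = [
    "Let me know if Friday works for the review.",   # hypothetical examples
    "The meeting notes are attached for reference.",
]

vectorizer = CountVectorizer(ngram_range=(1, 3), lowercase=True)
X = vectorizer.fit_transform(sentences)

# "let me know" now appears as one column in the feature matrix.
print([f for f in vectorizer.get_feature_names_out() if f == "let me know"])
```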
Even though we discarded the header information, examining
the top-ranked features at the document-level reveals that many of
the features are names or parts of e-mail addresses that occurred in
the body and are highly associated with e-mails that tend to
contain many or no action-items. A few examples are terms such as
org, bob, and gov. We note that these features will be
sensitive to the particular distribution (senders/receivers) and thus the
document-level approach may produce classifiers that transfer less
readily to alternate contexts and users at different institutions. This
points out that part of the problem of going beyond bag-of-words
may be the methodology, and investigating such properties as
learning curves and how well a model transfers may highlight
differences in models which appear to have similar performance when
tested on the distributions they were trained on. We are currently
investigating whether the sentence-level classifiers do perform
better over different test corpora without retraining.
5. FUTURE WORK
While applying text classifiers at the document-level is fairly
well-understood, there exists the potential for significantly
increasing the performance of the sentence-level classifiers. Such methods
include alternate ways of combining the predictions over each
sentence, weightings other than tfidf, which may not be appropriate
since sentences are small, better sentence segmentation, and other
types of phrasal analysis. Additionally, named entity tagging, time
expressions, etc., seem likely candidates for features that can
further improve this task. We are currently pursuing some of these
avenues to see what additional gains these offer.
Finally, it would be interesting to investigate the best methods for
combining the document-level and sentence-level classifiers. Since
the simple bag-of-words representation at the document-level leads
to a learned model that behaves somewhat like a context-specific
prior dependent on the sender/receiver and general topic, a first
choice would be to treat it as such when combining probability
estimates with the sentence-level classifier. Such a model might
serve as a general example for other problems where bag-of-words
can establish a baseline model but richer approaches are needed to
achieve performance beyond that baseline.
6. SUMMARY AND CONCLUSIONS
The effectiveness of sentence-level detection argues that
labeling at the sentence-level provides significant value. Further
experiments are needed to see how this interacts with the amount of
training data available. Sentence detection that is then agglomerated into
document-level detection works better than the low recall on individual
sentences would lead one to expect. This, in turn,
indicates that improved sentence segmentation methods could yield
further improvements in classification.
In this work, we examined how action-items can be effectively
detected in e-mails. Our empirical analysis has demonstrated that
n-grams are of key importance to making the most of
document-level judgments. When finer-grained judgments are available, then
a standard bag-of-words approach using a small (sentence) window
size and automatic segmentation techniques can produce results
almost as good as the n-gram based approaches.
Acknowledgments
This material is based upon work supported by the Defense
Advanced Research Projects Agency (DARPA) under Contract No.
NBCHD030010. Any opinions, findings and conclusions or
recommendations expressed in this material are those of the author(s)
and do not necessarily reflect the views of the Defense Advanced
Research Projects Agency (DARPA), or the Department of
InteriorNational Business Center (DOI-NBC).
We would like to extend our sincerest thanks to Jill Lehman
whose efforts in data collection were essential in constructing the
corpus, and both Jill and Aaron Steinfeld for their direction of the
HCI experiments. We would also like to thank Django Wexler for
constructing and supporting the corpus labeling tools and Curtis
Huttenhower"s support of the text preprocessing package. Finally,
we gratefully acknowledge Scott Fahlman for his encouragement
and useful discussions on this topic.
7. REFERENCES
[1] J. Allan, J. Carbonell, G. Doddington, J. Yamron, and
Y. Yang. Topic detection and tracking pilot study: Final
report. In Proceedings of the DARPA Broadcast News
Transcription and Understanding Workshop, Washington,
D.C., 1998.
[2] C. Apte, F. Damerau, and S. M. Weiss. Automated learning
of decision rules for text categorization. ACM Transactions
on Information Systems, 12(3):233-251, July 1994.
[3] J. Carletta. Assessing agreement on classification tasks: The
kappa statistic. Computational Linguistics, 22(2):249-254,
1996.
[4] J. Carroll. High precision extraction of grammatical relations.
In Proceedings of the 19th International Conference on
Computational Linguistics (COLING), pages 134-140, 2002.
[5] W. W. Cohen, V. R. Carvalho, and T. M. Mitchell. Learning
to classify email into speech acts. In EMNLP-2004
(Conference on Empirical Methods in Natural Language
Processing), pages 309-316, 2004.
[6] S. Corston-Oliver, E. Ringger, M. Gamon, and R. Campbell.
Task-focused summarization of email. In Text Summarization
Branches Out: Proceedings of the ACL-04 Workshop, pages
43-50, 2004.
[7] A. Culotta, R. Bekkerman, and A. McCallum. Extracting
social networks and contact information from email and the
web. In CEAS-2004 (Conference on Email and Anti-Spam),
Mountain View, CA, July 2004.
[8] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory
of Pattern Recognition. Springer-Verlag, New York, NY,
1996.
[9] S. T. Dumais, J. Platt, D. Heckerman, and M. Sahami.
Inductive learning algorithms and representations for text
categorization. In CIKM "98, Proceedings of the 7th ACM
Conference on Information and Knowledge Management,
pages 148-155, 1998.
[10] Y. Freund and R. Schapire. Large margin classification using
the perceptron algorithm. Machine Learning, 37(3):277-296,
1999.
[11] T. Joachims. Making large-scale svm learning practical. In
B. Schölkopf, C. J. Burges, and A. J. Smola, editors,
Advances in Kernel Methods - Support Vector Learning,
pages 41-56. MIT Press, 1999.
[12] L. S. Larkey. A patent search and classification system. In
Proceedings of the Fourth ACM Conference on Digital
Libraries, pages 179 - 187, 1999.
[13] D. D. Lewis. An evaluation of phrasal and clustered
representations on a text categorization task. In SIGIR "92,
Proceedings of the 15th Annual International ACM
Conference on Research and Development in Information
Retrieval, pages 37-50, 1992.
[14] Y. Liu, J. Carbonell, and R. Jin. A pairwise ensemble
approach for accurate genre classification. In Proceedings of
the European Conference on Machine Learning (ECML),
2003.
[15] Y. Liu, R. Yan, R. Jin, and J. Carbonell. A comparison study
of kernels for multi-label text classification using category
association. In The Twenty-first International Conference on
Machine Learning (ICML), 2004.
[16] A. McCallum and K. Nigam. A comparison of event models
for naive bayes text classification. In Working Notes of AAAI
"98 (The 15th National Conference on Artificial
Intelligence), Workshop on Learning for Text Categorization,
pages 41-48, 1998. TR WS-98-05.
[17] F. Sebastiani. Machine learning in automated text
categorization. ACM Computing Surveys, 34(1):1-47, March
2002.
[18] C. J. van Rijsbergen. Information Retrieval. Butterworths,
London, 1979.
[19] Y. Yang. An evaluation of statistical approaches to text
categorization. Information Retrieval, 1(1/2):67-88, 1999.
[20] Y. Yang, J. Carbonell, R. Brown, T. Pierce, B. T. Archibald,
and X. Liu. Learning approaches to topic detection and
tracking. IEEE EXPERT, Special Issue on Applications of
Intelligent Information Retrieval, 1999.
[21] Y. Yang and X. Liu. A re-examination of text categorization
methods. In SIGIR "99, Proceedings of the 22nd Annual
International ACM Conference on Research and
Development in Information Retrieval, pages 42-49, 1999.
[22] Y. Yang, J. Zhang, J. Carbonell, and C. Jin.
Topic-conditioned novelty detection. In Proceedings of the
ACM SIGKDD International Conference on Knowledge
Discovery and Data Mining, July 2002. | document detection;svm;feature selection;sentence-level classifier;n-gram;information retrieval;action-item detection;text classification;e-mail;automated model selection;chi-squared feature selection;document ranking;embedded cross-validation;sentence detection;genre-classification;topic-driven text classification;speech act;speech act identification;text categorization;e-mail priority ranking;simple factual question |
train_H-98 | Using Asymmetric Distributions to Improve Text Classifier Probability Estimates | Text classifiers that give probability estimates are more readily applicable in a variety of scenarios. For example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model to issue a run-time decision which minimizes a userspecified cost function dynamically chosen at prediction time. However, the quality of the probability estimates is crucial. We review a variety of standard approaches to converting scores (and poor probability estimates) from text classifiers to high quality estimates and introduce new models motivated by the intuition that the empirical score distribution for the extremely irrelevant, hard to discriminate, and obviously relevant items are often significantly different. Finally, we analyze the experimental performance of these models over the outputs of two text classifiers. The analysis demonstrates that one of these models is theoretically attractive (introducing few new parameters while increasing flexibility), computationally efficient, and empirically preferable. | 1. INTRODUCTION
Text classifiers that give probability estimates are more flexible
in practice than those that give only a simple classification or even a
ranking. For example, rather than choosing one set decision
threshold, they can be used in a Bayesian risk model [8] to issue a
run-time decision which minimizes the expected cost of a user-specified
cost function dynamically chosen at prediction time. This can be
used to minimize a linear utility cost function for filtering tasks
where pre-specified costs of relevant/irrelevant are not available
during training but are specified at prediction time. Furthermore,
the costs can be changed without retraining the model.
Additionally, probability estimates are often used as the basis of deciding
which document"s label to request next during active learning [17,
23]. Effective active learning can be key in many information
retrieval tasks where obtaining labeled data can be costly - severely
reducing the amount of labeled data needed to reach the same
performance as when new labels are requested randomly [17]. Finally,
they are also amenable to making other types of cost-sensitive
decisions [26] and for combining decisions [3]. However, in all of
these tasks, the quality of the probability estimates is crucial.
Parametric models generally use assumptions that the data
conform to the model to trade-off flexibility with the ability to estimate
the model parameters accurately with little training data. Since
many text classification tasks often have very little training data, we
focus on parametric methods. However, most of the existing
parametric methods that have been applied to this task have an
assumption we find undesirable. While some of these methods allow the
distributions of the documents relevant and irrelevant to the topic
to have different variances, they typically enforce the unnecessary
constraint that the documents are symmetrically distributed around
their respective modes. We introduce several asymmetric
parametric models that allow us to relax this assumption without
significantly increasing the number of parameters and demonstrate how
we can efficiently fit the models. Additionally, these models can be
interpreted as assuming the scores produced by the text classifier
have three basic types of empirical behavior - one corresponding
to each of the extremely irrelevant, hard to discriminate, and
obviously relevant items.
We first review related work on improving probability estimates
and score modeling in information retrieval. Then, we discuss in
further detail the need for asymmetric models. After this, we
describe two specific asymmetric models and, using two standard text
classifiers, naïve Bayes and SVMs, demonstrate how they can be
efficiently used to recalibrate poor probability estimates or produce
high quality probability estimates from raw scores. We then review
experiments using previously proposed methods and the
asymmetric methods over several text classification corpora to demonstrate
the strengths and weaknesses of the various methods. Finally, we
summarize our contributions and discuss future directions.
2. RELATED WORK
Parametric models have been employed to obtain probability
estimates in several areas of information retrieval. Lewis & Gale [17]
use logistic regression to recalibrate naïve Bayes, though the quality
of the probability estimates are not directly evaluated; it is simply
performed as an intermediate step in active learning. Manmatha
et al. [20] introduced models appropriate for producing probability
estimates from relevance scores returned from search engines and
demonstrated how the resulting probability estimates could be
subsequently employed to combine the outputs of several search
engines. They use a different parametric distribution for the relevant
and irrelevant classes, but do not pursue two-sided asymmetric
distributions for a single class as described here. They also survey the
long history of modeling the relevance scores of search engines.
Our work is similar in flavor to these previous attempts to model
search engine scores, but we target text classifier outputs which we
have found demonstrate a different type of score distribution
behavior because of the role of training data.
Focus on improving probability estimates has been growing lately.
Zadrozny & Elkan [26] provide a corrective measure for decision
trees (termed curtailment) and a non-parametric method for
recalibrating naïve Bayes. In more recent work [27], they investigate
using a semi-parametric method that uses a monotonic
piecewise-constant fit to the data and apply the method to naïve Bayes and a
linear SVM. While they compared their methods to other
parametric methods based on symmetry, they fail to provide significance
test results. Our work provides asymmetric parametric methods
which complement the non-parametric and semi-parametric
methods they propose when data scarcity is an issue. In addition, their
methods reduce the resolution of the scores output by the classifier
(the number of distinct values output), but the methods here do not
have such a weakness since they are continuous functions.
There is a variety of other work that this paper extends. Platt
[22] uses a logistic regression framework that models noisy class
labels to produce probabilities from the raw output of an SVM.
His work showed that this post-processing method not only can
produce probability estimates of similar quality to SVMs directly
trained to produce probabilities (regularized likelihood kernel
methods), but it also tends to produce sparser kernels (which generalize
better). Finally, Bennett [1] obtained moderate gains by applying
Platt"s method to the recalibration of na¨ıve Bayes but found there
were more problematic areas than when it was applied to SVMs.
Recalibrating poorly calibrated classifiers is not a new problem.
Lindley et al. [19] first proposed the idea of recalibrating classifiers,
and DeGroot & Fienberg [5, 6] gave the now accepted standard
formalization for the problem of assessing calibration initiated by
others [4, 24].
3. PROBLEM DEFINITION & APPROACH
Our work differs from earlier approaches primarily in three points:
(1) We provide asymmetric parametric models suitable for use when
little training data is available; (2) We explicitly analyze the quality
of probability estimates these and competing methods produce and
provide significance tests for these results; (3) We target text
classifier outputs where a majority of the previous literature targeted the
output of search engines.
3.1 Problem Definition
The general problem we are concerned with is highlighted in
Figure 1. A text classifier produces a prediction about a document
and gives a score s(d) indicating the strength of its decision that
the document belongs to the positive class (relevant to the topic).
We assume throughout there are only two classes: the positive and
the negative (or irrelevant) class ("+" and "-" respectively).
There are two general types of parametric approaches. The first
of these tries to fit the posterior function directly, i.e. there is one
[Figure 1 diagram: a document d is passed to a classifier, which predicts a class c(d) = {+, −} and gives an unnormalized confidence s(d) that c(d) = +; in the highlighted box, the class-conditional densities p(s|+) and p(s|−) are combined with the priors P(+) and P(−) via Bayes' rule to produce P(+|s(d)).]
Figure 1: We are concerned with how to perform the box
highlighted in grey. The internals are for one type of approach.
function estimator that performs a direct mapping of the score s to
the probability P(+|s(d)). The second type of approach breaks the
problem down as shown in the grey box of Figure 1. An estimator
for each of the class-conditional densities (i.e. p(s|+) and p(s|−))
is produced, then Bayes" rule and the class priors are used to obtain
the estimate for P(+|s(d)).
3.2 Motivation for Asymmetric Distributions
Most of the previous parametric approaches to this problem
either directly or indirectly (when fitting only the posterior)
correspond to fitting Gaussians to the class-conditional densities; they
differ only in the criterion used to estimate the parameters. We can
visualize this as depicted in Figure 2. Since increasing s usually
indicates increased likelihood of belonging to the positive class, then
the rightmost distribution usually corresponds to p(s|+).
[Figure 2 plot: two Gaussian class-conditional densities, p(s | Class = +) and p(s | Class = −), over the unnormalized confidence score s, with region B between the modes and regions A and C outside the modes.]
Figure 2: Typical View of Discrimination based on Gaussians
However, using standard Gaussians fails to capitalize on a basic
characteristic commonly seen. Namely, if we have a raw output
score that can be used for discrimination, then the empirical
behavior between the modes (label B in Figure 2) is often very different
than that outside of the modes (labels A and C in Figure 2).
Intuitively, the area between the modes corresponds to the hard
examples, which are difficult for this classifier to distinguish, while the
areas outside the modes are the extreme examples that are usually
easily distinguished. This suggests that we may want to uncouple
the scale of the outside and inside segments of the distribution (as
depicted by the curve denoted as A-Gaussian in Figure 3). As a
result, an asymmetric distribution may be a more appropriate choice
for application to the raw output score of a classifier.
Ideally (i.e. perfect classification) there will exist scores θ− and
θ+ such that all examples with score greater than θ+ are relevant
and all examples with scores less than θ− are irrelevant.
Furthermore, no examples fall between θ− and θ+. The distance
| θ− − θ+ | corresponds to the margin in some classifiers, and
an attempt is often made to maximize this quantity. Because text
classifiers have training data to use to separate the classes, the
final behavior of the score distributions is primarily a factor of the
amount of training data and the consequent separation in the classes
achieved. This is in contrast to search engine retrieval where the
distribution of scores is more a factor of language distribution across
documents, the similarity function, and the length and type of query.
Perfect classification corresponds to using two very asymmetric
distributions, but in this case, the probabilities are actually one and
zero and many methods will work for typical purposes. Practically,
some examples will fall between θ− and θ+, and it is often
important to estimate the probabilities of these examples well (since they
correspond to the hard examples). Justifications can be given for
both why you may find more and less examples between θ− and θ+
than outside of them, but there are few empirical reasons to believe
that the distributions should be symmetric.
A natural first candidate for an asymmetric distribution is to
generalize a common symmetric distribution, e.g. the Laplace or the
Gaussian. An asymmetric Laplace distribution can be achieved by
placing two exponentials around the mode in the following manner:
p(x | θ, β, γ) =
    (βγ / (β + γ)) exp[−β(θ − x)]   if x ≤ θ
    (βγ / (β + γ)) exp[−γ(x − θ)]   if x > θ        (β, γ > 0)    (1)
where θ, β, and γ are the model parameters. θ is the mode of the
distribution, β is the inverse scale of the exponential to the left of
the mode, and γ is the inverse scale of the exponential to the right.
We will use the notation Λ(X | θ, β, γ) to refer to this distribution.
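As a concrete illustration of Equation (1), here is a minimal Python sketch of the asymmetric Laplace density; the function name is ours, not from the paper:

```python
import numpy as np

def asym_laplace_pdf(x, theta, beta, gamma):
    """Asymmetric Laplace density of Equation (1): two exponentials
    with inverse scales beta (left of the mode theta) and gamma (right)."""
    x = np.asarray(x, dtype=float)
    norm = beta * gamma / (beta + gamma)
    left = norm * np.exp(-beta * (theta - x))    # x <= theta
    right = norm * np.exp(-gamma * (x - theta))  # x >  theta
    return np.where(x <= theta, left, right)

# Example: a distribution that falls off faster on the right than on the left.
scores = np.linspace(-10, 10, 5)
print(asym_laplace_pdf(scores, theta=0.0, beta=0.5, gamma=2.0))
```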
[Figure 3 plot: a symmetric Gaussian fit and an asymmetric Gaussian (A-Gaussian) fit to the same empirical score distribution, shown as p(s|Class={+,-}) versus the unnormalized confidence score s.]
Figure 3: Gaussians vs. Asymmetric Gaussians. A
Shortcoming of Symmetric Distributions - The vertical lines show the
modes as estimated nonparametrically.
We can create an asymmetric Gaussian in the same manner:
p(x | θ, σl, σr) =
    (2 / (√(2π)(σl + σr))) exp[−(x − θ)² / (2σl²)]   if x ≤ θ
    (2 / (√(2π)(σl + σr))) exp[−(x − θ)² / (2σr²)]   if x > θ        (σl, σr > 0)    (2)
where θ, σl, and σr are the model parameters. To refer to this
asymmetric Gaussian, we use the notation Γ(X | θ, σl, σr). While
these distributions are composed of halves, the resulting function
is a single continuous distribution.
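A matching sketch of the asymmetric Gaussian of Equation (2), again with a function name of our own choosing:

```python
import numpy as np

def asym_gaussian_pdf(x, theta, sigma_l, sigma_r):
    """Asymmetric Gaussian density of Equation (2): two Gaussian halves
    sharing the mode theta but with different scales on each side."""
    x = np.asarray(x, dtype=float)
    norm = 2.0 / (np.sqrt(2.0 * np.pi) * (sigma_l + sigma_r))
    left = norm * np.exp(-((x - theta) ** 2) / (2.0 * sigma_l ** 2))   # x <= theta
    right = norm * np.exp(-((x - theta) ** 2) / (2.0 * sigma_r ** 2))  # x >  theta
    return np.where(x <= theta, left, right)

print(asym_gaussian_pdf([-2.0, 0.0, 2.0], theta=0.0, sigma_l=3.0, sigma_r=1.0))
```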
These distributions allow us to fit our data with much greater
flexibility at the cost of only fitting six parameters. We could
instead try mixture models for each component or other extensions,
but most other extensions require at least as many parameters (and
can often be more computationally expensive). In addition, the
motivation above should provide significant cause to believe the
underlying distributions actually behave in this way. Furthermore,
this family of distributions can still fit a symmetric distribution,
and finally, in the empirical evaluation, evidence is presented that
demonstrates this asymmetric behavior (see Figure 4).
To our knowledge, neither family of distributions has been
previously used in machine learning or information retrieval. Both are
termed generalizations of an Asymmetric Laplace in [14], but we
refer to them as described above to reflect the nature of how we
derived them for this task.
3.3 Estimating the Parameters of the
Asymmetric Distributions
This section develops the method for finding maximum
likelihood estimates (MLE) of the parameters for the above asymmetric
distributions. In order to find the MLEs, we have two choices: (1)
use numerical estimation to estimate all three parameters at once
(2) fix the value of θ, and estimate the other two (β and γ or σl
and σr) given our choice of θ, then consider alternate values of θ.
Because of the simplicity of analysis in the latter alternative, we
choose this method.
3.3.1 Asymmetric Laplace MLEs
For D = {x1, x2, . . . , xN } where the xi are i.i.d. and X ∼
Λ(X | θ, β, γ), the likelihood is ∏_{i=1..N} Λ(xi | θ, β, γ). Now, we fix
θ and compute the maximum likelihood for that choice of θ. Then,
we can simply consider all choices of θ and choose the one with
the maximum likelihood over all choices of θ.
The complete derivation is omitted because of space but is
available in [2]. We define the following values:
Nl = |{x ∈ D : x ≤ θ}|        Nr = |{x ∈ D : x > θ}|
Sl = Σ_{x∈D, x≤θ} x           Sr = Σ_{x∈D, x>θ} x
Dl = Nl·θ − Sl                Dr = Sr − Nr·θ.
Note that Dl and Dr are the sum of the absolute differences
between the x belonging to the left and right halves of the distribution
(respectively) and θ. Finally the MLEs for β and γ for a fixed θ are:
βMLE = N / (Dl + √(Dr·Dl))        γMLE = N / (Dr + √(Dr·Dl)).    (3)
These estimates are not wholly unexpected since we would obtain
Nl/Dl if we were to estimate β independently of γ. The elegance of
the formulae is that the estimates will tend to be symmetric only
insofar as the data dictate it (i.e. the closer Dl and Dr are to being
equal, the closer the resulting inverse scales).
By continuity arguments, when N = 0, we assign β = γ = ε₀, where
ε₀ is a small constant that acts to disperse the distribution to
a uniform. Similarly, when N ≠ 0 and Dl = 0, we assign β = ε∞,
where ε∞ is a very large constant that corresponds to an extremely
sharp distribution (i.e. almost all mass at θ for that half). Dr = 0
is handled similarly.
Assuming that θ falls in some range [φ, ψ] dependent upon only
the observed documents, then this alternative is also easily
computable. Given Nl, Sl, Nr, Sr, we can compute the posterior and
the MLEs in constant time. In addition, if the scores are sorted,
then we can perform the whole process quite efficiently. Starting
with the minimum θ = φ we would like to try, we loop through the
scores once and set Nl, Sl, Nr, Sr appropriately. Then we increase
θ and just step past the scores that have shifted from the right side
of the distribution to the left. Assuming the number of candidate
θs are O(n), this process is O(n), and the overall process is
dominated by sorting the scores, O(n log n) (or expected linear time).
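A compact sketch of the fitting procedure just described (sort once, sweep candidate θ values, keep the θ with the highest log-likelihood); the function name, the epsilon guard, and the choice of candidate θs are ours:

```python
import numpy as np

def fit_asym_laplace(scores, eps=1e-6):
    """Sweep candidate modes theta over the sorted scores and return the
    (theta, beta, gamma) maximizing the asymmetric Laplace likelihood."""
    x = np.sort(np.asarray(scores, dtype=float))
    n = len(x)
    best_params, best_loglik = None, -np.inf
    for theta in x:  # candidate thetas; the paper also tests points between scores
        left, right = x[x <= theta], x[x > theta]
        d_l = len(left) * theta - left.sum()
        d_r = right.sum() - len(right) * theta
        beta = n / max(d_l + np.sqrt(d_l * d_r), eps)   # Equation (3)
        gamma = n / max(d_r + np.sqrt(d_l * d_r), eps)
        loglik = n * np.log(beta * gamma / (beta + gamma)) - beta * d_l - gamma * d_r
        if loglik > best_loglik:
            best_params, best_loglik = (theta, beta, gamma), loglik
    return best_params

print(fit_asym_laplace(np.random.default_rng(0).normal(size=200)))
```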
3.3.2 Asymmetric Gaussian MLEs
For D = {x1, x2, . . . , xN } where the xi are i.i.d. and X ∼
Γ(X | θ, σl, σr), the likelihood is ∏_{i=1..N} Γ(xi | θ, σl, σr). The MLEs
can be worked out similar to the above.
We assume the same definitions as above (the complete
derivation omitted for space is available in [2]), and in addition, let:
Sl2 = Σ_{x∈D, x≤θ} x²        Sr2 = Σ_{x∈D, x>θ} x²
Dl2 = Sl2 − 2·Sl·θ + θ²·Nl        Dr2 = Sr2 − 2·Sr·θ + θ²·Nr,
so that Dl2 and Dr2 are the sums of squared differences between θ and the x in the left and right halves, respectively.
The analytical solution for the MLEs for a fixed θ is:
σl,MLE = √((Dl2 + Dl2^(2/3)·Dr2^(1/3)) / N)    (4)
σr,MLE = √((Dr2 + Dr2^(2/3)·Dl2^(1/3)) / N).    (5)
By continuity arguments, when N = 0, we assign σr = σl = ε∞,
and when N ≠ 0 and Dl2 = 0 (resp. Dr2 = 0), we
assign σl = ε₀ (resp. σr = ε₀). Again, the same computational
complexity analysis applies to estimating these parameters.
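The closed-form estimates of Equations (4)-(5) are short enough to state directly in code; this is a sketch under the same assumptions as the Laplace sketch above:

```python
import numpy as np

def asym_gaussian_mles(scores, theta):
    """Closed-form sigma_l, sigma_r for a fixed mode theta (Equations 4-5)."""
    x = np.asarray(scores, dtype=float)
    n = len(x)
    left, right = x[x <= theta], x[x > theta]
    d_l2 = np.sum((left - theta) ** 2)   # sum of squared deviations, left half
    d_r2 = np.sum((right - theta) ** 2)  # sum of squared deviations, right half
    sigma_l = np.sqrt((d_l2 + d_l2 ** (2 / 3) * d_r2 ** (1 / 3)) / n)
    sigma_r = np.sqrt((d_r2 + d_r2 ** (2 / 3) * d_l2 ** (1 / 3)) / n)
    return sigma_l, sigma_r

print(asym_gaussian_mles(np.random.default_rng(0).normal(size=200), theta=0.0))
```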
4. EXPERIMENTAL ANALYSIS
4.1 Methods
For each of the methods that use a class prior, we use a smoothed
add-one estimate, i.e. P(c) = (|c| + 1) / (N + 2), where N is the number of
documents. For methods that fit the class-conditional densities, p(s|+)
and p(s|−), the resulting densities are inverted using Bayes' rule as
described above. All of the methods below are fit using maximum
likelihood estimates.
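For the density-based methods, the inversion step is just Bayes' rule with the smoothed prior; a minimal sketch, assuming the two fitted densities are available as callables:

```python
def posterior_positive(s, pdf_pos, pdf_neg, n_pos, n_neg):
    """P(+|s) from fitted class-conditional densities and smoothed add-one priors."""
    n = n_pos + n_neg
    prior_pos = (n_pos + 1) / (n + 2)
    prior_neg = (n_neg + 1) / (n + 2)
    num = pdf_pos(s) * prior_pos
    den = num + pdf_neg(s) * prior_neg
    return num / den

# Example usage with the asymmetric Laplace fits from the sketches above:
# p_plus = posterior_positive(score,
#                             lambda s: asym_laplace_pdf(s, *fit_pos),
#                             lambda s: asym_laplace_pdf(s, *fit_neg),
#                             n_pos=120, n_neg=880)   # hypothetical class counts
```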
For recalibrating a classifier (i.e. correcting poor probability
estimates output by the classifier), it is usual to use the log-odds of
the classifier"s estimate as s(d). The log-odds are defined to be
log P (+|d)
P (−|d)
. The normal decision threshold (minimizing error) in
terms of log-odds is at zero (i.e. P(+|d) = P(−|d) = 0.5).
Since it scales the outputs to a space [−∞, ∞], the log-odds
make normal (and similar distributions) applicable [19]. Lewis &
Gale [17] give a more motivating viewpoint that fitting the log-odds
is a dampening effect for the inaccurate independence assumption
and a bias correction for inaccurate estimates of the priors. In
general, fitting the log-odds can serve to boost or dampen the signal
from the original classifier as the data dictate.
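In code, the recalibration score is simply the (clipped) log-odds of the classifier's own estimate; a small sketch:

```python
import numpy as np

def log_odds(p, eps=1e-12):
    """s(d) = log(P(+|d) / P(-|d)), clipped to avoid infinities at 0 or 1."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0 - eps)
    return np.log(p / (1.0 - p))

print(log_odds([0.01, 0.5, 0.999]))
```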
Gaussians
A Gaussian is fit to each of the class-conditional densities, using
the usual maximum likelihood estimates. This method is denoted
in the tables below as Gauss.
Asymmetric Gaussians
An asymmetric Gaussian is fit to each of the class-conditional
densities using the maximum likelihood estimation procedure
described above. Intervals between adjacent scores are divided by 10
in testing candidate θs, i.e. 8 points between actual scores
occurring in the data set are tested. This method is denoted as A. Gauss.
Laplace Distributions
Even though Laplace distributions are not typically applied to
this task, we also tried this method to isolate why benefit is gained
from the asymmetric form. The usual MLEs were used for
estimating the location and scale of a classical symmetric Laplace
distribution as described in [14]. We denote this method as Laplace below.
Asymmetric Laplace Distributions
An asymmetric Laplace is fit to each of the class-conditional
densities using the maximum likelihood estimation procedure
described above. As with the asymmetric Gaussian, intervals between
adjacent scores are divided by 10 in testing candidate θs. This
method is denoted as A. Laplace below.
Logistic Regression
This method is the first of two methods we evaluated that
directly fit the posterior, P(+|s(d)). Both methods restrict the set
of families to a two-parameter sigmoid family; they differ
primarily in their model of class labels. As opposed to the above
methods, one can argue that an additional boon of these methods is they
completely preserve the ranking given by the classifier. When this
is desired, these methods may be more appropriate. The previous
methods will mostly preserve the rankings, but they can deviate if
the data dictate it. Thus, they may model the data behavior better at
the cost of departing from a monotonicity constraint in the output
of the classifier.
Lewis & Gale [17] use logistic regression to recalibrate naïve
Bayes for subsequent use in active learning. The model they use is:
P(+|s(d)) = exp(a + b·s(d)) / (1 + exp(a + b·s(d))).    (6)
Instead of using the probabilities directly output by the classifier,
they use the log-likelihood ratio of the probabilities, log[P(d|+) / P(d|−)], as
the score s(d). Instead of using this below, we will use the
log-odds ratio. This does not affect the model as it simply shifts all of
the scores by a constant determined by the priors. We refer to this
method as LogReg below.
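A sketch of fitting the two-parameter sigmoid of Equation (6) to held-out scores; as an assumption on our part (the paper does not name an implementation), we use scikit-learn's LogisticRegression with regularization effectively disabled to approximate the maximum likelihood fit:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_logreg_calibrator(scores, labels):
    """Fit P(+|s) = sigmoid(a + b*s) on cross-validated scores (labels in {0,1})."""
    s = np.asarray(scores, dtype=float).reshape(-1, 1)
    model = LogisticRegression(C=1e6, solver="lbfgs")  # near-unregularized MLE
    model.fit(s, labels)
    return model

# calibrator = fit_logreg_calibrator(cv_scores, cv_labels)
# p_plus = calibrator.predict_proba(test_scores.reshape(-1, 1))[:, 1]
```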
Logistic Regression with Noisy Class Labels
Platt [22] proposes a framework that extends the logistic
regression model above to incorporate noisy class labels and uses it to
produce probability estimates from the raw output of an SVM.
This model differs from the LogReg model only in how the
parameters are estimated. The parameters are still fit using maximum
likelihood estimation, but a model of noisy class labels is used in
addition to allow for the possibility that the class was mislabeled.
The noise is modeled by assuming there is a finite probability of
mislabeling a positive example and of mislabeling a negative
example; these two noise estimates are determined by the number
of positive examples and the number of negative examples (using
Bayes" rule to infer the probability of incorrect label).
Even though the performance of this model would not be
expected to deviate much from LogReg, we evaluate it for
completeness. We refer to this method below as LR+Noise.
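Platt's noisy-label variant changes only the regression targets: instead of hard 0/1 labels, it uses smoothed targets derived from the class counts. A hedged sketch follows; the target formulas are Platt's, while the optimizer choice and function name are ours:

```python
import numpy as np
from scipy.optimize import minimize

def fit_platt(scores, labels):
    """Fit sigmoid(a + b*s) with Platt's smoothed targets for noisy class labels."""
    s = np.asarray(scores, dtype=float)
    y = np.asarray(labels)
    n_pos, n_neg = int((y == 1).sum()), int((y == 0).sum())
    # Smoothed targets instead of hard 0/1 labels.
    t = np.where(y == 1, (n_pos + 1.0) / (n_pos + 2.0), 1.0 / (n_neg + 2.0))

    def nll(params):
        a, b = params
        p = 1.0 / (1.0 + np.exp(-(a + b * s)))
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -np.sum(t * np.log(p) + (1 - t) * np.log(1 - p))

    return minimize(nll, x0=np.array([0.0, 1.0]), method="Nelder-Mead").x

# a, b = fit_platt(cv_scores, cv_labels)
```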
4.2 Data
We examined several corpora, including the MSN Web Directory,
Reuters, and TREC-AP.
MSN Web Directory
The MSN Web Directory is a large collection of heterogeneous
web pages (from a May 1999 web snapshot) that have been
hierarchically classified. We used the same train/test split of 50078/10024
documents as that reported in [9]. The MSN Web hierarchy is a
seven-level hierarchy; we used all 13 of the top-level categories.
The class proportions in the training set vary from 1.15% to 22.29%.
In the testing set, they range from 1.14% to 21.54%. The classes
are general subjects such as Health & Fitness and Travel & Vacation.
Human indexers assigned the documents to zero or more categories.
For the experiments below, we used only the top 1000 words with
highest mutual information for each class; approximately 195K
words appear in at least three training documents.
Reuters
The Reuters 21578 corpus [16] contains Reuters news articles
from 1987. For this data set, we used the ModApte standard train/
test split of 9603/3299 documents (8676 unused documents). The
classes are economic subjects (e.g., acq for acquisitions, earn
for earnings, etc.) that human taggers applied to the document;
a document may have multiple subjects. There are actually 135
classes in this domain (only 90 of which occur in the training and
testing set); however, we only examined the ten most frequent classes
since small numbers of testing examples make interpreting some
performance measures difficult due to high variance.1
Limiting to
the ten largest classes allows us to compare our results to
previously published results [10, 13, 21, 22]. The class proportions in
the training set vary from 1.88% to 29.96%. In the testing set, they
range from 1.7% to 32.95%.
For the experiments below we used only the top 300 words with
highest mutual information for each class; approximately 15K words
appear in at least three training documents.
TREC-AP
The TREC-AP corpus is a collection of AP news stories from
1988 to 1990. We used the same train/test split of 142791/66992
documents that was used in [18]. As described in [17] (see also
[15]), the categories are defined by keywords in a keyword field.
The title and body fields are used in the experiments below. There
are twenty categories in total. The class proportions in the training
set vary from 0.06% to 2.03%. In the testing set, they range from
0.03% to 4.32%.
For the experiments described below, we use only the top 1000
words with the highest mutual information for each class;
approximately 123K words appear in at least 3 training documents.
4.3 Classifiers
We selected two classifiers for evaluation: a linear SVM
classifier, which is a discriminative classifier that does not normally
output probability values, and a naïve Bayes classifier, whose
probability outputs are often poor [1, 7] but can be improved [1, 26, 27].
1
A separate comparison of only LogReg, LR+Noise, and
A. Laplace over all 90 categories of Reuters was also conducted.
After accounting for the variance, that evaluation also supported
the claims made here.
SVM
For linear SVMs, we use the Smox toolkit which is based on
Platt"s Sequential Minimal Optimization algorithm. The features
were represented as continuous values. We used the raw output
score of the SVM as s(d) since it has been shown to be appropriate
before [22]. The normal decision threshold (assuming we are
seeking to minimize errors) for this classifier is at zero.
Naïve Bayes
The naïve Bayes classifier model is a multinomial model [21].
We smoothed word and class probabilities using a Bayesian
estimate (with the word prior) and a Laplace m-estimate, respectively.
We use the log-odds estimated by the classifier as s(d). The normal
decision threshold is at zero.
4.4 Performance Measures
We use log-loss [12] and squared error [4, 6] to evaluate the
quality of the probability estimates. For a document d with class c(d) ∈
{+, −} (i.e. the data have known labels and not probabilities),
log-loss is defined as δ(c(d), +) log P(+|d) + δ(c(d), −) log P(−|d),
where δ(a, b) = 1 if a = b and 0 otherwise. The squared error is
δ(c(d), +)(1 − P(+|d))² + δ(c(d), −)(1 − P(−|d))². When the
class of a document is correctly predicted with a probability of one,
log-loss is zero and squared error is zero. When the class of a
document is incorrectly predicted with a probability of one, log-loss
is −∞ and squared error is one. Thus, both measures assess how
close an estimate comes to correctly predicting the item's class but
vary in how harshly incorrect predictions are penalized.
We report only the sum of these measures and omit the averages
for space. Their averages, average log-loss and mean squared
error (MSE), can be computed from these totals by dividing by the
number of binary decisions in a corpus.
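These totals are straightforward to compute; a small sketch:

```python
import numpy as np

def total_log_loss_and_sq_error(p_plus, labels, eps=1e-12):
    """Sum of log-loss and squared error over binary decisions (labels in {0,1})."""
    p = np.clip(np.asarray(p_plus, dtype=float), eps, 1.0 - eps)
    y = np.asarray(labels)
    p_true = np.where(y == 1, p, 1.0 - p)       # probability assigned to the true class
    log_loss = np.sum(np.log(p_true))           # 0 is best; more negative is worse
    sq_error = np.sum((1.0 - p_true) ** 2)      # 0 is best; larger is worse
    return log_loss, sq_error

print(total_log_loss_and_sq_error([0.9, 0.2, 0.7], [1, 0, 1]))
```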
In addition, we also compare the error of the classifiers at their
default thresholds and with the probabilities. This evaluates how
the probability estimates have improved with respect to the
decision threshold P(+|d) = 0.5. Thus, error only indicates how the
methods would perform if a false positive was penalized the same
as a false negative and not the general quality of the probability
estimates. It is presented simply to provide the reader with a more
complete understanding of the empirical tendencies of the methods.
We use a standard paired micro sign test [25] to determine
statistical significance in the difference of all measures. Only pairs
that the methods disagree on are used in the sign test. This test
compares pairs of scores from two systems with the null
hypothesis that the number of items they disagree on are binomially
distributed. We use a significance level of p = 0.01.
4.5 Experimental Methodology
As the categories under consideration in the experiments are not
mutually exclusive, the classification was done by training n binary
classifiers, where n is the number of classes.
In order to generate the scores that each method uses to fit its
probability estimates, we use five-fold cross-validation on the
training data. We note that even though it is computationally efficient
to perform leave-one-out cross-validation for the naïve Bayes
classifier, this may not be desirable since the distribution of scores can
be skewed as a result. Of course, as with any application of n-fold
cross-validation, it is also possible to bias the results by holding n
too low and underestimating the performance of the final classifier.
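A sketch of generating the calibration scores via five-fold cross-validation on the training set; scikit-learn's cross_val_predict and LinearSVC are used here only as one possible stand-in for the paper's toolchain:

```python
from sklearn.model_selection import cross_val_predict
from sklearn.svm import LinearSVC

def calibration_scores(X_train, y_train):
    """Out-of-fold decision scores used to fit the probability model."""
    clf = LinearSVC()  # stand-in for the paper's linear SVM (Smox)
    return cross_val_predict(clf, X_train, y_train, cv=5,
                             method="decision_function")

# scores = calibration_scores(X_train, y_train)
# Then fit, e.g., fit_asym_laplace(scores[y_train == 1]) and the negative-class fit.
```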
4.6 Results & Discussion
The results for recalibrating naïve Bayes are given in Table 1a.
Table 1b gives results for producing probabilistic outputs for SVMs.
Log-loss    Error²    Errors
MSN Web
Gauss -60656.41 10503.30 10754
A.Gauss -57262.26 8727.47 9675
Laplace -45363.84 8617.59 10927
A.Laplace -36765.88 6407.84† 8350
LogReg -36470.99 6525.47 8540
LR+Noise -36468.18 6534.61 8563
naïve Bayes -1098900.83 17117.50 17834
Reuters
Gauss -5523.14 1124.17 1654
A.Gauss -4929.12 652.67 888
Laplace -5677.68 1157.33 1416
A.Laplace -3106.95‡ 554.37‡ 726
LogReg -3375.63 603.20 786
LR+Noise -3374.15 604.80 785
naïve Bayes -52184.52 1969.41 2121
TREC-AP
Gauss -57872.57 8431.89 9705
A.Gauss -66009.43 7826.99 8865
Laplace -61548.42 9571.29 11442
A.Laplace -48711.55 7251.87‡ 8642
LogReg -48250.81 7540.60 8797
LR+Noise -48251.51 7544.84 8801
naïve Bayes -1903487.10 41770.21 43661
Log-loss    Error²    Errors
MSN Web
Gauss -54463.32 9090.57 10555
A. Gauss -44363.70 6907.79 8375
Laplace -42429.25 7669.75 10201
A. Laplace -31133.83 5003.32 6170
LogReg -30209.36 5158.74 6480
LR+Noise -30294.01 5209.80 6551
Linear SVM N/A N/A 6602
Reuters
Gauss -3955.33 589.25 735
A. Gauss -4580.46 428.21 532
Laplace -3569.36 640.19 770
A. Laplace -2599.28 412.75 505
LogReg -2575.85 407.48 509
LR+Noise -2567.68 408.82 516
Linear SVM N/A N/A 516
TREC-AP
Gauss -54620.94 6525.71 7321
A. Gauss -77729.49 6062.64 6639
Laplace -54543.19 7508.37 9033
A. Laplace -48414.39 5761.25‡ 6572‡
LogReg -48285.56 5914.04 6791
LR+Noise -48214.96 5919.25 6794
Linear SVM N/A N/A 6718
Table 1: (a) Results for naïve Bayes (left) and (b) SVM (right). The best entry for a corpus is in bold. Entries that are statistically
significantly better than all other entries are underlined. A † denotes the method is significantly better than all other methods except
for naïve Bayes. A ‡ denotes the entry is significantly better than all other methods except for A. Gauss (and naïve Bayes for the table
on the left). The reason for this distinction in significance tests is described in the text.
We start with general observations that result from examining
the performance of these methods over the various corpora. The
first is that A. Laplace, LR+Noise, and LogReg, quite clearly
outperform the other methods. There is usually little difference
between the performance of LR+Noise and LogReg (both as shown
here and on a decision by decision basis), but this is unsurprising
since LR+Noise just adds noisy class labels to the LogReg model.
With respect to the three different measures, LR+Noise and
LogReg tend to perform slightly better (but never significantly) than
A. Laplace at some tasks with respect to log-loss and squared error.
However, A. Laplace always produces the least number of errors
for all of the tasks, though at times the degree of improvement is
not significant.
In order to give the reader a better sense of the behavior of these
methods, Figures 4-5 show the fits produced by the most
competitive of these methods versus the actual data behavior (as estimated
nonparametrically by binning) for class Earn in Reuters. Figure 4
shows the class-conditional densities, and thus only A. Laplace is
shown since LogReg fits the posterior directly. Figure 5 shows the
estimations of the log-odds, i.e. log[P(Earn|s(d)) / P(¬Earn|s(d))]. Viewing the
log-odds (rather than the posterior) usually enables errors in
estimation to be detected by the eye more easily.
We can break things down as the sign test does and just look at
wins and losses on the items that the methods disagree on. Looked
at in this way, only two methods (naïve Bayes and A. Gauss) ever
have more pairwise wins than A. Laplace; those two sometimes
have more pairwise wins on log-loss and squared error even though
their totals never win (i.e. they are dragged down by heavy penalties).
In addition, this comparison of pairwise wins means that for
those cases where LogReg and LR+Noise have better scores than
A. Laplace, it would not be deemed significant by the sign test at
any level since they do not have more wins. For example, of the
130K binary decisions over the MSN Web dataset, A. Laplace had
approximately 101K pairwise wins versus LogReg and LR+Noise.
No method ever has more pairwise wins than A. Laplace for the
error comparison, nor does any method ever achieve a better total.
The basic observation made about naïve Bayes in previous work
is that it tends to produce estimates very close to zero and one [1,
17]. This means if it tends to be right enough of the time, it will
produce results that do not appear significant in a sign test that
ignores size of difference (as the one here). The totals of the squared
error and log-loss bear out the previous observation that when it's
wrong, it's really wrong.
There are several interesting points about the performance of the
asymmetric distributions as well. First, A. Gauss performs poorly
because (similar to na¨ıve Bayes) there are some examples where
it is penalized a large amount. This behavior results from a
general tendency to perform like the picture shown in Figure 3 (note
the crossover at the tails). While the asymmetric Gaussian tends
to place the mode much more accurately than a symmetric
Gaussian, its asymmetric flexibility combined with its distance function
causes it to distribute too much mass to the outside tails while
failing to fit around the mode accurately enough to compensate. Figure
3 is actually a result of fitting the two distributions to real data. As
a result, at the tails there can be a large discrepancy between the
likelihood of belonging to each class. Thus when there are no
outliers A. Gauss can perform quite competitively, but when there is an
[Figure 4 plots: two panels showing p(s(d)|Class={+,-}) for the Train and Test score distributions together with the fitted A.Laplace curves; the left panel uses s(d) = naive Bayes log-odds and the right panel uses s(d) = linear SVM raw score.]
Figure 4: The empirical distribution of classifier scores for documents in the training and the test set for class Earn in Reuters.
Also shown is the fit of the asymmetric Laplace distribution to the training score distribution. The positive class (i.e. Earn) is the
distribution on the right in each graph, and the negative class (i.e. ¬Earn) is that on the left in each graph.
[Figure 5 plots: two panels showing the fitted log-odds, LogOdds = log P(+|s(d)) − log P(−|s(d)), for the Train and Test data along with the A.Laplace and LogReg fits; the left panel uses s(d) = naive Bayes log-odds and the right panel uses s(d) = linear SVM raw score.]
Figure 5: The fit produced by various methods compared to the empirical log-odds of the training data for class Earn in Reuters.
outlier A. Gauss is penalized quite heavily. There are enough such
cases overall that it seems clearly inferior to the top three methods.
However, the asymmetric Laplace places much more emphasis
around the mode (Figure 4) because of the different distance
function (think of the sharp peak of an exponential). As a result most
of the mass stays centered around the mode, while the asymmetric
parameters still allow more flexibility than the standard Laplace.
Since the standard Laplace also corresponds to a piecewise fit in the
log-odds space, this highlights that part of the power of the
asymmetric methods is their sensitivity in placing the knots at the actual
modes - rather than the symmetric assumption that the means
correspond to the modes. Additionally, the asymmetric methods have
greater flexibility in fitting the slopes of the line segments as well.
Even in cases where the test distribution differs from the training
distribution (Figure 4), A. Laplace still yields a solution that gives
a better fit than LogReg (Figure 5), the next best competitor.
Finally, we can make a few observations about the usefulness
of the various performance metrics. First, log-loss only awards a
finite amount of credit as the degree to which something is
correct improves (i.e. there are diminishing returns as it approaches
zero), but it can infinitely penalize for a wrong estimate. Thus, it
is possible for one outlier to skew the totals, but misclassifying this
example may not matter for any but a handful of actual utility
functions used in practice. Secondly, squared error has a weakness in
the other direction. That is, its penalty and reward are bounded in
[0, 1], but if the number of errors is small enough, it is possible for
a method to appear better when it is producing what we generally
consider unhelpful probability estimates. For example, consider a
method that only estimates probabilities as zero or one (which naïve
Bayes tends to but doesn't quite reach if you use smoothing). This
method could win according to squared error, but with just one
error it would never perform better on log-loss than any method that
assigns some non-zero probability to each outcome. For these
reasons, we recommend that neither of these are used in isolation as
they each give slightly different insights to the quality of the
estimates produced. These observations are straightforward from the
definitions but are underscored by the evaluation.
5. FUTURE WORK
A promising extension to the work presented here is a hybrid
distribution of a Gaussian (on the outside slopes) and exponentials
(on the inner slopes). From the empirical evidence presented in
[22], the expectation is that such a distribution might allow more
emphasis of the probability mass around the modes (as with the
exponential) while still providing more accurate estimates toward
the tails.
Just as logistic regression allows the log-odds of the posterior
distribution to be fit directly with a line, we could directly fit the
log-odds of the posterior with a three-piece line (a spline) instead of
indirectly doing the same thing by fitting the asymmetric Laplace.
This approach may provide more power since it retains the
asymmetry assumption but not the assumption that the class-conditional
densities are from an asymmetric Laplace.
Finally, extending these methods to the outputs of other
discriminative classifiers is an open area. We are currently evaluating the
appropriateness of these methods for the output of a voted
perceptron [11]. By analogy to the log-odds, the operative score that
appears promising is log[(weight of perceptrons voting +) / (weight of perceptrons voting −)].
6. SUMMARY AND CONCLUSIONS
We have reviewed a wide variety of parametric methods for
producing probability estimates from the raw scores of a discriminative
classifier and for recalibrating an uncalibrated probabilistic
classifier. In addition, we have introduced two new families that attempt
to capitalize on the asymmetric behavior that tends to arise from
learning a discrimination function. We have given an efficient way
to estimate the parameters of these distributions.
While these distributions attempt to strike a balance between the
generalization power of parametric distributions and the flexibility
that the added asymmetric parameters give, the asymmetric
Gaussian appears to have too great of an emphasis away from the modes.
In striking contrast, the asymmetric Laplace distribution appears to
be preferable over several large text domains and a variety of
performance measures to the primary competing parametric methods,
though comparable performance is sometimes achieved with one
of two varieties of logistic regression. Given the ease of
estimating the parameters of this distribution, it is a good first choice for
producing quality probability estimates.
Acknowledgments
We are grateful to Francisco Pereira for the sign test code, Anton
Likhodedov for logistic regression code, and John Platt for the code
support for the linear SVM classifier toolkit Smox. Also, we
sincerely thank Chris Meek and John Platt for the very useful advice
provided in the early stages of this work. Thanks also to Jaime
Carbonell and John Lafferty for their useful feedback on the final
versions of this paper.
7. REFERENCES
[1] P. N. Bennett. Assessing the calibration of naive Bayes'
posterior estimates. Technical Report CMU-CS-00-155,
Carnegie Mellon, School of Computer Science, 2000.
[2] P. N. Bennett. Using asymmetric distributions to improve
classifier probabilities: A comparison of new and standard
parametric methods. Technical Report CMU-CS-02-126,
Carnegie Mellon, School of Computer Science, 2002.
[3] H. Bourlard and N. Morgan. A continuous speech
recognition system embedding mlp into hmm. In NIPS "89,
1989.
[4] G. Brier. Verification of forecasts expressed in terms of
probability. Monthly Weather Review, 78:1-3, 1950.
[5] M. H. DeGroot and S. E. Fienberg. The comparison and
evaluation of forecasters. Statistician, 32:12-22, 1983.
[6] M. H. DeGroot and S. E. Fienberg. Comparing probability
forecasters: Basic binary concepts and multivariate
extensions. In P. Goel and A. Zellner, editors, Bayesian
Inference and Decision Techniques. Elsevier Science
Publishers B.V., 1986.
[7] P. Domingos and M. Pazzani. Beyond independence:
Conditions for the optimality of the simple bayesian
classifier. In ICML "96, 1996.
[8] R. Duda, P. Hart, and D. Stork. Pattern Classification. John
Wiley & Sons, Inc., 2001.
[9] S. T. Dumais and H. Chen. Hierarchical classification of web
content. In SIGIR "00, 2000.
[10] S. T. Dumais, J. Platt, D. Heckerman, and M. Sahami.
Inductive learning algorithms and representations for text
categorization. In CIKM "98, 1998.
[11] Y. Freund and R. Schapire. Large margin classification using
the perceptron algorithm. Machine Learning, 37(3):277-296,
1999.
[12] I. Good. Rational decisions. Journal of the Royal Statistical
Society, Series B, 1952.
[13] T. Joachims. Text categorization with support vector
machines: Learning with many relevant features. In ECML
"98, 1998.
[14] S. Kotz, T. J. Kozubowski, and K. Podgorski. The Laplace
Distribution and Generalizations: A Revisit with
Applications to Communications, Economics, Engineering,
and Finance. Birkhäuser, 2001.
[15] D. D. Lewis. A sequential algorithm for training text
classifiers: Corrigendum and additional data. SIGIR Forum,
29(2):13-19, Fall 1995.
[16] D. D. Lewis. Reuters-21578, distribution 1.0.
http://www.daviddlewis.com/resources/
testcollections/reuters21578, January 1997.
[17] D. D. Lewis and W. A. Gale. A sequential algorithm for
training text classifiers. In SIGIR "94, 1994.
[18] D. D. Lewis, R. E. Schapire, J. P. Callan, and R. Papka.
Training algorithms for linear text classifiers. In SIGIR "96,
1996.
[19] D. Lindley, A. Tversky, and R. Brown. On the reconciliation
of probability assessments. Journal of the Royal Statistical
Society, 1979.
[20] R. Manmatha, T. Rath, and F. Feng. Modeling score
distributions for combining the outputs of search engines. In
SIGIR "01, 2001.
[21] A. McCallum and K. Nigam. A comparison of event models
for naive bayes text classification. In AAAI "98, Workshop on
Learning for Text Categorization, 1998.
[22] J. C. Platt. Probabilistic outputs for support vector machines
and comparisons to regularized likelihood methods. In A. J.
Smola, P. Bartlett, B. Schölkopf, and D. Schuurmans, editors,
Advances in Large Margin Classifiers. MIT Press, 1999.
[23] M. Saar-Tsechansky and F. Provost. Active learning for class
probability estimation and ranking. In IJCAI "01, 2001.
[24] R. L. Winkler. Scoring rules and the evaluation of probability
assessors. Journal of the American Statistical Association,
1969.
[25] Y. Yang and X. Liu. A re-examination of text categorization
methods. In SIGIR "99, 1999.
[26] B. Zadrozny and C. Elkan. Obtaining calibrated probability
estimates from decision trees and naive bayesian classifiers.
In ICML "01, 2001.
[27] B. Zadrozny and C. Elkan. Reducing multiclass to binary by
coupling probability estimates. In KDD "02, 2002. | classifier combination;parametric model;maximum likelihood estimate;text classification;asymmetric laplace distribution;asymmetric gaussian;cost-sensitive learn;decision threshold;logistic regression framework;posterior function;active learn;class-conditional density;symmetric distribution;text classifier;information retrieval;bayesian risk model;probability estimate;empirical score distribution;search engine retrieval |
train_I-37 | A Framework for Agent-Based Distributed Machine Learning and Data Mining | This paper proposes a framework for agent-based distributed machine learning and data mining based on (i) the exchange of meta-level descriptions of individual learning processes among agents and (ii) online reasoning about learning success and learning progress by learning agents. We present an abstract architecture that enables agents to exchange models of their local learning processes and introduces a number of different methods for integrating these processes. This allows us to apply existing agent interaction mechanisms to distributed machine learning tasks, thus leveraging the powerful coordination methods available in agent-based computing, and enables agents to engage in meta-reasoning about their own learning decisions. We apply this architecture to a real-world distributed clustering application to illustrate how the conceptual framework can be used in practical systems in which different learners may be using different datasets, hypotheses and learning algorithms. We report on experimental results obtained using this system, review related work on the subject, and discuss potential future extensions to the framework. | 1. INTRODUCTION
In the areas of machine learning and data mining (cf. [14,
17] for overviews), it has long been recognised that
parallelisation and distribution can be used to improve learning
performance. Various techniques have been suggested in
this respect, ranging from the low-level integration of
independently derived learning hypotheses (e.g. combining
different classifiers to make optimal classification decisions [4,
7], model averaging of Bayesian classifiers [8], or
consensusbased methods for integrating different clusterings [11]), to
the high-level combination of learning results obtained by
heterogeneous learning agents using meta-learning (e.g. [3,
10, 21]).
All of these approaches assume homogeneity of agent
design (all agents apply the same learning algorithm) and/or
agent objectives (all agents are trying to cooperatively solve
a single, global learning problem). Therefore, the techniques
they suggest are not applicable in societies of autonomous
learners interacting in open systems. In such systems,
learners (agents) may not be able to integrate their datasets or
learning results (because of different data formats and
representations, learning algorithms, or legal restrictions that
prohibit such integration [11]) and cannot always be
guaranteed to interact in a strictly cooperative fashion (discovered
knowledge and collected data might be economic assets that
should only be shared when this is deemed profitable;
malicious agents might attempt to adversely influence others"
learning results, etc.).
Examples for applications of this kind abound. Many
distributed learning domains involve the use of sensitive data
and prohibit the exchange of this data (e.g. exchange of
patient data in distributed brain tumour diagnosis [2]) -
however, they may permit the exchange of local learning
hypotheses among different learners. In other areas, training
data might be commercially valuable, so that agents would
only make it available to others if those agents could
provide something in return (e.g. in remote ship surveillance
and tracking, where the different agencies involved are
commercial service providers [1]). Furthermore, agents might
have a vested interest in negatively affecting other agents"
learning performance. An example for this is that of
fraudulent agents on eBay which may try to prevent
reputation-learning agents from constructing useful models for
detecting fraud.
Viewing learners as autonomous, self-directed agents is
the only appropriate view one can take in modelling these
distributed learning environments: the agent metaphor
becomes a necessity, as opposed to a mere preference motivated by
scalability, dynamic data selection or interactivity [13], which can
in principle also be achieved through (non-agent) distribution and
parallelisation.
Despite the autonomy and self-directedness of learning
agents, many of these systems exhibit a sufficient overlap
in terms of individual learning goals so that beneficial
cooperation might be possible if a model for flexible
interaction between autonomous learners was available that allowed
agents to
1. exchange information about different aspects of their
own learning mechanism at different levels of detail
without being forced to reveal private information that
should not be disclosed,
2. decide to what extent they want to share information
about their own learning processes and utilise
information provided by other learners, and
3. reason about how this information can best be used to
improve their own learning performance.
Our model is based on the simple idea that autonomous
learners should maintain meta-descriptions of their own
learning processes (see also [3]) in order to be able to
exchange information and reason about them in a rational way
(i.e. with the overall objective of improving their own
learning results). Our hypothesis is a very simple one:
If we can devise a sufficiently general, abstract
view of describing learning processes, we will be
able to utilise the whole range of methods for (i)
rational reasoning and (ii) communication and
coordination offered by agent technology so as to
build effective autonomous learning agents.
To test this hypothesis, we introduce such an abstract
architecture (section 2) and implement a simple, concrete
instance of it in a real-world domain (section 3). We report
on empirical results obtained with this implemented system
that demonstrate the viability of our approach (section 4).
Finally, we review related work (section 5) and conclude
with a summary, discussion of our approach and outlook to
future work on the subject (section 6).
2. ABSTRACT ARCHITECTURE
Our framework is based on providing formal (meta-level)
descriptions of learning processes, i.e. representations of all
relevant components of the learning machinery used by a
learning agent, together with information about the state of
the learning process.
To ensure that this framework is sufficiently general, we
consider the following general description of a learning
problem:
Given data D ⊆ D taken from an instance space
D, a hypothesis space H and an (unknown)
target function c ∈ H¹, derive a function h ∈ H that
approximates c as well as possible according to
some performance measure g : H → Q where Q
is a set of possible levels of learning performance.
¹ By requiring this we are ensuring that the learning problem
can be solved in principle using the given hypothesis space.
This very broad definition includes a number of components
of a learning problem for which more concrete specifications
can be provided if we want to be more precise. For the cases
of classification and clustering, for example, we can further
specify the above as follows: Learning data can be described
in both cases as D = [A1] × · · · × [An], where [Ai] is the domain of
the ith attribute and the set of attributes is A = {1, . . . , n}.
For the hypothesis space we obtain
H ⊆ {h|h : D → {0, 1}}
in the case of classification (i.e. a subset of the set of all
possible classifiers, the nature of which depends on the
expressivity of the learning algorithm used) and
H ⊆ {h|h : D → N, h is total with range {1, . . . , k}}
in the case of clustering (i.e. a subset of all sets of possible
cluster assignments that map data points to a finite number
of clusters numbered 1 to k). For classification, g might be
defined in terms of the numbers of false negatives and false
positives with respect to some validation set V ⊆ D, and
clustering might use various measures of cluster validity to
evaluate the quality of a current hypothesis, so that Q = R
in both cases (but other sets of learning quality levels can
be imagined).
Next, we introduce a notion of learning step, which
imposes a uniform basic structure on all learning processes that
are supposed to exchange information using our framework.
For this, we assume that each learner is presented with a
finite set of data D = ⟨d1, . . . , dk⟩ in each step (this is an
ordered set, to express that the order in which the samples are
used for training matters) and employs a training/update
function f : H × D∗ → H which updates h given a series of
samples d1, . . . , dk. In other words, one learning step always
consists of applying the update function to all samples in D
exactly once. We define a learning step as a tuple
l = ⟨D, H, f, g, h⟩
where we require that H ⊆ H (i.e. H is a subset of the full hypothesis space) and h ∈ H.
The intuition behind this definition is that each learning
step completely describes one learning iteration as shown
in Figure 1: in step t, the learner updates the current
hypothesis ht−1 with data Dt, and evaluates the resulting new
hypothesis ht according to the current performance measure
gt. Such a learning step is equivalent to the following steps
of computation:
1. train the algorithm on all samples in D (once), i.e.
calculate ft(ht−1, Dt) = ht,
2. calculate the quality gt of the resulting hypothesis
gt(ht).
We denote the set of all possible learning steps by L. For
ease of notation, we denote the components of any l ∈ L by
D(l), H(l), f(l) and g(l), respectively. The reason why such
learning step specifications use a subset H of H instead of
H itself is that learners often have explicit knowledge about
which hypotheses are effectively ruled out by f given h in
the future (if this is not the case, we can still set H = H).
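To make the tuple notation concrete, the following is a minimal Python sketch (our own illustration, not code from the framework; all names are assumptions) of a learning step l = ⟨D, H, f, g, h⟩ and of executing one such step, i.e. computing ht = ft(ht−1, Dt) and qt = gt(ht):

from dataclasses import dataclass
from typing import Any, Callable, Sequence

Hypothesis = Any
Sample = Any

@dataclass
class LearningStep:
    D: Sequence       # ordered training samples used in this step
    H: set            # hypotheses still reachable in this step (H a subset of the full space)
    f: Callable       # training/update function f : H x D* -> H
    g: Callable       # performance measure g : H -> Q
    h: Hypothesis     # hypothesis produced by this step

def execute_step(prev_h, step):
    """Train on all samples in step.D exactly once, then evaluate the result."""
    new_h = step.f(prev_h, step.D)   # h_t = f_t(h_{t-1}, D_t)
    quality = step.g(new_h)          # q_t = g_t(h_t)
    return new_h, quality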
A learning process is a finite, non-empty sequence
l = l1 → l2 → . . . → ln
of learning steps such that
∀1 ≤ i < n .h(li+1) = f(li)(h(li), D(li))
[Figure 1: A generic model of a learning step. The training function ft maps the previous hypothesis ht−1 and the training set Dt to the new hypothesis ht, which the performance measure gt maps to the solution quality qt.]
i.e. the only requirement the transition relation →⊆ L × L
makes is that the new hypothesis is the result of training the
old hypothesis on all available sample data that belongs to
the current step. We denote the set of all possible learning
processes by L (ignoring, for ease of notation, the fact that
this set depends on H, D and the spaces of possible
training and evaluation functions f and g). The performance
trace associated with a learning process l is the sequence
q1, . . . , qn ∈ Qⁿ, where qi = g(li)(h(li)), i.e. the sequence
of quality values calculated by the performance measures of
the individual learning steps on the respective hypotheses.
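Building on the LearningStep sketch above, the chaining condition and the performance trace of a learning process can be expressed as follows (again an illustrative sketch, not prescribed by the framework):

def is_valid_process(steps):
    """Check the chaining condition h(l_{i+1}) = f(l_i)(h(l_i), D(l_i))."""
    return all(steps[i + 1].h == steps[i].f(steps[i].h, steps[i].D)
               for i in range(len(steps) - 1))

def performance_trace(steps):
    """The sequence q_1, ..., q_n with q_i = g(l_i)(h(l_i))."""
    return [step.g(step.h) for step in steps]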
Such specifications allow agents to provide a
selfdescription of their learning process. However, in
communication among learning agents, it is often useful to
provide only partial information about one"s internal learning
process rather than its full details, e.g. when advertising
this information in order to enter information exchange
negotiations with others. For this purpose, we will assume
that learners describe their internal state in terms of sets of
learning processes (in the sense of disjunctive choice) which
we call learning process descriptions (LPDs) rather than by
giving precise descriptions about a single, concrete learning
process.
This allows us to describe properties of a learning
process without specifying its details exhaustively. As an
example, the set {l ∈ L | ∀l' = l[i]. |D(l')| ≤ 100} describes all
processes in which every step has a training set of at most 100
samples (where all the other elements are arbitrary). Likewise,
{l ∈ L | ∀l' = l[i]. D(l') = {d}} is equivalent to just providing
information about a single sample {d} and no other details
about the process (this can be useful to model, for
example, data received from the environment). Therefore, we use
℘(L), that is the set of all LPDs, as the basis for
designing content languages for communication in the protocols
we specify below.
In practice, the actual content language chosen will of
course be more restricted and allow only for a special type
of subsets of L to be specified in a compact way, and its
choice will be crucial for the interactions that can occur
between learning agents. For our examples below, we simply
assume explicit enumeration of all possible elements of the
respective sets and function spaces (D, H, etc.) extended by
the use of wildcard symbols ∗ (so that our second example
above would become ({d}, ∗, ∗, ∗, ∗)).
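As a rough illustration of such a wildcard-based content language (a simplification we sketch here, not a prescribed format), a learning step tuple can be matched against a pattern in which ∗ acts as a "don't care" entry:

WILDCARD = "*"

def matches(step_tuple, pattern):
    """step_tuple and pattern are 5-tuples (D, H, f, g, h); '*' matches anything."""
    return all(p == WILDCARD or p == s for s, p in zip(step_tuple, pattern))

# The LPD ({d}, *, *, *, *) from the text then only constrains the data component:
# matches((frozenset({"d"}), H, f, g, h), (frozenset({"d"}), "*", "*", "*", "*"))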
2.1 Learning agents
In our framework, a learning agent is essentially a
meta-reasoning function that operates on information about
learning processes and is situated in an environment co-inhabited
by other learning agents. This means that it is not only
capable of meta-level control on how to learn, but in doing
so it can take information into account that is provided by
other agents or the environment. Although purely
cooperative or hybrid cases are possible, for the purposes of this
paper we will assume that agents are purely self-interested,
and that while there may be a potential for cooperation
considering how agents can mutually improve each others"
learning performance, there is no global mechanism that can
enforce such cooperative behaviour.2
Formally speaking, an agent"s learning function is a
function which, given a set of histories of previous learning
processes (of oneself and potentially of learning processes about
which other agents have provided information) and outputs
a learning step which is its next learning action. In the
most general sense, our learning agent"s internal learning
process update can hence be viewed as a function
λ : ℘(L) → L × ℘(L)
which takes a set of learning histories of oneself and others
as inputs and computes a new learning step to be executed
while updating the set of known learning process histories
(e.g. by appending the new learning action to one's own
learning process and leaving all information about others'
learning processes untouched). Note that in λ({l1, . . . , ln}) =
(l, {l'1, . . . , l'n}) some elements li of the input learning process
set may be descriptions of new learning data received from
the environment.
The λ-function can essentially be freely chosen by the
agent as long as one requirement is met, namely that the
learning data that is being used always stems from what
has been previously observed. More formally,
∀{l1, . . . , ln} ∈ ℘(L). λ({l1, . . . , ln}) = (l, {l'1, . . . , l'n})
⇒  ( D(l) ∪ ⋃_{i,j} D(l'i[j]) )  ⊆  ⋃_{i,j} D(li[j])
i.e. whatever λ outputs as a new learning step and updated
set of learning histories, it cannot invent new data; it has
to work with the samples that have been made available
to it earlier in the process through the environment or from
other agents (and it can of course re-train on previously used
data).
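The core of this data-provenance requirement can be checked mechanically; the sketch below (our own code, reusing the LearningStep sketch above) verifies that the data of a proposed next step stems only from previously observed samples:

def observed_samples(histories):
    """Union of all training data occurring in the known learning processes."""
    seen = set()
    for process in histories:
        for step in process:
            seen.update(step.D)
    return seen

def provenance_ok(next_step, histories):
    """The core of the requirement: D(l) of the proposed step may only contain
    samples that already occur in the known learning histories."""
    return set(next_step.D) <= observed_samples(histories)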
The goal of the agent is to output an optimal learning
step in each iteration given the information that it has. One
possibility of specifying this is to require that
∀{l1, . . . , ln} ∈ ℘(L). λ({l1, . . . , ln}) = (l, {l'1, . . . , l'n})
⇒ l = arg max_{l'∈L} g(l')(h(l'))
but since it will usually be unrealistic to compute the
optimal next learning step in every situation, it is more useful
² Note that our outlook is not only different from common,
cooperative models of distributed machine learning and data
mining, but also delineates our approach from multiagent
learning systems in which agents learn about other agents
[25], i.e. the learning goal itself is not affected by agents"
behaviour in the environment.
[Table 1: Matrix of integration functions for messages sent from learner i to j. Rows are indexed by the components Di, Hi, fi, gi, hi of the sender i's learning step and columns by the components Dj, Hj, fj, gj, hj of the receiver j's learning step; the cell for a pair (ci, c'j) lists the family of integration functions p^{c→c'}_1(ci, c'j), . . . , p^{c→c'}_{k_{c→c'}}(ci, c'j). For instance, the (Di, Dj) cell contains p^{D→D}_1(Di, Dj), . . . , p^{D→D}_{k_{D→D}}(Di, Dj) and the (gi, hj) cell contains p^{g→h}_1(gi, hj), . . . , p^{g→h}_{k_{g→h}}(gi, hj). Cells for which no integration operation is defined are marked n/a.]
to simply use g(l')(h(l')) as a running performance measure
to evaluate how well the agent is performing.
This is too abstract and unspecific for our purposes: While
it describes what agents should do (transform the settings
for the next learning step in an optimal way), it doesn't
specify how this can be achieved in practice.
2.2 Integrating learning process information
To specify how an agent"s learning process can be affected
by integrating information received from others, we need to
flesh out the details of how the learning steps it will perform
can be modified using incoming information about learning
processes described by other agents (this includes the
acquisition of new learning data from the environment as a
special case). In the most general case, we can specify this
in terms of the potential modifications to the existing
information about learning histories that can be performed using
new information. For ease of presentation, we will assume
that agents are stationary learning processes that can only
record the previously executed learning step and only
exchange information about this one individual learning step
(our model can be easily extended to cater for more complex
settings).
Let lj = ⟨Dj, Hj, fj, gj, hj⟩ be the current state of
agent j when receiving a learning process description li =
⟨Di, Hi, fi, gi, hi⟩ from agent i (for the time being, we
assume that this is a specific learning step and not a more
vague, disjunctive description of properties of the
learning step of i). Considering all possible interactions at an
abstract level, we basically obtain a matrix of
possibilities for modifications of j's learning step specification as
shown in Table 1. In this matrix, each entry specifies
a family of integration functions p^{c→c'}_1, . . . , p^{c→c'}_{k_{c→c'}}, where
c, c' ∈ {D, H, f, g, h}, which define how agent j's
component c'j will be modified using the information ci provided
about (the same or a different component of) i's learning
step, by applying p^{c→c'}_r(ci, c'j) for some r ∈ {1, . . . , k_{c→c'}}.
To put it more simply, the collection of p-functions an agent
j uses specifies how it will modify its own learning behaviour
using information obtained from i.
For the diagonal of this matrix, which contains the most
common ways of integrating new information in one"s own
learning model, obvious ways of modifying one"s own
learning process include replacing cj by ci or ignoring ci
altogether. More complex/subtle forms of learning process
integration include:
• Modification of Dj: append Di to Dj; filter out all
elements from Dj which also appear in Di; append
Di to Dj discarding all elements with attributes
outside ranges which affect gj, or those elements already
correctly classified by hj;
• Modification of Hj: use the union/intersection of Hi
and Hj; alternatively, discard elements of Hj that are
inconsistent with Dj in the process of intersection or
union, or filter out elements that cannot be obtained
using fj (unless fj is modified at the same time)
• Modification of fj: modify parameters or background
knowledge of fj using information about fi; assess
their relevance by simulating previous learning steps
on Dj using gj and discard those that do not help
improve own performance
• Modification of hj: combine hj with hi using (say)
logical or mathematical operators; make the use of hi
contingent on a pre-integration assessment of its quality
using own data Dj and gj
While this list does not include fully fledged, concrete
integration operations for learning processes, it is indicative of
the broad range of interactions between individual agents"
learning processes that our framework enables.
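As a small illustration of what such p-functions might look like in code, here are two hypothetical examples for the Dj and hj cells of the matrix (the function names and signatures are our assumptions, not part of the framework):

def p_data_append(D_i, D_j):
    """A p^{D->D} candidate: append the sender's samples to the receiver's data,
    skipping duplicates (one of the 'Modification of Dj' options above)."""
    return list(D_j) + [d for d in D_i if d not in D_j]

def p_hyp_pre_assessed(h_i, h_j, g_j):
    """A p^{h->h} candidate: adopt the received hypothesis only if it scores better
    under the receiver's own quality measure g_j."""
    return h_i if g_j(h_i) > g_j(h_j) else h_j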
Note that the list does not include any modifications to
gj. This is because we do not allow modifications to the
agent"s own quality measure as this would render the model
of rational (learning) action useless (if the quality measure
is relative and volatile, we cannot objectively judge learning
performance). Also note that some of the above examples
require consulting other elements of lj than those appearing
as arguments of the p-operations; we omit these for ease
of notation, but emphasise that information-rich operations
will involve consulting many different aspects of lj.
Apart from operations along the diagonal of the matrix,
more exotic integration operations are conceivable that
combine information about different components. In theory
we could fill most of the matrix with entries for them, but
for lack of space we list only a few examples:
• Modification of Dj using fi: pre-process samples in
fi, e.g. to achieve intermediate representations that fj
can be applied to
• Modification of Dj using hi: filter out samples from
Dj that are covered by hi and build hj using fj only
on remaining samples
• Modification of Hj using fi: filter out hypotheses from
Hj that are not realisable using fi
• Modification of hj using gi: if hj is composed of several
sub-components, filter out those sub-components that
do not perform well according to gi
• . . .
Finally, many messages received from others describing
properties of their learning processes will contain
information about several elements of a learning step, giving rise to
yet more complex operations that depend on which kinds of
information are available.
Figure 2: Screenshot of our simulation system,
displaying online vessel tracking data for the North Sea
region
3. APPLICATION EXAMPLE
3.1 Domain description
As an illustration of our framework, we present an
agent-based data mining system for clustering-based surveillance
using AIS (Automatic Identification System [1]) data. In
our application domain, different commercial and
governmental agencies track the journeys of ships over time
using AIS data which contains structured information
automatically provided by ships equipped with shipborne
mobile AIS stations to shore stations, other ships and aircrafts.
This data contains the ship"s identity, type, position, course,
speed, navigational status and other safety-related
information. Figure 2 shows a screenshot of our simulation system.
It is the task of AIS agencies to detect anomalous
behaviour so as to alarm police/coastguard units to further
investigate unusual, potentially suspicious behaviour. Such
behaviour might include things such as deviation from the
standard routes between the declared origin and destination
of the journey, unexpected close encounters between
different vessels on sea, or unusual patterns in the choice of
destination over multiple journeys, taking the type of
vessel and reported freight into account. While the reasons for
such unusual behaviour may range from pure coincidence or
technical problems to criminal activity (such as smuggling,
piracy, terrorist/military attacks) it is obviously useful to
pre-process the huge amount of vessel (tracking) data that
is available before engaging in further analysis by human
experts.
To support this automated pre-processing task, software
used by these agencies applies clustering methods in order
to identify outliers and flag those as potentially suspicious
entities to the human user. However, many agencies active
in this domain are competing enterprises and use their
(partially overlapping, but distinct) datasets and learning
hypotheses (models) as assets and hence cannot be expected
to collaborate in a fully cooperative way to improve
overall learning results. Considering that this is the reality of
the domain in the real world, it is easy to see that a
framework like the one we have suggested above might be useful
to exploit the cooperation potential that is not exploited by
current systems.
3.2 Agent-based distributed learning system
design
To describe a concrete design for the AIS domain, we need
to specify the following elements of the overall system:
1. The datasets and clustering algorithms available to
individual agents,
2. the interaction mechanism used for exchanging
descriptions of learning processes, and
3. the decision mechanism agents apply to make learning
decisions.
Regarding 1., our agents are equipped with their own private
datasets in the form of vessel descriptions. Learning samples
are represented by tuples containing data about individual
vessels in terms of attributes A = {1, . . . , n} including things
such as width, length, etc. with real-valued domains ([Ai] =
R for all i).
In terms of learning algorithm, we consider clustering
with a fixed number of k clusters using the k-means and
k-medoids clustering algorithms [5] (fixed meaning that
the learning algorithm will always output k clusters;
however, we allow agents to change the value of k over different
learning cycles). This means that the hypothesis space can
be defined as H = {⟨c1, . . . , ck⟩ | ci ∈ R^|A|}, i.e. the set of all
possible sets of k cluster centroids in |A|-dimensional
Euclidean space. For each hypothesis h = ⟨c1, . . . , ck⟩ and any
data point d ∈ [A1] × · · · × [An], given domain [Ai] for the ith
attribute of each sample, the assignment to clusters is given
by
C(⟨c1, . . . , ck⟩, d) = arg min_{1≤j≤k} |d − cj|
i.e. d is assigned to that cluster whose centroid is closest to
the data point in terms of Euclidean distance.
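The assignment rule C(⟨c1, . . . , ck⟩, d) translates directly into code; a minimal sketch (ours) treating data points and centroids as plain numeric sequences:

import math

def assign_cluster(centroids, d):
    """C(<c1, ..., ck>, d): index (1-based, as in the text) of the closest centroid."""
    def euclid(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    distances = [euclid(d, c) for c in centroids]
    return distances.index(min(distances)) + 1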
For evaluation purposes, each dataset pertaining to a
particular agent i is initially split into a training set Di and a
validation set Vi. Then, we generate a set of fake vessels Fi
such that |Fi| = |Vi|. These two sets are used to assess the agent's
ability to detect suspicious vessels. For this, we assign a
confidence value r(h, d) to every ship d:
r(h, d) = 1 / |d − c_{C(h,d)}|
where C(h, d) is the index of the nearest centroid. Based
on this measure, we classify any vessel in Fi ∪ Vi as fake if
its r-value is below the median of all the confidences r(h, d)
for d ∈ Fi ∪ Vi. With this, we can compute the quality
gi(h) ∈ R as the ratio between all correctly classified vessels
and all vessels in Fi ∪ Vi.
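Putting the confidence measure, the median threshold and the quality measure gi together, a possible simplified implementation of this evaluation procedure is (an illustrative sketch, not the system's actual code):

import math
from statistics import median

def confidence(centroids, d):
    """r(h, d) = 1 / |d - c_{C(h,d)}|, the inverse distance to the nearest centroid."""
    nearest = min(math.sqrt(sum((x - y) ** 2 for x, y in zip(d, c))) for c in centroids)
    return 1.0 / nearest

def quality(centroids, validation, fakes):
    """g_i(h): ratio of correctly classified vessels in V_i and F_i, where a vessel is
    classified as fake iff its confidence lies below the median confidence."""
    vessels = [(d, False) for d in validation] + [(d, True) for d in fakes]
    scores = [confidence(centroids, d) for d, _ in vessels]
    threshold = median(scores)
    correct = sum(1 for (_, is_fake), r in zip(vessels, scores)
                  if (r < threshold) == is_fake)
    return correct / len(vessels)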
As concerns 2., we use a simple Contract-Net Protocol
(CNP) [20] based hypothesis trading mechanism: Before
each learning iteration, agents issue (publicly broadcasted)
Calls-For-Proposals (CfPs), advertising their own numerical
model quality. In other words, the initiator of a CNP
describes its own current learning state as (∗, ∗, ∗, gi(h), ∗)
where h is their current hypothesis/model. We assume that
agents are sincere when advertising their model quality, but
note that this quality might be of limited relevance to other
agents as they may specialise on specific regions of the data
space not related to the test set of the sender of the CfP.
Subsequently, (some) agents may issue bids in which they
advertise, in turn, the quality of their own model. If the
bids (if any) are accepted by the initiator of the protocol
who issued the CfP, the agents exchange their hypotheses
and the next learning iteration ensues.
To describe what is necessary for 3., we have to specify
(i) under which conditions agents submit bids in response
to a CfP, (ii) when they accept bids in the CNP negotiation
process, and (iii) how they integrate the received
information in their own learning process. Concerning (i) and (ii),
we employ a very simple rule that is identical in both cases:
let g be one's own model quality and g' that advertised by
the CfP (or by the highest bid, respectively). If g' > g we respond
to the CfP (accept the bid); otherwise we respond to the CfP
(accept the bid) with probability P(g'/g) and ignore (reject) it
otherwise. If two agents make a deal, they exchange their
learning hypotheses (models). In our experiments, g and g' are
calculated by an additional agent that acts as a global
validation mechanism for all agents (in a more realistic setting a
comparison mechanism for different g functions would have
to be provided).
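The respond/accept rule then amounts to a one-line probabilistic decision; in the sketch below we assume, purely for illustration, that P is the identity on the quality ratio g'/g:

import random

def engage(own_quality, advertised_quality):
    """Decide whether to respond to a CfP or to accept a bid: always engage if the
    advertised model is better than our own, otherwise engage with probability g'/g."""
    if advertised_quality > own_quality:
        return True
    return random.random() < advertised_quality / max(own_quality, 1e-9)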
As for (iii), each agent uses a single model merging
operator taken from the following two classes of operators (hj is
the receiver"s own model and hi is the provider"s model):
• p^{h→h}(hi, hj):
- m-join: The m best clusters (in terms of coverage
of Dj) from hypothesis hi are appended to hj.
- m-select: The set of the m best clusters (in terms
of coverage of Dj) from the union hi ∪hj is chosen
as a new model. (Unlike m-join this method does
not prefer own clusters over others'.)
• p^{h→D}(hi, Dj):
- m-filter: The m best clusters (as above) from
hi are identified and appended to a new model
formed by using those samples not covered by
these clusters applying the own learning
algorithm fj.
Whenever m is large enough to encompass all clusters, we
simply write join or filter for them. In section 4 we analyse
the performance of each of these two classes for different
choices of m.
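In code, the three operator classes can be sketched as follows, assuming helper functions coverage(c, D) (how many samples of D a cluster c covers), covers(c, d) (whether c covers sample d) and retrain (the agent's own learning function fj); these helpers and signatures are our assumptions:

def best_m(clusters, D, coverage, m):
    """The m clusters with the highest coverage of the dataset D."""
    return sorted(clusters, key=lambda c: coverage(c, D), reverse=True)[:m]

def m_join(h_i, h_j, D_j, coverage, m):
    """m-join: append the m best foreign clusters to the own model h_j."""
    return list(h_j) + best_m(h_i, D_j, coverage, m)

def m_select(h_i, h_j, D_j, coverage, m):
    """m-select: keep the m best clusters of the union, own and foreign alike."""
    return best_m(list(h_i) + list(h_j), D_j, coverage, m)

def m_filter(h_i, D_j, coverage, covers, retrain, m):
    """m-filter: take the m best foreign clusters, retrain the own algorithm f_j
    only on the samples they do not cover, and append the resulting clusters."""
    kept = best_m(h_i, D_j, coverage, m)
    remaining = [d for d in D_j if not any(covers(c, d) for c in kept)]
    return kept + list(retrain(remaining))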
It is noteworthy that this agent-based distributed data
mining system is one of the simplest conceivable instances
of our abstract architecture. While we have previously
applied it also to a more complex market-based architecture
using Inductive Logic Programming learners in a transport
logistics domain [22], we believe that the system described
here is complex enough to illustrate the key design decisions
involved in using our framework and provides simple
example solutions for these design issues.
4. EXPERIMENTAL RESULTS
Figure 3 shows results obtained from simulations with
three learning agents in the above system using the k-means
and k-medoids clustering methods respectively. We
partition the total dataset of 300 ships into three disjoint sets of
100 samples each and assign each of these to one learning
agent. The Single Agent is learning from the whole dataset.
The parameter k is set to 10 as this is the optimal value for
the total dataset according to the Davies-Bouldin index [9].
For m-select we assume m = k which achieves a constant
Figure 3: Performance results obtained for different
integration operations in homogeneous learner
societies using the k-means (top) and k-medoids
(bottom) methods
model size. For m-join and m-filter we assume m = 3 to
limit the extent to which models increase over time.
During each experiment the learning agents receive ship
descriptions in batches of 10 samples. Between these
batches, there is enough time to exchange the models among
the agents and recompute the models if necessary. Each
ship is described using width, length, draught and speed
attributes with the goal of learning to detect which vessels
have provided fake descriptions of their own properties. The
validation set contains 100 real and 100 randomly generated
fake ships. To generate sufficiently realistic properties for
fake ships, their individual attribute values are taken from
randomly selected ships in the validation set (so that each
fake sample is a combination of attribute values of several
existing ships).
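This fake-vessel generation procedure can be reproduced in a few lines (an illustrative sketch of our own; ships are attribute vectors):

import random

def make_fake_ships(real_ships, n):
    """Each fake ship combines, attribute by attribute, values drawn from randomly
    selected real ships, so that individual attribute values remain realistic."""
    n_attrs = len(real_ships[0])
    return [[random.choice(real_ships)[a] for a in range(n_attrs)]
            for _ in range(n)]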
In these experiments, we are mainly interested in
investigating whether a simple form of knowledge sharing between
self-interested learning agents could improve agent
performance compared to a setting of isolated learners. Thereby,
we distinguish between homogeneous learner societies where
all agents use the same clustering algorithm and
heterogeneous ones where different agents use different algorithms.
As can be seen from the performance plots in Figure 3
(homogeneous case) and 4 (heterogeneous case, two agents
use the same method and one agent uses the other) this is
clearly the case for the (unrestricted) join and filter
integration operations (m = k) in both cases. This is quite natural,
as these operations amount to sharing all available model
knowledge among agents (under appropriate constraints
depending on how beneficial the exchange seems to the agents).
We can see that the quality of these operations is very close
to the Single Agent that has access to all training data.
For the restricted (m < k) m-join, m-filter and m-select
methods we can also observe an interesting distinction,
Figure 4: Performance results obtained for
different integration operations in heterogeneous societies
with the majority of learners using the k-means
(top) and k-medoids (bottom) methods
namely that these perform similarly to the isolated learner
case in homogeneous agent groups but better than isolated
learners in more heterogeneous societies. This suggests that
heterogeneous learners are able to benefit even from rather
limited knowledge sharing (and this is what using a rather
small m = 3 amounts to given that k = 10) while this is
not always true for homogeneous agents. This nicely
illustrates how different learning or data mining algorithms can
specialise on different parts of the problem space and then
integrate their local results to achieve better individual
performance.
Apart from these obvious performance benefits,
integrating partial learning results can also have other advantages:
The m-filter operation, for example, decreases the number
of learning samples and thus can speed up the learning
process. The relative number of filtered examples measured in
our experiments is shown in the following table.
              k-means     k-medoids
filtering     30-40 %     10-20 %
m-filtering   20-30 %     5-15 %
The overall conclusion we can draw from these initial
experiments with our architecture is that since a very
simplistic application of its principles has proven capable of
improving the performance of individual learning agents, it is
worthwhile investigating more complex forms of
information exchange about learning processes among autonomous
learners.
5. RELATED WORK
We have already mentioned work on distributed
(nonagent) machine learning and data mining in the
introductory chapter, so in this section we shall restrict ourselves to
approaches that are more closely related to our outlook on
distributed learning systems.
Very often, approaches that are allegedly agent-based
completely disregard agent autonomy and prescribe local
decision-making procedures a priori. A typical example for
this type of system is the one suggested by Caragea et al. [6]
which is based on a distributed support-vector machine
approach where agents incrementally join their datasets
together according to a fixed distributed algorithm. A similar
example is the work of Weiss [24], where groups of
classifier agents learn to organise their activity so as to optimise
global system behaviour.
The difference between this kind of collaborative
agent-based learning systems [16] and our own framework is that
these approaches assume a joint learning goal that is pursued
collaboratively by all agents.
Many approaches rely heavily on a homogeneity
assumption: Plaza and Ontanon [15] suggest methods for
agent-based intelligent reuse of cases in case-based reasoning, but
their approach is only applicable to societies of homogeneous learners (and
is geared towards a specific learning method). An
agent-based method for integrating distributed cluster analysis
processes using density estimation is presented by Klusch
et al. [13] which is also specifically designed for a
particular learning algorithm. The same is true of [22, 23] which
both present market-based mechanisms for aggregating the
output of multiple learning agents, even though these
approaches consider more interesting interaction mechanisms
among learners.
A number of approaches for sharing learning data [18]
have also been proposed: Grecu and Becker [12] suggest an
exchange of learning samples among agents, and Ghosh et
al. [11] is a step in the right direction in terms of revealing
only partial information about one"s learning process as it
deals with limited information sharing in distributed
clustering.
Papyrus [3] is a system that provides a markup language
for meta-description of data, hypotheses and intermediate
results and allows for an exchange of all this information
among different nodes, however with a strictly cooperative
goal of distributing the load for massively distributed data
mining tasks.
The MALE system [19] was a very early multiagent
learning system in which agents used a blackboard approach to
communicate their hypotheses. Agents were able to critique
each others" hypotheses until agreement was reached.
However, all agents in this system were identical and the system
was strictly cooperative.
The ANIMALS system [10] was used to simulate
multistrategy learning by combining two or more learning
techniques (represented by heterogeneous agents) in order to
overcome weaknesses in the individual algorithms, yet it was
also a strictly cooperative system.
As these examples show and to the best of our knowledge,
there have been no previous attempts to provide a
framework that can accommodate both independent and
heterogeneous learning agents and this can be regarded as the main
contribution of our work.
6. CONCLUSION
In this paper, we outlined a generic, abstract framework
for distributed machine learning and data mining. This
framework constitutes, to our knowledge, the first attempt
to capture complex forms of interaction between
heterogeneous and/or self-interested learners in an architecture that
can be used as the foundation for implementing systems that
use complex interaction and reasoning mechanisms to enable
agents to inform and improve their learning abilities with
information provided by other learners in the system,
provided that all agents engage in a sufficiently similar learning
activity.
To illustrate that the abstract principles of our
architecture can be turned into concrete, computational systems,
we described a market-based distributed clustering system
which was evaluated in the domain of vessel tracking for
purposes of identifying deviant or suspicious behaviour.
Although our experimental results only hint at the potential
of using our architecture, they underline that what we are
proposing is feasible in principle and can have beneficial
effects even in its most simple instantiation.
Yet there are a number of issues that we have not addressed
in the presentation of the architecture and its empirical
evaluation: Firstly, we have not considered the cost of
communication and made the implicit assumption that the required
communication comes for free. This is of course
inadequate if we want to evaluate our method in terms of the
total effort required for producing a certain quality of
learning results. Secondly, we have not experimented with agents
using completely different learning algorithms (e.g. symbolic
and numerical). In systems composed of completely different
agents the circumstances under which successful information
exchange can be achieved might be very different from those
described here, and much more complex communication and
reasoning methods may be necessary to achieve a useful
integration of different agents" learning processes. Finally, more
sophisticated evaluation criteria for such distributed
learning architectures have to be developed to shed some light
on what the right measures of optimality for autonomously
reasoning and communicating agents should be.
These issues, together with a more systematic and
thorough investigation of advanced interaction and
communication mechanisms for distributed, collaborating and
competing agents will be the subject of our future work on the
subject.
Acknowledgement: We gratefully acknowledge the
support of the presented research by Army Research
Laboratory project N62558-03-0819 and Office for Naval Research
project N00014-06-1-0232.
7. REFERENCES
[1] http://www.aislive.com.
[2] http://www.healthagents.com.
[3] S. Bailey, R. Grossman, H. Sivakumar, and
A. Turinsky. Papyrus: A System for Data Mining over
Local and Wide Area Clusters and Super-Clusters. In
Proc. of the Conference on Supercomputing. 1999.
[4] E. Bauer and R. Kohavi. An Empirical Comparison of
Voting Classification Algorithms: Bagging, Boosting,
and Variants. Machine Learning, 36, 1999.
[5] P. Berkhin. Survey of Clustering Data Mining
Techniques, Technical Report, Accrue Software, 2002.
[6] D. Caragea, A. Silvescu, and V. Honavar. Agents that
Learn from Distributed Dynamic Data sources. In
Proc. of the Workshop on Learning Agents, 2000.
[7] N. Chawla, S. E., and L. O. Hall. Creating
ensembles of classifiers. In Proceedings of ICDM 2001,
pages 580-581, San Jose, CA, USA, 2001.
[8] D. Dash and G. F. Cooper. Model Averaging for
Prediction with Discrete Bayesian Networks. Journal
of Machine Learning Research, 5:1177-1203, 2004.
[9] D. L. Davies and D. W. Bouldin. A Cluster
Separation Measure. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 4:224-227, 1979.
[10] P. Edwards and W. Davies. A Heterogeneous
Multi-Agent Learning System. In Proceedings of the
Special Interest Group on Cooperating Knowledge
Based Systems, pages 163-184, 1993.
[11] J. Ghosh, A. Strehl, and S. Merugu. A Consensus
Framework for Integrating Distributed Clusterings
Under Limited Knowledge Sharing. In NSF Workshop
on Next Generation Data Mining, 99-108, 2002.
[12] D. L. Grecu and L. A. Becker. Coactive Learning for
Distributed Data Mining. In Proceedings of KDD-98,
pages 209-213, New York, NY, August 1998.
[13] M. Klusch, S. Lodi, and G. Moro. Agent-based
distributed data mining: The KDEC scheme. In
AgentLink, number 2586 in LNCS. Springer, 2003.
[14] T. M. Mitchell. Machine Learning, pages 29-36.
McGraw-Hill, New York, 1997.
[15] S. Ontanon and E. Plaza. Recycling Data for
Multi-Agent Learning. In Proc. of ICML-05, 2005.
[16] L. Panait and S. Luke. Cooperative multi-agent
learning: The state of the art. Autonomous Agents
and Multi-Agent Systems, 11(3):387-434, 2005.
[17] B. Park and H. Kargupta. Distributed Data Mining:
Algorithms, Systems, and Applications. In N. Ye,
editor, Data Mining Handbook, pages 341-358, 2002.
[18] F. J. Provost and D. N. Hennessy. Scaling up:
Distributed machine learning with cooperation. In
Proc. of AAAI-96, pages 74-79. AAAI Press, 1996.
[19] S. Sian. Extending learning to multiple agents: Issues
and a model for multi-agent machine learning
(ma-ml). In Y. Kodratoff, editor, Machine
Learning - EWSL-91, pages 440-456. Springer-Verlag, 1991.
[20] R. Smith. The contract-net protocol: High-level
communication and control in a distributed problem
solver. IEEE Transactions on Computers,
C-29(12):1104-1113, 1980.
[21] S. J. Stolfo, A. L. Prodromidis, S. Tselepis, W. Lee,
D. W. Fan, and P. K. Chan. Jam: Java Agents for
Meta-Learning over Distributed Databases. In Proc. of
the KDD-97, pages 74-81, USA, 1997.
[22] J. Toˇziˇcka, M. Jakob, and M. Pˇechouˇcek.
Market-Inspired Approach to Collaborative Learning.
In Cooperative Information Agents X (CIA 2006),
volume 4149 of LNCS, pages 213-227. Springer, 2006.
[23] Y. Z. Wei, L. Moreau, and N. R. Jennings.
Recommender systems: a market-based design. In
Proceedings of AAMAS-03), pages 600-607, 2003.
[24] G. Weiß. A Multiagent Perspective of Parallel and
Distributed Machine Learning. In Proceedings of
Agents"98, pages 226-230, 1998.
[25] G. Weiss and P. Dillenbourg. What is "multi" in
multi-agent learning? Collaborative-learning:
Cognitive and Computational Approaches, 64-80, 1999.
The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 685 | unsupervise cluster;distributed clustering application;multiagent learn;framework and architecture;meta-reasoning;unsupervised clustering;agent;bayesian classifier;frameworks and architecture;historical information;distribute machine learn;individual learning process;machine learning;consensusbased method;autonomous learning agent;datum mining;communication and coordination |
train_I-38 | Bidding Algorithms for a Distributed Combinatorial Auction | Distributed allocation and multiagent coordination problems can be solved through combinatorial auctions. However, most of the existing winner determination algorithms for combinatorial auctions are centralized. The PAUSE auction is one of a few efforts to release the auctioneer from having to do all the work (it might even be possible to get rid of the auctioneer). It is an increasing price combinatorial auction that naturally distributes the problem of winner determination amongst the bidders in such a way that they have an incentive to perform the calculation. It can be used when we wish to distribute the computational load among the bidders or when the bidders do not wish to reveal their true valuations unless necessary. PAUSE establishes the rules the bidders must obey. However, it does not tell us how the bidders should calculate their bids. We have developed a couple of bidding algorithms for the bidders in a PAUSE auction. Our algorithms always return the set of bids that maximizes the bidder"s utility. Since the problem is NP-Hard, run time remains exponential on the number of items, but it is remarkably better than an exhaustive search. In this paper we present our bidding algorithms, discuss their virtues and drawbacks, and compare the solutions obtained by them to the revenue-maximizing solution found by a centralized winner determination algorithm. | 1. INTRODUCTION
Both the research and practice of combinatorial auctions
have grown rapidly in the past ten years. In a
combinatorial auction bidders can place bids on combinations of
items, called packages or bidsets, rather than just
individual items. Once the bidders place their bids, it is necessary
to find the allocation of items to bidders that maximizes
the auctioneer"s revenue. This problem, known as the
winner determination problem, is a combinatorial optimization
problem and is NP-Hard [10]. Nevertheless, several
algorithms that have a satisfactory performance for problem
sizes and structures occurring in practice have been
developed. The practical applications of combinatorial auctions
include: allocation of airport takeoff and landing time slots,
procurement of freight transportation services, procurement
of public transport services, and industrial procurement [2].
Because of their wide applicability, one cannot hope for a
general-purpose winner determination algorithm that can
efficiently solve every instance of the problem. Thus,
several approaches and algorithms have been proposed to
address the winner determination problem. However, most of
the existing winner determination algorithms for
combinatorial auctions are centralized, meaning that they require
all agents to send their bids to a centralized auctioneer who
then determines the winners. Examples of these algorithms
are CASS [3], Bidtree [11] and CABOB [12]. We believe that
distributed solutions to the winner determination problem
should be studied as they offer a better fit for some
applications as when, for example, agents do not want to reveal
their valuations to the auctioneer.
The PAUSE (Progressive Adaptive User Selection
Environment) auction [4, 5] is one of a few efforts to distribute
the problem of winner determination amongst the bidders.
PAUSE establishes the rules the participants have to adhere
to so that the work is distributed amongst them. However,
it is not concerned with how the bidders determine what
they should bid.
In this paper we present two algorithms, pausebid and
cachedpausebid, which enable agents in a PAUSE
auction to find the bidset that maximizes their utility. Our
algorithms implement a myopic utility maximizing strategy
and are guaranteed to find the bidset that maximizes the
agent"s utility given the outstanding best bids at a given
time. pausebid performs a branch and bound search
completely from scratch every time that it is called.
cachedpausebid is a caching-based algorithm which explores fewer
nodes, since it caches some solutions.
2. THE PAUSE AUCTION
A PAUSE auction for m items has m stages. Stage 1
consists of having simultaneous ascending price open-cry
auctions and during this stage the bidders can only place bids on
individual items. At the end of this stage we will know what
the highest bid for each individual item is and who placed
that bid. Each successive stage k = 2, 3, . . . , m consists of
an ascending price auction where the bidders must submit
bidsets that cover all items but each one of the bids must be
for k items or less. The bidders are allowed to use bids that
other agents have placed in previous rounds when building
their bidsets, thus allowing them to find better solutions.
Also, any new bidset has to have a sum of bid prices which
is bigger than that of the currently winning bidset. At the
end of each stage k all agents know the best bid for every
subset of size k or less. Also, at any point in time after stage
1 has ended there is a standing bidset whose value increases
monotonically as new bidsets are submitted. Since in the
final round all agents consider all possible bidsets, we know
that the final winning bidset will be one such that no agent
can propose a better bidset. Note, however, that this
bidset is not guaranteed to be the one that maximizes revenue
since we are using an ascending price auction so the
winning bid for each set will be only slightly bigger than the
second highest bid for the particular set of items. That is,
the final prices will not be the same as the prices in a
traditional combinatorial auction where all the bidders bid their
true valuation. However, there remains the open question
of whether the final distribution of items to bidders found
in a PAUSE auction is the same as the revenue maximizing
solution. Our test results provide an answer to this question.
The PAUSE auction makes the job of the auctioneer very
easy. All it has to do is to make sure that each new
bidset has a revenue bigger than the current winning bidset, as
well as make sure that every bid in an agent"s bidset that
is not his does indeed correspond to some other agents"
previous bid. The computational problem shifts from one of
winner determination to one of bid generation. Each agent
must search over the space of all bidsets which contain at
least one of its bids. The search is made easier by the fact
that the agent needs to consider only the current best bids
and only wants bidsets where its own utility is higher than
in the current winning bidset. Each agent also has a clear
incentive for performing this computation, namely, its
utility only increases with each bidset it proposes (of course, it
might decrease with the bidsets that others propose).
Finally, the PAUSE auction has been shown to be envy-free in
that at the conclusion of the auction no bidder would prefer
to exchange his allocation with that of any other bidder [2].
We can even envision completely eliminating the
auctioneer and, instead, have every agent perform the task of the
auctioneer. That is, all bids are broadcast and when an
agent receives a bid from another agent it updates the set
of best bids and determines if the new bid is indeed better
than the current winning bid. The agents would have an
incentive to perform their computation as it will increase their
expected utility. Also, any lies about other agents" bids are
easily found out by keeping track of the bids sent out by
every agent (the set of best bids). Namely, the only one that
can increase an agent"s bid value is the agent itself.
Anyone claiming a higher value for some other agent is lying.
The only thing missing is an algorithm that calculates the
utility-maximizing bidset for each agent.
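The bookkeeping such a (possibly replicated) auctioneer must perform is indeed light; the sketch below checks a proposed bidset under an illustrative bid representation of our own (items, agent, value):

from collections import namedtuple

Bid = namedtuple("Bid", ["items", "agent", "value"])   # illustrative bid record

def valid_new_bidset(bidset, sender, all_items, best_bids, current_revenue, epsilon):
    """Checks a proposed bidset: it must cover all items, beat the current revenue by
    at least the minimum increment, and every bid not owned by the sender must match
    a bid some agent has actually placed before (tracked in best_bids)."""
    covered = set().union(*[set(b.items) for b in bidset]) if bidset else set()
    if covered != set(all_items):
        return False
    if sum(b.value for b in bidset) < current_revenue + epsilon:
        return False
    for b in bidset:
        if b.agent != sender:
            known = best_bids.get(frozenset(b.items))
            if known is None or known.agent != b.agent or known.value != b.value:
                return False
    return True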
3. PROBLEM FORMULATION
A bid b is composed of three elements: b^items (the set of
items the bid is over), b^agent (the agent that placed the bid),
and b^value (the value or price of the bid). The agents
maintain a set B of the current best bids, one for each set of items
of size ≤ k, where k is the current stage. At any point in the
auction, after the first round, there will also be a set W ⊆ B
of currently winning bids. This is the set of bids that covers
all the items and currently maximizes the revenue, where
the revenue of W is given by
r(W) = Σ_{b∈W} b^value.   (1)
Agent i's value function is given by vi(S) ∈ R, where S is a
set of items. Given an agent's value function and the current
winning bidset W we can calculate the agent's utility from
W as
ui(W) = Σ_{b∈W | b^agent=i} ( vi(b^items) − b^value ).   (2)
That is, the agent's utility for a bidset W is the value it
receives for the items it wins in W minus the price it must
pay for those items. If the agent is not winning any items
then its utility is zero.
The goal of the bidding agents in the PAUSE auction is to
maximize their utility, subject to the constraint that their
next set of bids must have a total revenue that is at least ε
bigger than the current revenue, where ε is the smallest
increment allowed in the auction. Formally, given that W is
the current winning bidset, agent i must find a g∗_i such that
r(g∗_i) ≥ r(W) + ε and
g∗_i = arg max_{g⊆2^B} ui(g),   (3)
where each g is a set of bids that covers all items and
∀ b ∈ g: (b ∈ B) or (b^agent = i and b^value > B(b^items) and
size(b^items) ≤ k), and where B(items) is the value of the
bid in B for the set items (if there is no bid for those items
it returns zero). That is, each bid b in g must satisfy at least
one of the two following conditions. 1) b is already in B, 2)
b is a bid of size ≤ k in which the agent i bids higher than
the price for the same items in B.
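The definitions (1)-(3) translate directly into code; the following sketch uses a simple Bid record of our own design and a best_bids mapping from itemsets to the best known bid:

from dataclasses import dataclass

@dataclass(frozen=True)
class Bid:
    items: frozenset
    agent: str
    value: float

def revenue(bidset):
    """r(W), eq. (1): the sum of the bid prices."""
    return sum(b.value for b in bidset)

def utility(agent, bidset, valuation):
    """u_i(W), eq. (2): value of the items the agent wins minus what it pays."""
    return sum(valuation(b.items) - b.value for b in bidset if b.agent == agent)

def bid_allowed(b, agent, best_bids, k):
    """A bid b may appear in a new bidset if it already is a best bid in B, or if it
    is the agent's own bid of size <= k that beats the best known price B(b.items)."""
    existing = best_bids.get(b.items)
    if existing == b:
        return True
    best_price = existing.value if existing is not None else 0.0
    return b.agent == agent and b.value > best_price and len(b.items) <= k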
4. BIDDING ALGORITHMS
As defined by the PAUSE auction, the first stage simply consists of
several simultaneous English auctions, with the bidders
submitting bids on individual items. In this case, an agent's
dominant strategy is to bid higher than the current winning bid
until it reaches its valuation for that particular item. Our
algorithms focus on the subsequent stages, k > 1. When
k > 1, agents have to find g∗_i. This can be done by
performing a complete search on B. However, this approach is
computationally expensive since it produces a large search
tree. Our algorithms represent alternative approaches to
overcome this expensive search.
4.1 The PAUSEBID Algorithm
In the pausebid algorithm (shown in Figure 1) we
implement some heuristics to prune the search tree. Given
that bidders want to maximize their utility and that at any
given point there are likely only a few bids within B which
pausebid(i, k)
 1  my-bids ← ∅
 2  their-bids ← ∅
 3  for b ∈ B
 4      do if b^agent = i or vi(b^items) > b^value
 5          then my-bids ← my-bids + new Bid(b^items, i, vi(b^items))
 6          else their-bids ← their-bids + b
 7  for S ∈ subsets of k or fewer items such that vi(S) > 0 and ¬∃_{b∈B} b^items = S
 8      do my-bids ← my-bids + new Bid(S, i, vi(S))
 9  bids ← my-bids + their-bids
10  g∗ ← ∅                                   ▷ Global variable
11  u∗ ← ui(W)                               ▷ Global variable
12  pbsearch(bids, ∅)
13  surplus ← Σ_{b∈g∗ | b^agent=i} (b^value − B(b^items))
14  if surplus = 0
15      then return g∗
16  my-payment ← vi(g∗) − u∗
17  for b ∈ g∗ | b^agent = i
18      do if my-payment ≤ 0
19          then b^value ← B(b^items)
20          else b^value ← B(b^items) + my-payment · (b^value − B(b^items)) / surplus
21  return g∗
Figure 1: The pausebid algorithm which implements
a branch and bound search. i is the agent and k is
the current stage of the auction, for k ≥ 2.
the agent can dominate, we start by defining my-bids to be
the list of bids for which the agent"s valuation is higher than
the current best bid, as given in B. We set the value of
these bids to be the agent"s true valuation (but we won"t
necessarily be bidding true valuation, as we explain later).
Similarly, we set their-bids to be the rest of the bids from B.
Finally, the agent"s search list is simply the concatenation
of my-bids and their-bids. Note that the agent"s own bids
are placed first on the search list as this will enable us to do
more pruning (pausebid lines 3 to 9). The agent can now
perform a branch and bound search on the branch-on-bids
tree produced by these bids. This branch and bound search
is implemented by pbsearch (Figure 2). Our algorithm not
only implements the standard bound but it also implements
other pruning techniques in order to further reduce the size
of the search tree.
The bound we use is the maximum utility that the agent
can expect to receive from a given set of bids. We call it u∗.
Initially, u∗ is set to ui(W) (pausebid line 11) since that
is the utility the agent currently receives and any solution
he proposes should give him more utility. If pbsearch ever
comes across a partial solution where the maximum utility
the agent can expect to receive is less than u∗
then that
subtree is pruned (pbsearch line 21). Note that we can
determine the maximum utility only after the algorithm has
searched over all of the agent"s own bids (which are first on
the list) because after that we know that the solution will
not include any more bids where the agent is the winner
thus the agent"s utility will no longer increase. For example,
pbsearch(bids, g)
 1  if bids = ∅ then return
 2  b ← first(bids)
 3  bids ← bids − b
 4  g ← g + b
 5  Īg ← items not in g
 6  if g does not contain a bid from i
 7      then return
 8  if g includes all items
 9      then min-payment ← max(0, r(W) + ε − (r(g) − ri(g)), Σ_{b∈g | b^agent=i} B(b^items))
10          max-utility ← vi(g) − min-payment
11          if r(g) > r(W) and max-utility ≥ u∗
12              then g∗ ← g
13                   u∗ ← max-utility
14          pbsearch(bids, g − b)                                   ▷ b is Out
15      else max-revenue ← r(g) + max(h(Īg), hi(Īg))
16          if max-revenue ≤ r(W)
17              then pbsearch(bids, g − b)                          ▷ b is Out
18          elseif b^agent = i
19              then min-payment ← (r(W) + ε) − (r(g) − ri(g)) − h(Īg)
20                   max-utility ← vi(g) − min-payment
21                   if max-utility > u∗
22                       then pbsearch({x ∈ bids | x^items ∩ b^items = ∅}, g)   ▷ b is In
23                   pbsearch(bids, g − b)                          ▷ b is Out
24          else
25              pbsearch({x ∈ bids | x^items ∩ b^items = ∅}, g)     ▷ b is In
26              pbsearch(bids, g − b)                               ▷ b is Out
27  return
Figure 2: The pbsearch recursive procedure where
bids is the set of available bids and g is the current
partial solution.
if an agent has only one bid in my-bids then the maximum
utility he can expect is equal to his value for the items in
that bid minus the minimum possible payment we can make
for those items and still come up with a set of bids that has
revenue greater than r(W). The calculation of the minimum
payment is shown in line 19 for the partial solution case and
line 9 for the case where we have a complete solution in
pbsearch. Note that in order to calculate the min-payment
for the partial solution case we need an upper bound on the
payments that we must make for each item. This upper
bound is provided by
h(S) = Σ_{s∈S} max_{b∈B | s∈b^items} ( b^value / size(b^items) ).   (4)
This function produces a bound identical to the one used by
the Bidtree algorithm: it merely assigns to each individual
item in S a value equal to the maximum bid in B divided
by the number of items in that bid.
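Equation (4) can be computed in a few lines; a sketch reusing the Bid record and best_bids mapping assumed above:

def h_upper_bound(S, best_bids):
    """h(S), eq. (4): for every item in S take the best per-item share
    value/size(items) over all best bids containing that item, and sum the shares."""
    total = 0.0
    for s in S:
        shares = [b.value / len(b.items) for b in best_bids.values() if s in b.items]
        total += max(shares, default=0.0)
    return total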
To prune the branches that cannot lead to a solution with
revenue greater than the current W, the algorithm considers
both the values of the bids in B and the valuations of the
agent. Similarly to (4) we define
hi(S, k) = Σ_{s∈S} max_{S' | size(S')≤k and s∈S' and vi(S')>0} ( vi(S') / size(S') )   (5)
which assigns to each individual item s in S the maximum
value produced by the valuation of S' divided by the size
of S', where S' is a set for which the agent has a valuation
greater than zero, contains s, and whose size is less than or equal
to k. The algorithm uses the heuristics h and hi (lines 15
and 19 of pbsearch) to prune such branches
in the same way an A∗ algorithm uses its heuristic. A final
pruning technique implemented by the algorithm is ignoring
any branches where the agent has no bids in the current
answer g and no more of the agent"s bids are in the list
(pbsearch lines 6 and 7).
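To make the two bounds concrete, the following Python sketch computes h and hi under the assumption that bids are records with agent, items, and value fields and that agent i's valuation is a dictionary keyed by frozensets of items; these names are our own illustration and are not part of the original pseudocode.

from typing import Dict, FrozenSet, Iterable, NamedTuple

class Bid(NamedTuple):
    agent: str
    items: FrozenSet[str]
    value: float

def h(S: Iterable[str], B: Iterable[Bid]) -> float:
    # Upper bound (4): each item gets the best per-item price over all best bids covering it.
    total = 0.0
    for s in S:
        prices = [b.value / len(b.items) for b in B if s in b.items]
        total += max(prices, default=0.0)
    return total

def h_i(S: Iterable[str], k: int, v_i: Dict[FrozenSet[str], float]) -> float:
    # Upper bound (5): same idea, but using agent i's own valuations over sets of size <= k.
    total = 0.0
    for s in S:
        ratios = [val / len(T) for T, val in v_i.items()
                  if len(T) <= k and s in T and val > 0]
        total += max(ratios, default=0.0)
    return total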
The resulting g* found by pbsearch is thus the set of bids that has revenue greater than r(W) and maximizes agent i's utility. However, agent i's bids in g* are still set to his own valuation and not to the lowest possible price. Lines 17 to 20 in pausebid are responsible for setting the agent's payments so that it can achieve its maximum utility u*. If the agent has only one bid in g*, then it is simply a matter of reducing the payment of that bid by u* from the current maximum of the agent's true valuation. However, if the agent has more than one bid, then we face the problem of how to distribute the agent's payments among these bids. There are many ways of distributing the payments, and there does not appear to be a dominant strategy for performing this distribution. We have chosen to distribute the payments in proportion to the agent's true valuation for each set of items.
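A minimal sketch of this proportional split, assuming the agent's winning bids and valuations are available as simple Python structures (the names below are illustrative and not taken from pausebid):

from collections import namedtuple

Bid = namedtuple("Bid", "agent items value")   # illustrative record; items is a frozenset

def distribute_payments(own_bids, v_i, u_star):
    # Split the agent's total payment across its winning bids in proportion to
    # its true valuation for each item set (one possible reading of lines 17-20).
    total_value = sum(v_i[b.items] for b in own_bids)
    my_payment = total_value - u_star          # pay u_star less than the true valuation overall
    return {b.items: my_payment * v_i[b.items] / total_value for b in own_bids}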
pausebid assumes that the set of best bids B and the current best winning bidset W remain constant during its execution, and it returns the agent's myopic utility-maximizing bidset (if there is one) using a branch and bound search. However, it repeats the whole search at every stage. We can minimize this problem by caching the results of previous searches.
4.2 The CACHEDPAUSEBID Algorithm
The cachedpausebid algorithm (shown in Figure 3) is our second approach to solving the bidding problem in the PAUSE auction. It is based on a cache table called C-Table in which we store some solutions so as to avoid doing a complete search every time. The problem is the same: agent i has to find g*_i. We note that g*_i is a bidset that contains at least one bid from agent i. Let S be a set of items for which agent i has a valuation such that vi(S) ≥ B(S) > 0, and let g^S_i be a bidset over S such that r(g^S_i) ≥ r(W) + ε and

    g^S_i = arg max_{g ⊆ 2^B} ui(g),   (6)

where each g is a set of bids that covers all items and ∀_{b∈g} (b ∈ B) or (b^agent = i and b^value > B(b^items)), and ∃_{b∈g} b^items = S and b^agent = i. That is, g^S_i is i's best bidset for all items which includes a bid from i for the set of items S. In the PAUSE auction we cannot bid for sets of items with size greater than k. So, if for each set of items S for which vi(S) > 0 and size(S) ≤ k we have its corresponding g^S_i, then g*_i is the g^S_i that maximizes the agent's utility. That is,

    g*_i = arg max_{S | vi(S)>0 and size(S)≤k} ui(g^S_i).   (7)

Each agent i implements a hash table C-Table such that C-Table[S] = g^S for all S for which vi(S) ≥ B(S) > 0. We can
cachedpausebid(i, k, k-changed)
 1  for each S in C-Table
 2     do if vi(S) < B(S)
 3           then remove S from C-Table
 4        else if k-changed and size(S) = k
 5           then B' ← B' + new Bid(i, S, vi(S))
 6  g* ← ∅
 7  u* ← ui(W)
 8  for each S with size(S) ≤ k in C-Table
 9     do S̄ ← Items − S
10        g^S ← C-Table[S]                                           ▷ Global variable
11        min-payment ← max(r(W) + ε, Σ_{b∈g^S} B(b^items))
12        u^S ← r(g^S) − min-payment                                 ▷ Global variable
13        if (k-changed and size(S) = k) or (∃_{b∈B'} b^items ⊆ S̄ and b^agent ≠ i)
14           then B'' ← {b ∈ B' | b^items ⊆ S̄}
15                bids ← B'' + {b ∈ B | b^items ⊆ S̄ and b ∉ B''}
16                for b ∈ bids
17                   do if vi(b^items) > b^value
18                         then b^agent ← i
19                              b^value ← vi(b^items)
20                if k-changed and size(S) = k
21                   then n ← size(bids)
22                        u^S ← 0
23                   else n ← size(B'')
24                g ← ∅ + new Bid(S, i, vi(S))
25                cpbsearch(bids, g, n)
26                C-Table[S] ← g^S
27        if u^S > u* and r(g^S) ≥ r(W) + ε
28           then surplus ← Σ_{b∈g^S | b^agent=i} (b^value − B(b^items))
29                if surplus > 0
30                   then my-payment ← vi(g^S) − ui(g^S)
31                        for b ∈ g^S | b^agent = i
32                           do if my-payment ≤ 0
33                                 then b^value ← B(b^items)
34                                 else b^value ← B(b^items) + my-payment · (b^value − B(b^items)) / surplus
35                u* ← ui(g^S)
36                g* ← g^S
37        else if u^S ≤ 0 and vi(S) < B(S)
38           then remove S from C-Table
39  return g*
Figure 3: The cachedpausebid algorithm, which implements a caching-based search to find a bidset that maximizes the utility for agent i. k is the current stage of the auction (for k ≥ 2), and k-changed is a boolean that is true right after the auction moved to the next stage.
cpbsearch(bids, g, n)
 1  if bids = ∅ or n ≤ 0 then return
 2  b ← first(bids)
 3  bids ← bids − b
 4  g ← g + b
 5  Ī_g ← items not in g
 6  if g includes all items
 7     then min-payment ← max(0, r(W) + ε − (r(g) − ri(g)), Σ_{b∈g | b^agent=i} B(b^items))
 8          max-utility ← vi(g) − min-payment
 9          if r(g) > r(W) and max-utility ≥ u^S
10             then g^S ← g
11                  u^S ← max-utility
12          cpbsearch(bids, g − b, n − 1)                            ▷ b is Out
13     else max-revenue ← r(g) + max(h(Ī_g), hi(Ī_g))
14          if max-revenue ≤ r(W)
15             then cpbsearch(bids, g − b, n − 1)                    ▷ b is Out
16          elseif b^agent = i
17             then min-payment ← (r(W) + ε) − (r(g) − ri(g)) − h(Ī_g)
18                  max-utility ← vi(g) − min-payment
19                  if max-utility > u^S
20                     then cpbsearch({x ∈ bids | x^items ∩ b^items = ∅}, g, n + 1)   ▷ b is In
21                  cpbsearch(bids, g − b, n − 1)                    ▷ b is Out
22          else
23                  cpbsearch({x ∈ bids | x^items ∩ b^items = ∅}, g, n + 1)          ▷ b is In
24                  cpbsearch(bids, g − b, n − 1)                    ▷ b is Out
25  return
Figure 4: The cpbsearch recursive procedure, where bids is the set of available bids, g is the current partial solution, and n is a value that indicates how deep in the list bids the algorithm has to search.
then find g* by searching for the g^S, stored in C-Table[S], that maximizes the agent's utility, considering only the sets of items S with size(S) ≤ k. The problem remains of keeping the C-Table updated while avoiding a search over every g^S every time. cachedpausebid deals with this and other details.
Let B' be the set of bids that contains the new best bids; that is, B' contains the bids recently added to B and the bids that were already in B but have changed price (always higher), bidder, or both. Let S̄ = Items − S be the complement of S (the set of items not included in S). cachedpausebid takes three parameters: i, the agent; k, the current stage of the auction; and k-changed, a boolean that is true right after the auction moved to the next stage. Initially C-Table has one row or entry for each set S for which vi(S) > 0. We start by eliminating from C-Table the entries corresponding to each set S for which vi(S) < B(S) (line 3). Then, in the case that k-changed is true, for each set S with size(S) = k we add to B' a bid for that set with value equal to vi(S) and bidder agent i (line 5); this is a bid that the agent is now allowed to consider. We then search for g* amongst the g^S stored in C-Table; for this we only need to consider the sets with size(S) ≤ k (line 8). But how do we know that the g^S in C-Table[S] is still the best solution for S? There are only two cases when we are not sure about that and need to do a search to update C-Table[S]. These cases are: i) when k-changed is true and size(S) ≤ k, since there was no g^S stored in C-Table for this S; ii) when there exists at least one bid in B' for the set of items S̄, or a subset of it, submitted by an agent different than i, since it is probable that this new bid can produce a solution better than the one stored in C-Table[S].
We handle the two cases mentioned above in lines 13 to 26 of cachedpausebid. In both of these cases, since g^S must contain a bid for S, we need to find a bidset that covers the missing items, that is, S̄. Thus, our search space consists of all the bids in B for the set of items S̄ or for a subset of it. We build the list bids so that it contains only those bids. However, we put the bids from B' at the beginning of bids (line 14) since they are the ones that have changed. Then, we replace the bids in bids that have a price lower than the valuation agent i has for those same items with a bid from agent i for those items and value equal to the agent's valuation (lines 16-19).
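A rough sketch of how such a candidate list could be assembled is shown below; the record and parameter names are our own, and the subset tests assume item sets are represented as frozensets.

from collections import namedtuple

Bid = namedtuple("Bid", "agent items value")   # illustrative record, not from the pseudocode

def build_search_list(S_bar, B, B_new, v_i, agent_i):
    # Assemble the list searched by cpbsearch for one cache entry: changed best bids
    # over the missing items S_bar go first, the remaining best bids follow, and any
    # bid priced below agent_i's own valuation is replaced by agent_i's bid.
    changed = [b for b in B_new if b.items <= S_bar]
    rest = [b for b in B if b.items <= S_bar and b not in changed]
    bids = [Bid(agent_i, b.items, v_i[b.items])
            if v_i.get(b.items, 0) > b.value else b
            for b in changed + rest]
    return bids, len(changed)   # len(changed) plays the role of the depth limit n in case ii)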
The recursive procedure cpbsearch, called in line 25 of cachedpausebid and shown in Figure 4, is the one that finds the new g^S. cpbsearch is a slightly modified version of our branch and bound search implemented in pbsearch. The first modification is that it has a third parameter n that indicates how deep in the list bids we want to search, since it stops searching when n is less than or equal to zero and not only when the list bids is empty (line 1). Each time there is a recursive call of cpbsearch, n is decreased by one when a bid from bids is discarded or out (lines 12, 15, 21, and 24) and n remains the same otherwise (lines 20 and 23). We set the value of n before calling cpbsearch to be the size of the list bids (cachedpausebid line 21) in case i), since we want cpbsearch to search over all bids; and we set n to be the number of bids from B' included in bids (cachedpausebid line 23) in case ii), since we know that only those first n bids in bids have changed and can affect our current g^S.
Another difference from pbsearch is that the bound in cpbsearch is u^S, which we set to 0 (cachedpausebid line 22) in case i) and to r(g^S) − min-payment (cachedpausebid line 12) in case ii). We call cpbsearch with g already containing a bid for S. After cpbsearch is executed we are sure that we have the right g^S, so we store it in the corresponding C-Table[S] (cachedpausebid line 26).
When we reach line 27 in cachedpausebid, we are sure that we have the right g^S. However, agent i's bids in g^S are still set to his own valuation and not to the lowest possible price. If u^S is greater than the current u*, lines 31 to 34 in cachedpausebid are responsible for setting the agent's payments so that it can achieve its maximum utility u^S. As in pausebid, we have chosen to distribute the payments in proportion to the agent's true valuation for each set of items. In the case that u^S is less than or equal to zero and the valuation that agent i has for the set of items S is lower than the current value of the bid in B for the same set of items, we remove the corresponding C-Table[S] entry since we know that it is not worthwhile to keep it in the cache table (cachedpausebid line 38).
The cachedpausebid function is called when k > 1 and returns the agent's myopic utility-maximizing bidset, if there is one. It assumes that W and B remain constant during its execution.
generatevalues(i, items)
1 for x ∈ items
2 do vi(x) = expd(.01)
3 for n ← 1 . . . (num-bids − items)
4 do s1, s2 ←Two random sets of items with values.
5 vi(s1 ∪ s2) = vi(s1) + vi(s2) + expd(.01)
Figure 5: Algorithm for the generation of random
value functions. expd(x) returns a random number
taken from an exponential distribution with mean
1/x.
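A possible Python rendering of this generator, assuming expd(x) corresponds to an exponential distribution with rate x (and hence mean 1/x):

import random

def generate_values(items, num_bids, rate=0.01):
    # Random superadditive value function in the spirit of Figure 5.
    v = {frozenset([x]): random.expovariate(rate) for x in items}
    for _ in range(num_bids - len(items)):
        s1, s2 = random.sample(list(v.keys()), 2)      # two random sets that already have values
        v[s1 | s2] = v[s1] + v[s2] + random.expovariate(rate)
    return v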
Figure 6: Average percentage of convergence (y-axis), which is the percentage of times that our algorithms converge to the revenue-maximizing solution, as a function of the number of items in the auction.
5. TEST AND COMPARISON
We have implemented both algorithms and performed a
series of experiments in order to determine how their solutions compare to the revenue-maximizing solution and how
their times compare with each other. In order to do our
tests we had to generate value functions for the agents.¹
The algorithm we used is shown in Figure 5. The type of valuations it generates corresponds to domains where a set
of agents must perform a set of tasks but there are cost
savings for particular agents if they can bundle together certain
subsets of tasks. For example, imagine a set of robots which
must pick up and deliver items to different locations. Since
each robot is at a different location and has different
abilities, each one will have different preferences over how to
bundle. Their costs for the item bundles are subadditive,
which means that their preferences are superadditive. The
first experiment we performed simply ensured the proper
¹ Note that we could not use CATS [6] because it generates
sets of bids for an indeterminate number of agents. It is as
if you were told the set of bids placed in a combinatorial
auction but not who placed each bid or even how many
people placed bids, and then asked to determine the value
function of every participant in the auction.
Figure 7: Average percentage of revenue from our algorithms relative to maximum revenue (y-axis) as a function of the number of items in the auction.
functioning of our algorithms. We then compared the
solutions found by both of them to the revenue-maximizing
solution as found by CASS when given a set of bids that
corresponds to the agents' true valuations. That is, for each
agent i and each set of items S for which vi(S) > 0 we
generated a bid. This set of bids was fed to CASS which
implements a centralized winner determination algorithm to find
the solution which maximizes revenue. Note, however, that
the revenue from the PAUSE auction on all the auctions is
always smaller than the revenue of the revenue-maximizing
solution when the agents bid their true valuations. Since
PAUSE uses English auctions, the final prices (roughly) represent the second-highest valuation, plus ε, for that set of items.
We fixed the number of agents to be 5 and we
experimented with different numbers of items, namely from 2 to
10. We ran both algorithms 100 times for each
combination. When we compared the solutions of our algorithms
to the revenue-maximizing solution, we realized that they
do not always find the same distribution of items as the
revenue-maximizing solution (as shown in Figure 6). The
cases where our algorithms failed to arrive at the
distribution of the revenue-maximizing solution are those where
there was a large gap between the first and second
valuation for a set (or sets) of items. If the revenue-maximizing
solution contains the bid (or bids) using these higher valuations, then it is impossible for the PAUSE auction to find this
solution because that bid (those bids) is never placed. For
example, if agent i has vi(1) = 1000 and the second highest
valuation for (1) is only 10 then i only needs to place a bid
of 11 in order to win that item. If the revenue-maximizing
solution requires that 1 be sold for 1000 then that solution
will never be found because that bid will never be placed.
We also found that the average percentage of times that our algorithms converge to the revenue-maximizing solution decreases as the number of items increases. For 2 items it is almost 100%, but it decreases by a little less than 1 percent as the number of items increases, so that this average percentage of convergence is around 90% for 10 items. In a few instances our algorithms find different solutions; this is due to the different ordering of the bids in the bids list, which makes them search in a different order.
Figure 8: Average number of expanded nodes (y-axis) as a function of the number of items in the auction.
We know that the revenue generated by the PAUSE auction is generally lower than the revenue of the revenue-maximizing solution, but how much lower? To answer this question we calculated a percentage representing the proportion of the revenue given by our algorithms relative to the revenue given by CASS. We found that the percentage of revenue of our algorithms increases on average by 2.7% as the number of items increases, as shown in Figure 7. However, we found that cachedpausebid generates a higher revenue than pausebid (4.3% higher on average), except for auctions with 2 items where both have about the same percentage. Again, this difference is produced by the order of the search. In the case of 2 items both algorithms produce on average a revenue proportion of 67.4%, while at the other extreme (10 items) cachedpausebid produced on average a revenue proportion of 91.5% and pausebid produced on average a revenue proportion of 87.7%.
The scalability of our algorithms can be determined by counting the number of nodes expanded in the search tree. For this we count, for each of our algorithms respectively, the number of times that pbsearch gets invoked for each call to pausebid and the number of times that cpbsearch gets invoked for each call to cachedpausebid. As expected, since this is an NP-Hard problem, the number of expanded nodes does grow exponentially with the number of items (as shown in Figure 8). However, we found that cachedpausebid outperforms pausebid, since it expands on average less than half the number of nodes. For example, the average number of nodes expanded for 2 items is zero for cachedpausebid while for pausebid it is 2; at the other extreme (10 items), cachedpausebid expands on average only 633 nodes while pausebid expands on average 1672 nodes, a difference of more than 1000 nodes.
Although the number of nodes expanded by our algorithms increases as a function of the number of items, the actual number of nodes is much smaller than the worst-case scenario of n^n, where n is the number of items. For example, for 10 items we expand slightly more than 10^3 nodes in the case of pausebid and fewer than that in the case of cachedpausebid, both of which are much smaller numbers than 10^10.
Figure 9: Average time in seconds that it takes to finish an auction (y-axis) as a function of the number of items in the auction.
Notice also
that our value generation algorithm (Figure 5) generates a number of bids that is exponential in the number of items, as might be expected in many situations. As such, these results do not support the conclusion that time grows exponentially with the number of items when the number of bids is independent of the number of items. We expect that both algorithms will grow exponentially as a function of the number of bids, but stay roughly constant as the number of items grows.
We wanted to make sure that fewer expanded nodes do indeed correspond to faster execution, especially since our algorithms execute different operations. We thus ran the same experiment with all the agents on the same machine, an Intel Centrino 2.0 GHz laptop PC with 1 GB of RAM and a 7200 RPM 60 GB hard drive, and calculated the average time it takes to finish an auction for each algorithm. As shown in Figure 9, cachedpausebid is faster than pausebid, and the difference in execution speed becomes even clearer as the number of items increases.
6. RELATED WORK
A lot of research has been done on various aspects of
combinatorial auctions. We recommend [2] for a good review.
However, the study of distributed winner determination
algorithms for combinatorial auctions is still relatively new.
One approach is given by the algorithms for distributing
the winner determination problem in combinatorial auctions
presented in [7], but these algorithms assume the
computational entities are the items being sold and thus end up
with a different type of distribution. The VSA algorithm
[3] is another way of performing distributed winner
determination in combinatorial auctions, but it assumes the bids
themselves perform the computation. This algorithm also
fails to converge to a solution for most cases. In [9] the
authors present a distributed mechanism for calculating VCG
payments in a mechanism design problem. Their
mechanism roughly amounts to having each agent calculate the
payments for two other agents and give these to a secure
central server which then checks to make sure results from
all pairs agree, otherwise a re-calculation is ordered. This
general idea, which they call the redundancy principle, could
also be applied to our problem but it requires the existence
of a secure center agent that everyone trusts. Another
interesting approach is given in [8] where the bidding agents
prioritize their bids, thus reducing the set of bids that the
centralized winner determination algorithm must consider,
making that problem easier. Finally, in the computation
procuring clock auction [1] the agents are given an ever-increasing percentage of the surplus achieved by their
proposed solution over the current best. As such, it assumes
the agents are impartial computational entities, not the set
of possible buyers as assumed by the PAUSE auction.
7. CONCLUSIONS
We believe that distributed solutions to the winner
determination problem should be studied as they offer a better fit
for some applications, as when, for example, agents do not
want to reveal their valuations to the auctioneer or when
we wish to distribute the computational load among the
bidders. The PAUSE auction is one of a few approaches
to decentralize the winner determination problem in
combinatorial auctions. With this auction, we can even envision
completely eliminating the auctioneer and, instead, having every agent perform the task of the auctioneer. However,
while PAUSE establishes the rules the bidders must obey, it
does not tell us how the bidders should calculate their bids.
We have presented two algorithms, pausebid and
cachedpausebid, that bidder agents can use to engage in a PAUSE
auction. Both algorithms implement a myopic utility
maximizing strategy that is guaranteed to find the bidset that
maximizes the agent's utility given the set of outstanding
best bids at any given time, without considering possible
future bids. Both algorithms find, most of the time, the
same distribution of items as the revenue-maximizing
solution. The cases where our algorithms failed to arrive at that
distribution are those where there was a large gap between
the first and second valuation for a set (or sets) of items.
As it is an NP-Hard problem, the running time of our
algorithms remains exponential but it is significantly better than
a full search. pausebid performs a branch and bound search
completely from scratch each time it is invoked.
cachedpausebid caches partial solutions and performs a branch and bound search only on the few portions affected by the changes in the bids between consecutive invocations. cachedpausebid has better performance since it explores fewer nodes (less than half) and is faster. As expected, the revenue generated by a PAUSE auction is lower than the revenue of a revenue-maximizing solution found by a centralized winner determination algorithm; however, we found that cachedpausebid generates on average 4.7% higher revenue than pausebid. We also found that the revenue generated by our algorithms increases as a function of the number of items in the auction.
Our algorithms have shown that it is feasible to implement
the complex coordination constraints supported by
combinatorial auctions without having to resort to a centralized
winner determination algorithm. Moreover, because of the
design of the PAUSE auction, the agents in the auction also
have an incentive to perform the required computation. Our
bidding algorithms can be used by any multiagent system
that would use combinatorial auctions for coordination but
would rather not implement a centralized auctioneer.
8. REFERENCES
[1] P. J. Brewer. Decentralized computation procurement
and computational robustness in a smart market.
Economic Theory, 13(1):41-92, January 1999.
[2] P. Cramton, Y. Shoham, and R. Steinberg, editors.
Combinatorial Auctions. MIT Press, 2006.
[3] Y. Fujishima, K. Leyton-Brown, and Y. Shoham.
Taming the computational complexity of
combinatorial auctions: Optimal and approximate
approaches. In Proceedings of the Sixteenth
International Joint Conference on Artificial
Intelligence, pages 548-553. Morgan Kaufmann
Publishers Inc., 1999.
[4] F. Kelly and R. Steinberg. A combinatorial auction
with multiple winners for universal service.
Management Science, 46(4):586-596, 2000.
[5] A. Land, S. Powell, and R. Steinberg. PAUSE: A
computationally tractable combinatorial auction. In
Cramton et al. [2], chapter 6, pages 139-157.
[6] K. Leyton-Brown, M. Pearson, and Y. Shoham.
Towards a universal test suite for combinatorial
auction algorithms. In Proceedings of the 2nd ACM
conference on Electronic commerce, pages 66-76.
ACM Press, 2000. http://cats.stanford.edu.
[7] M. V. Narumanchi and J. M. Vidal. Algorithms for
distributed winner determination in combinatorial
auctions. In LNAI volume of AMEC/TADA. Springer,
2006.
[8] S. Park and M. H. Rothkopf. Auctions with
endogenously determined allowable combinations.
Technical report, Rutgers Center for Operations
Research, January 2001. RRR 3-2001.
[9] D. C. Parkes and J. Shneidman. Distributed
implementations of Vickrey-Clarke-Groves auctions. In
Proceedings of the Third International Joint
Conference on Autonomous Agents and MultiAgent
Systems, pages 261-268. ACM, 2004.
[10] M. H. Rothkopf, A. Pekec, and R. M. Harstad.
Computationally manageable combinational auctions.
Management Science, 44(8):1131-1147, 1998.
[11] T. Sandholm. An algorithm for winner determination
in combinatorial auctions. Artificial Intelligence,
135(1-2):1-54, February 2002.
[12] T. Sandholm, S. Suri, A. Gilpin, and D. Levine.
CABOB: a fast optimal algorithm for winner
determination in combinatorial auctions. Management
Science, 51(3):374-391, 2005.
The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 701 | task and resource allocation;bidding algorithm;pause auction;progressive adaptive user selection environment;branch-on-bid tree;combinatorial auction;agent;coordination;search tree;branch and bound search;distributed allocation;combinatorial optimization problem;revenue-maximizing solution |
train_I-42 | A Complete Distributed Constraint Optimization Method For Non-Traditional Pseudotree Arrangements | Distributed Constraint Optimization (DCOP) is a general framework that can model complex problems in multi-agent systems. Several current algorithms that solve general DCOP instances, including ADOPT and DPOP, arrange agents into a traditional pseudotree structure. We introduce an extension to the DPOP algorithm that handles an extended set of pseudotree arrangements. Our algorithm correctly solves DCOP instances for pseudotrees that include edges between nodes in separate branches. The algorithm also solves instances with traditional pseudotree arrangements using the same procedure as DPOP. We compare our algorithm with DPOP using several metrics including the induced width of the pseudotrees, the maximum dimensionality of messages and computation, and the maximum sequential path cost through the algorithm. We prove that for some problem instances it is not possible to generate a traditional pseudotree using edge-traversal heuristics that will outperform a cross-edged pseudotree. We use multiple heuristics to generate pseudotrees and choose the best pseudotree in linear space-time complexity. For some problem instances we observe significant improvements in message and computation sizes compared to DPOP. | 1. INTRODUCTION
Many historical problems in the AI community can be
transformed into Constraint Satisfaction Problems (CSP). With the
advent of distributed AI, multi-agent systems became a popular way
to model the complex interactions and coordination required to
solve distributed problems. CSPs were originally extended to
distributed agent environments in [9]. Early domains for
distributed constraint satisfaction problems (DisCSP) included job
shop scheduling [1] and resource allocation [2]. Many domains
for agent systems, especially teamwork coordination, distributed
scheduling, and sensor networks, involve overly constrained
problems that are difficult or impossible to satisfy for every constraint.
Recent approaches to solving problems in these domains rely
on optimization techniques that map constraints into multi-valued
utility functions. Instead of finding an assignment that satisfies all
constraints, these approaches find an assignment that produces a
high level of global utility. This extension to the original DisCSP
approach has become popular in multi-agent systems, and has been
labeled the Distributed Constraint Optimization Problem (DCOP)
[1].
Current algorithms that solve complete DCOPs use two main
approaches: search and dynamic programming. Search based
algorithms that originated from DisCSP typically use some form of
backtracking [10] or bounds propagation, as in ADOPT [3].
Dynamic programming based algorithms include DPOP and its
extensions [5, 6, 7]. To date, both categories of algorithms arrange
agents into a traditional pseudotree to solve the problem.
It has been shown in [6] that any constraint graph can be mapped
into a traditional pseudotree. However, it was also shown that
finding the optimal pseudotree was NP-Hard. We began to
investigate the performance of traditional pseudotrees generated by
current edge-traversal heuristics. We found that these heuristics
often produced little parallelism as the pseudotrees tended to have
high depth and low branching factors. We suspected that there
could be other ways to arrange the pseudotrees that would
provide increased parallelism and smaller message sizes. After
exploring these other arrangements we found that cross-edged
pseudotrees provide shorter depths and higher branching factors than
the traditional pseudotrees. Our hypothesis was that these
cross-edged pseudotrees would outperform traditional pseudotrees for
some problem types.
In this paper we introduce an extension to the DPOP algorithm
that handles an extended set of pseudotree arrangements which
include cross-edged pseudotrees. We begin with a definition of
DCOP, traditional pseudotrees, and cross-edged pseudotrees. We
then provide a summary of the original DPOP algorithm and
introduce our DCPOP algorithm. We discuss the complexity of our
algorithm as well as the impact of pseudotree generation
heuristics. We then show that our Distributed Cross-edged Pseudotree
Optimization Procedure (DCPOP) performs significantly better in
practice than the original DPOP algorithm for some problem
instances. We conclude with a selection of ideas for future work and
extensions for DCPOP.
2. PROBLEM DEFINITION
DCOP has been formalized in slightly different ways in recent
literature, so we will adopt the definition as presented in [6]. A
Distributed Constraint Optimization Problem with n nodes and m
constraints consists of the tuple < X, D, U > where:
• X = {x1,..,xn} is a set of variables, each one assigned to a
unique agent
• D = {d1,..,dn} is a set of finite domains for each variable
• U = {u1,..,um} is a set of utility functions such that each
function involves a subset of variables in X and defines a
utility for each combination of values among these variables
An optimal solution to a DCOP instance consists of an assignment
of values in D to X such that the sum of utilities in U is maximal.
Problem domains that require minimum cost instead of maximum
utility can map costs into negative utilities. The utility functions
represent soft constraints but can also represent hard constraints
by using arbitrarily large negative values. For this paper we only
consider binary utility functions involving two variables. Higher
order utility functions can be modeled with minor changes to the
algorithm, but they also substantially increase the complexity.
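As an illustration of this definition, a toy Python container for a binary DCOP instance might look as follows; the class and method names are our own and are not part of any DCOP library.

from typing import Callable, Dict, List, Tuple

class DCOP:
    # A toy container for a binary DCOP instance <X, D, U> (illustrative only).
    def __init__(self):
        self.domains: Dict[str, List[int]] = {}   # D: variable -> finite domain
        self.utilities: List[Tuple[str, str, Callable[[int, int], float]]] = []  # U: binary utilities

    def add_variable(self, name: str, domain: List[int]) -> None:
        self.domains[name] = domain

    def add_utility(self, x: str, y: str, u: Callable[[int, int], float]) -> None:
        self.utilities.append((x, y, u))

    def total_utility(self, assignment: Dict[str, int]) -> float:
        return sum(u(assignment[x], assignment[y]) for x, y, u in self.utilities)

# Example: two variables with a soft "prefer equal values" constraint.
problem = DCOP()
problem.add_variable("x1", [0, 1])
problem.add_variable("x2", [0, 1])
problem.add_utility("x1", "x2", lambda a, b: 10 if a == b else 2)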
2.1 Traditional Pseudotrees
Pseudotrees are a common structure used in search procedures
to allow parallel processing of independent branches. As defined in
[6], a pseudotree is an arrangement of a graph G into a rooted tree
T such that vertices in G that share an edge are in the same branch
in T. A back-edge is an edge between a node X and any node which
lies on the path from X to the root (excluding X's parent). Figure 1 shows a pseudotree with four nodes, three edges (A-B, B-C, B-D), and one back-edge (A-C). Also defined in [6] are four types of relationships that can exist between nodes in a pseudotree:
• P(X) - the parent of a node X: the single node higher in the
pseudotree that is connected to X directly through a tree edge
• C(X) - the children of a node X: the set of nodes lower in
the pseudotree that are connected to X directly through tree
edges
• PP(X) - the pseudo-parents of a node X: the set of nodes
higher in the pseudotree that are connected to X directly
through back-edges (In Figure 1, A = PP(C))
• PC(X) - the pseudo-children of a node X: the set of nodes
lower in the pseudotree that are connected to X directly
through back-edges (In Figure 1, C = PC(A))
Figure 1: A traditional pseudotree. Solid line edges
represent parent-child relationships and the dashed line represents
a pseudo-parent-pseudo-child relationship.
Figure 2: A cross-edged pseudotree. Solid line edges represent
parent-child relationships, the dashed line represents a
pseudoparent-pseudo-child relationship, and the dotted line
represents a branch-parent-branch-child relationship. The bolded
node, B, is the merge point for node E.
2.2 Cross-edged Pseudotrees
We define a cross-edge as an edge from node X to a node Y that is
above X but not in the path from X to the root. A cross-edged
pseudotree is a traditional pseudotree with the addition of cross-edges.
Figure 2 shows a cross-edged pseudotree with a cross-edge (D-E).
In a cross-edged pseudotree we designate certain edges as primary.
The set of primary edges defines a spanning tree of the nodes. The
parent, child, pseudo-parent, and pseudo-child relationships from
the traditional pseudotree are now defined in the context of this
primary edge spanning tree. This definition also yields two additional
types of relationships that may exist between nodes:
• BP(X) - the branch-parents of a node X: the set of nodes
higher in the pseudotree that are connected to X but are not
in the primary path from X to the root (In Figure 2, D =
BP(E))
• BC(X) - the branch-children of a node X: the set of nodes
lower in the pseudotree that are connected to X but are not in
any primary path from X to any leaf node (In Figure 2, E =
BC(D))
2.3 Pseudotree Generation
Current algorithms usually have a pre-execution phase to
generate a traditional pseudotree from a general DCOP instance. Our
DCPOP algorithm generates a cross-edged pseudotree in the same
fashion. First, the DCOP instance < X, D, U > translates directly
into a graph with X as the set of vertices and an edge for each pair
of variables represented in U. Next, various heuristics are used to
arrange this graph into a pseudotree. One common heuristic is to
perform a guided depth-first search (DFS) as the resulting traversal
is a pseudotree, and a DFS can easily be performed in a distributed
fashion. We define an edge-traversal based method as any method
that produces a pseudotree in which all parent/child pairs share an
edge in the original graph. This includes DFS, breadth-first search,
and best-first search based traversals. Our heuristics that generate
cross-edged pseudotrees use a distributed best-first search traversal.
3. DPOP ALGORITHM
The original DPOP algorithm operates in three main phases. The
first phase generates a traditional pseudotree from the DCOP
instance using a distributed algorithm. The second phase joins utility
hypercubes from children and the local node and propagates them
towards the root. The third phase chooses an assignment for each
domain in a top down fashion beginning with the agent at the root
node.
The complexity of DPOP depends on the size of the largest
computation and utility message during phase two. It has been shown
that this size directly corresponds to the induced width of the
pseudotree generated in phase one [6]. DPOP uses polynomial time
heuristics to generate the pseudotree since finding the minimum
induced width pseudotree is NP-hard. Several distributed
edgetraversal heuristics have been developed to find low width
pseudotrees [8]. At the end of the first phase, each agent knows its
parent, children, pseudo-parents, and pseudo-children.
3.1 Utility Propagation
Agents located at leaf nodes in the pseudotree begin the process
by calculating a local utility hypercube. This hypercube at node
X contains summed utilities for each combination of values in the
domains for P(X) and PP(X). This hypercube has dimensional size
equal to the number of pseudo-parents plus one. A message
containing this hypercube is sent to P(X). Agents located at non-leaf
nodes wait for all messages from children to arrive. Once the agent
at node Y has all utility messages, it calculates its local utility
hypercube which includes domains for P(Y), PP(Y), and Y. The local
utility hypercube is then joined with all of the hypercubes from
the child messages. At this point all utilities involving node Y are
known, and the domain for Y may be safely eliminated from the
joined hypercube. This elimination process chooses the best utility
over the domain of Y for each combination of the remaining
domains. A message containing this hypercube is now sent to P(Y).
The dimensional size of this hypercube depends on the number of
overlapping domains in received messages and the local utility
hypercube. This dynamic programming based propagation phase
continues until the agent at the root node of the pseudotree has received
all messages from its children.
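A small sketch of the join and eliminate steps, assuming each utility hypercube is stored as a NumPy array together with an ordered list naming its axes (this representation is our own simplification, not part of DPOP itself):

import numpy as np

def join(cube_a, axes_a, cube_b, axes_b):
    # Join two utility hypercubes by aligning shared axes and adding utilities.
    all_axes = axes_a + [ax for ax in axes_b if ax not in axes_a]
    def expand(cube, axes):
        # Reorder to match all_axes and insert size-1 dimensions for missing axes.
        shape = [cube.shape[axes.index(ax)] if ax in axes else 1 for ax in all_axes]
        order = [axes.index(ax) for ax in all_axes if ax in axes]
        return np.transpose(cube, order).reshape(shape)
    return expand(cube_a, axes_a) + expand(cube_b, axes_b), all_axes

def eliminate(cube, axes, var):
    # Project out one variable by keeping the best utility over its domain.
    i = axes.index(var)
    return cube.max(axis=i), axes[:i] + axes[i + 1:]

Applied bottom-up, these two operations reproduce the propagation described above: join the child hypercubes with the local one, then eliminate the node's own domain before sending the result to the parent.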
3.2 Value Propagation
Value propagation begins when the agent at the root node Z has
received all messages from its children. Since Z has no parents
or pseudo-parents, it simply combines the utility hypercubes
received from its children. The combined hypercube contains only
values for the domain for Z. At this point the agent at node Z
simply chooses the assignment for its domain that has the best utility.
A value propagation message with this assignment is sent to each
node in C(Z). Each other node then receives a value propagation
message from its parent and chooses the assignment for its domain
that has the best utility given the assignments received in the
message. The node adds its domain assignment to the assignments it
received and passes the set of assignments to its children. The
algorithm is complete when all nodes have chosen an assignment for
their domain.
4. DCPOP ALGORITHM
Our extension to the original DPOP algorithm, shown in
Algorithm 1, shares the same three phases. The first phase generates the
cross-edged pseudotree for the DCOP instance. The second phase
merges branches and propagates the utility hypercubes. The third
phase chooses assignments for domains at branch merge points and
in a top down fashion, beginning with the agent at the root node.
For the first phase we generate a pseudotree using several
distributed heuristics and select the one with lowest overall
complexity. The complexity of the computation and utility message size
in DCPOP does not directly correspond to the induced width of
the cross-edged pseudotree. Instead, we use a polynomial time
method for calculating the maximum computation and utility
message size for a given cross-edged pseudotree. A description of
this method and the pseudotree selection process appears in
Section 5. At the end of the first phase, each agent knows its
parent, children, pseudo-parents, pseudo-children, branch-parents, and
branch-children.
4.1 Merging Branches and Utility
Propagation
In the original DPOP algorithm a node X only had utility
functions involving its parent and its pseudo-parents. In DCPOP, a node
X is allowed to have a utility function involving a branch-parent.
The concept of a branch can be seen in Figure 2 with node E
representing our node X. The two distinct paths from node E to node
B are called branches of E. The single node where all branches of
E meet is node B, which is called the merge point of E.
Agents with nodes that have branch-parents begin by sending
a utility propagation message to each branch-parent. This
message includes a two dimensional utility hypercube with domains for
the node X and the branch-parent BP(X). It also includes a branch
information structure which contains the origination node of the
branch, X, the total number of branches originating from X, and the
number of branches originating from X that are merged into a
single representation by this branch information structure (this
number starts at 1). Intuitively when the number of merged branches
equals the total number of originating branches, the algorithm has
reached the merge point for X. In Figure 2, node E sends a utility
propagation message to its branch-parent, node D. This message
has dimensions for the domains of E and D, and includes branch
information with an origin of E, 2 total branches, and 1 merged
branch.
As in the original DPOP utility propagation phase, an agent at
leaf node X sends a utility propagation message to its parent. In
DCPOP this message contains dimensions for the domains of P(X)
and PP(X). If node X also has branch-parents, then the utility
propagation message also contains a dimension for the domain of X,
and will include a branch information structure. In Figure 2, node
E sends a utility propagation message to its parent, node C. This
message has dimensions for the domains of E and C, and includes
branch information with an origin of E, 2 total branches, and 1
merged branch.
When a node Y receives utility propagation messages from all of
its children and branch-children, it merges any branches with the
same origination node X. The merged branch information structure
accumulates the number of merged branches for X. If the
cumulative total number of merged branches equals the total number of
branches, then Y is the merge point for X. This means that the
utility hypercubes present at Y contain all information about the
valuations for utility functions involving node X. In addition to the
typical elimination of the domain of Y from the utility hypercubes,
we can now safely eliminate the domain of X from the utility
hypercubes. To illustrate this process, we will examine what happens
in the second phase for node B in Figure 2.
In the second phase Node B receives two utility propagation
messages. The first comes from node C and includes dimensions
for domains E, B, and A. It also has a branch information structure
with origin of E, 2 total branches, and 1 merged branch. The second
comes from node D and includes dimensions for domains E and B.
It also has a branch information structure with origin of E, 2 total
branches, and 1 merged branch. Node B then merges the branch
information structures from both messages because they have the
same origination, node E. Since the number of merged branches
originating from E is now 2 and the total branches originating from
E is 2, node B now eliminates the dimensions for domain E. Node
B also eliminates the dimension for its own domain, leaving only
information about domain A. Node B then sends a utility
propagation message to node A, containing only one dimension for the
domain of A.
Although not possible in DPOP, this method of utility
propagation and dimension elimination may produce hypercubes at node Y
that do not share any domains. In DCPOP we do not join domain
independent hypercubes, but instead may send multiple hypercubes
in the utility propagation message sent to the parent of Y. This lazy
approach to joins helps to reduce message sizes.
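A minimal sketch of the branch information structure and of the merge test, using illustrative field names of our own:

from dataclasses import dataclass

@dataclass
class BranchInfo:
    origin: str        # node whose branches these are
    total: int         # total number of branches originating at `origin`
    merged: int = 1    # branches represented by this record so far

def merge_branch_info(records):
    # Combine branch records that share an origin; an origin whose merged count
    # reaches its total has been fully merged, so its domain can be eliminated here.
    by_origin = {}
    for r in records:
        if r.origin in by_origin:
            by_origin[r.origin].merged += r.merged
        else:
            by_origin[r.origin] = BranchInfo(r.origin, r.total, r.merged)
    eliminable = [o for o, r in by_origin.items() if r.merged == r.total]
    return list(by_origin.values()), eliminable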
4.2 Value Propagation
As in DPOP, value propagation begins when the agent at the root
node Z has received all messages from its children. At this point
the agent at node Z chooses the assignment for its domain that has
the best utility. If Z is the merge point for the branches of some
node X, Z will also choose the assignment for the domain of X.
Thus any node that is a merge point will choose assignments for
a domain other than its own. These assignments are then passed
down the primary edge hierarchy. If node X in the hierarchy has
branch-parents, then the value assignment message from P(X) will
contain an assignment for the domain of X. Every node in the
hierarchy adds any assignments it has chosen to the ones it received
and passes the set of assignments to its children. The algorithm is
complete when all nodes have chosen or received an assignment for
their domain.
4.3 Proof of Correctness
We will prove the correctness of DCPOP by first noting that
DCPOP fully extends DPOP and then examining the two cases for
value assignment in DCPOP. Given a traditional pseudotree as
input, the DCPOP algorithm execution is identical to DPOP. Using a
traditional pseudotree arrangement no nodes have branch-parents
or branch-children since all edges are either back-edges or tree
edges. Thus the DCPOP algorithm using a traditional pseudotree
sends only utility propagation messages that contain domains
belonging to the parent or pseudo-parents of a node. Since no node
has any branch-parents, no branches exist, and thus no node serves
as a merge point for any other node. Thus all value propagation
assignments are chosen at the node of the assignment domain.
For DCPOP execution with cross-edged pseudotrees, some
nodes serve as merge points. We note that any node X that is not a
merge point assigns its value exactly as in DPOP. The local utility
hypercube at X contains domains for X, P(X), PP(X), and BC(X).
As in DPOP the value assignment message received at X includes
the values assigned to P(X) and PP(X). Also, since X is not a merge
point, all assignments to BC(X) must have been calculated at merge
points higher in the tree and are in the value assignment message
from P(X). Thus after eliminating domains for which assignments
are known, only the domain of X is left. The agent at node X can
now correctly choose the assignment with maximum utility for its
own domain.
If node X is a merge point for some branch-child Y, we know
that X must be a node along the path from Y to the root, and from
P(Y) and all BP(Y) to the root. From the algorithm, we know that
Y necessarily has all information from C(Y), PC(Y), and BC(Y)
since it waits for their messages. Node X has information about all
nodes below it in the tree, which would include Y, P(Y), BP(Y),
and those PP(Y) that are below X in the tree. For any PP(Y) above
X in the tree, X receives the assignment for the domain of PP(Y)
in the value assignment message from P(X). Thus X has utility
information about all of the utility functions of which Y is a part.
By eliminating domains included in the value assignment message,
node X is left with a local utility hypercube with domains for X and
Y. The agent at node X can now correctly choose the assignments
with maximum utility for the domains of X and Y.
4.4 Complexity Analysis
The first phase of DCPOP sends one message to each P(X),
PP(X), and BP(X). The second phase sends one value assignment
message to each C(X). Thus, DCPOP produces a linear number of
messages with respect to the number of edges (utility functions) in
the cross-edged pseudotree and the original DCOP instance. The
actual complexity of DCPOP depends on two additional
measurements: message size and computation size.
Message size and computation size in DCPOP depend on the
number of overlapping branches as well as the number of
overlapping back-edges. It was shown in [6] that the number of
overlapping back-edges is equal to the induced width of the pseudotree. In
a poorly constructed cross-edged pseudotree, the number of
overlapping branches at node X can be as large as the total number
of descendants of X. Thus, the total message size in DCPOP in a
poorly constructed instance can be space-exponential in the total
number of nodes in the graph. However, in practice a well
constructed cross-edged pseudotree can achieve much better results.
Later we address the issue of choosing well constructed
crossedged pseudotrees from a set.
We introduce an additional measurement of the maximum
sequential path cost through the algorithm. This measurement
directly relates to the maximum amount of parallelism achievable by
the algorithm. To take this measurement we first store the total
computation size for each node during phase two and three. This
computation size represents the number of individual accesses to a
value in a hypercube at each node. For example, a join between two
domains of size 4 costs 4 ∗ 4 = 16. Two directed acyclic graphs
(DAG) can then be drawn; one with the utility propagation
messages as edges and the phase two costs at nodes, and the other with
value assignment messages and the phase three costs at nodes. The
maximum sequential path cost is equal to the sum of the longest
path on each DAG from the root to any leaf node.
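For example, the path-cost measure over one of these DAGs could be computed with a sketch like the following, where children maps each node to its successors in the DAG and cost holds the per-node computation size (both names are assumptions of this illustration):

def max_sequential_path_cost(root, children, cost):
    # Cost of the most expensive root-to-leaf path: each node adds its local
    # computation size (number of hypercube cell accesses) to the path total.
    below = [max_sequential_path_cost(c, children, cost) for c in children.get(root, [])]
    return cost[root] + (max(below) if below else 0)

# The overall measure is the longest path in the phase-two (UTIL) DAG plus the
# longest path in the phase-three (VALUE) DAG, each computed from its own root.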
5. HEURISTICS
In our assessment of complexity in DCPOP we focused on the
worst case possibly produced by the algorithm. We acknowledge
Algorithm 1 DCPOP Algorithm
 1: DCPOP(X; D; U)
    Each agent Xi executes:
    Phase 1: pseudotree creation
 2:   elect leader from all Xj ∈ X
 3:   elected leader initiates pseudotree creation
 4:   afterwards, Xi knows P(Xi), PP(Xi), BP(Xi), C(Xi), BC(Xi) and PC(Xi)
    Phase 2: UTIL message propagation
 5:   if |BP(Xi)| > 0 then
 6:     BRANCH_Xi ← |BP(Xi)| + 1
 7:     for all Xk ∈ BP(Xi) do
 8:       UTIL_Xi(Xk) ← Compute utils(Xi, Xk)
 9:       Send message(Xk, UTIL_Xi(Xk), BRANCH_Xi)
10:   if |C(Xi)| = 0 (i.e., Xi is a leaf node) then
11:     UTIL_Xi(P(Xi)) ← Compute utils(P(Xi), PP(Xi)) for all PP(Xi)
12:     Send message(P(Xi), UTIL_Xi(P(Xi)), BRANCH_Xi)
13:     Send message(PP(Xi), empty UTIL, empty BRANCH) to all PP(Xi)
14:   activate UTIL Message handler()
    Phase 3: VALUE message propagation
15:   activate VALUE Message handler()
END ALGORITHM
UTIL Message handler(Xk, UTIL_Xk(Xi), BRANCH_Xk)
16: store UTIL_Xk(Xi), BRANCH_Xk(Xi)
17: if UTIL messages from all children and branch-children have arrived then
18:   for all Bj ∈ BRANCH(Xi) do
19:     if Bj is merged then
20:       join all hypercubes where Bj ∈ UTIL(Xi)
21:       eliminate Bj from the joined hypercube
22:   if P(Xi) == null (that means Xi is the root) then
23:     v*_i ← Choose optimal(null)
24:     Send VALUE(Xi, v*_i) to all C(Xi)
25:   else
26:     UTIL_Xi(P(Xi)) ← Compute utils(P(Xi), PP(Xi))
27:     Send message(P(Xi), UTIL_Xi(P(Xi)), BRANCH_Xi(P(Xi)))
VALUE Message handler(VALUE_{Xi,P(Xi)})
28: add all Xk ← v*_k ∈ VALUE_{Xi,P(Xi)} to agent view
29: Xi ← v*_i = Choose optimal(agent view)
30: Send VALUE_{Xl,Xi} to all Xl ∈ C(Xi)
that in real world problems the generation of the pseudotree has
a significant impact on the actual performance. The problem of
finding the best pseudotree for a given DCOP instance is NP-Hard.
Thus a heuristic is used for generation, and the performance of the
algorithm depends on the pseudotree found by the heuristic. Some
previous research focused on finding heuristics to generate good
pseudotrees [8]. While we have developed some heuristics that
generate good cross-edged pseudotrees for use with DCPOP, our
focus has been to use multiple heuristics and then select the best
pseudotree from the generated pseudotrees.
We consider only heuristics that run in polynomial time with
respect to the number of nodes in the original DCOP instance. The
actual DCPOP algorithm has worst case exponential complexity,
but we can calculate the maximum message size, computation size,
and sequential path cost for a given cross-edged pseudotree in
linear space-time complexity. To do this, we simply run the algorithm
without attempting to calculate any of the local utility hypercubes
or optimal value assignments. Instead, messages include
dimensional and branch information but no utility hypercubes.
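A simplified, DPOP-style sketch of such a dry run is given below; it propagates only sets of domain names and ignores DCPOP's branch merging, so it is an approximation of the measurement rather than the exact procedure. The parameter names (children, local_dims, domain_size) are our own.

def dry_run(node, children, local_dims, domain_size):
    # Estimate per-node computation and message dimensionality for a candidate
    # pseudotree by propagating only sets of domains, never utility values.
    dims = set(local_dims(node))                       # domains of P(node) and PP(node)
    max_msg = max_comp = 0
    for child in children(node):
        child_dims, m, c = dry_run(child, children, local_dims, domain_size)
        dims |= child_dims
        max_msg, max_comp = max(max_msg, m), max(max_comp, c)
    comp = 1
    for d in dims | {node}:                            # local join includes node's own domain
        comp *= domain_size(d)
    out_dims = dims - {node}                           # node's domain is eliminated before sending
    msg = 1
    for d in out_dims:
        msg *= domain_size(d)
    return out_dims, max(max_msg, msg), max(max_comp, comp)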
After each heuristic completes its generation of a pseudotree, we
execute the measurement procedure and propagate the
measurement information up to the chosen root in that pseudotree. The
root then broadcasts the total complexity for that heuristic to all
nodes. After all heuristics have had a chance to complete, every
node knows which heuristic produced the best pseudotree. Each
node then proceeds to begin the DCPOP algorithm using its
knowledge of the pseudotree generated by the best heuristic.
The heuristics used to generate traditional pseudotrees perform
a distributed DFS traversal. The general distributed algorithm uses
a token passing mechanism and a linear number of messages.
Improved DFS based heuristics use a special procedure to choose the
root node, and also provide an ordering function over the neighbors
of a node to determine the order of path recursion. The DFS based
heuristics used in our experiments come from the work done in [4,
8].
5.1 The best-first cross-edged pseudotree heuristic
The heuristics used to generate cross-edged pseudotrees
perform a best-first traversal. A general distributed best-first
algorithm for node expansion is presented in Algorithm 2. An
evaluation function at each node provides the values that are used to
determine the next best node to expand. Note that in this
algorithm each node only exchanges its best value with its neighbors.
In our experiments we used several evaluation functions that took
as arguments an ordered list of ancestors and a node, which
contains a list of neighbors (with each neighbor"s placement depth in
the tree if it was placed). From these we can calculate
branch-parents, branch-children, and unknown relationships for a potential node placement. The best overall function calculated the value as ancestors − (branch-parents + branch-children), with the number of unknown relationships used as a tiebreaker. After completion
each node has knowledge of its parent and ancestors, so it can
easily determine which connected nodes are pseudo-parents,
branch-parents, pseudo-children, and branch-children.
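A hedged approximation of such an evaluation function is sketched below; the neighbor and placement bookkeeping is our own simplification of the distributed setting.

def evaluate(ancestors, candidate, placed):
    # Reward placements with many ancestors and penalize edges that would become
    # cross-edges; unknown relationships serve as the tiebreaker.
    ancestor_set = set(ancestors)
    cross = sum(1 for nbr in candidate.neighbors
                if nbr in placed and nbr not in ancestor_set)   # would-be branch-parents
    unknown = sum(1 for nbr in candidate.neighbors if nbr not in placed)
    return (len(ancestors) - cross, unknown)   # tuples compare lexicographically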
The complexity of the best-first traversal depends on the
complexity of the evaluation function. Assuming a complexity of O(V )
for the evaluation function, which is the case for our best
overall function, the best-first traversal is O(V · E), which is at worst O(n^3). For each v ∈ V we perform a place operation, and find the
next node to place using the getBestNeighbor operation. The place
operation is at most O(V ) because of the sent messages.
Finding the next node uses recursion and traverses only already placed
Algorithm 2 Distributed Best-First Search Algorithm
root ← electedleader
next(root, ∅)
place(node, parent)
node.parent ← parent
node.ancestors ← parent.ancestors ∪ parent
send placement message (node, node.ancestors) to all
neighbors of node
next(current, previous)
if current is not placed then
place(current, previous)
next(current, ∅)
else
best ← getBestNeighbor(current, previous)
if best = ∅ then
if previous = ∅ then
terminate, all nodes are placed
next(previous, ∅)
else
next(best, current)
getBestNeighbor(current, previous)
best ← ∅; score ← 0
for all n ∈ current.neighbors do
if n! = previous then
if n is placed then
nscore ← getBestNeighbor(n, current)
else
nscore ← evaluate(current, n)
if nscore > score then
score ← nscore
best ← n
return best, score
nodes, so it has O(V) recursions. Each recursion performs a recursive getBestNeighbor operation that traverses all placed nodes and their neighbors. This operation is O(V · E), but results can be cached using only O(V) space at each node. Thus we have O(V · (V + V + V · E)) = O(V^2 · E). If we are smart about evaluating local changes when each node receives placement messages from its neighbors and we cache the results, the getBestNeighbor operation is only O(E). This increases the complexity of the place operation, but for all placements the total complexity is only O(V · E). Thus we have an overall complexity of O(V · E + V · (V + E)) = O(V · E).
6. COMPARISON OF COMPLEXITY IN
DPOP AND DCPOP
We have already shown that given the same input, DCPOP
performs the same as DPOP. We also have shown that we can
accurately predict performance of a given pseudotree in linear
spacetime complexity. If we use a constant number of heuristics to
generate the set of pseudotrees, we can choose the best pseudotree in
linear space-time complexity. We will now show that there exists
a DCOP instance for which a cross-edged pseudotree outperforms
all possible traditional pseudotrees (based on edge-traversal
heuristics).
In Figure 3(a) we have a DCOP instance with six nodes. This
is a bipartite graph with each partition fully connected to the other
(a) (b) (c)
Figure 3: (a) The DCOP instance (b) A traditional pseudotree
arrangement for the DCOP instance (c) A cross-edged
pseudotree arrangement for the DCOP instance
partition. In Figure 3(b) we see a traditional pseudotree
arrangement for this DCOP instance. It is easy to see that any
edge-traversal based heuristic cannot expand two nodes from the same
partition in succession. We also see that no node can have more
than one child because any such arrangement would be an invalid
pseudotree. Thus any traditional pseudotree arrangement for this
DCOP instance must take the form of Figure 3(b). We can see that
the back-edges F-B and F-A overlap node C. Node C also has a
parent E, and a back-edge with D. Using the original DPOP
algorithm (or DCPOP since they are identical in this case), we find that
the computation at node C involves five domains: A, B, C, D, and
E.
In contrast, the cross-edged pseudotree arrangement in
Figure 3(c) requires only a maximum of four domains in any
computation during DCPOP. Since node A is the merge point for branches
from both B and C, we can see that each of the nodes D, E, and F
has two overlapping branches. In addition, each of these nodes has
node A as its parent. Using the DCPOP algorithm we find that the
computation at node D (or E or F) involves four domains: A, B, C,
and D (or E or F).
Since no better traditional pseudotree arrangement can be
created using an edge-traversal heuristic, we have shown that DCPOP
can outperform DPOP even if we use the optimal pseudotree found
through edge-traversal. We acknowledge that pseudotree
arrangements that allow parent-child relationships without an actual
constraint can solve the problem in Figure 3(a) with maximum
computation size of four domains. However, current heuristics used
with DPOP do not produce such pseudotrees, and such a heuristic
would be difficult to distribute since each node would require
information about nodes with which it has no constraint. Also, while we
do not prove it here, cross-edged pseudotrees can produce smaller
message sizes than such pseudotrees even if the computation size
is similar. In practice, since finding the best pseudotree
arrangement is NP-Hard, we find that heuristics that produce cross-edged
pseudotrees often produce significantly smaller computation and
message sizes.
7. EXPERIMENTAL RESULTS
Existing performance metrics for DCOP algorithms include the
total number of messages, synchronous clock cycles, and message
size. We have already shown that the total number of messages is
linear with respect to the number of constraints in the DCOP
instance. We also introduced the maximum sequential path cost (PC)
as a measurement of the maximum amount of parallelism
achievable by the algorithm. The maximum sequential path cost is equal
to the sum of the computations performed on the longest path from
the root to any leaf node. We also include as metrics the
maximum computation size in number of dimensions (CD) and
maximum message size in number of dimensions (MD). To analyze the
relative complexity of a given DCOP instance, we find the
minimum induced width (IW) of any traditional pseudotree produced
by a heuristic for the original DPOP.
7.1 Generic DCOP instances
For our initial tests we randomly generated two sets of problems
with 3000 cases in each. Each problem was generated by
assigning a random number (picked from a range) of constraints to each
variable. The generator then created binary constraints until each
variable reached its maximum number of constraints. The first set
uses 20 variables, and the best DPOP IW ranged from 1 to 16 with
an average of 8.5. The second set uses 100 variables, and the best
DPOP IW ranged from 2 to 68 with an average of 39.3. Since most
of the problems in the second set were too complex to actually
compute the solution, we took measurements of the metrics using the
techniques described earlier in Section 5 without actually solving
the problem. Results are shown for the first set in Table 1 and for
the second set in Table 2.
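For concreteness, the problem generator can be sketched roughly as follows in Python. This is a simplified version; the exact constraint ranges and any retry logic of the actual generator are not reproduced here.

import random

def random_binary_dcop(num_vars, min_con, max_con):
    # Assign each variable a target number of constraints, then add
    # random binary constraints while both endpoints are below target.
    target = {v: random.randint(min_con, max_con) for v in range(num_vars)}
    degree = {v: 0 for v in range(num_vars)}
    pairs = [(i, j) for i in range(num_vars) for j in range(i + 1, num_vars)]
    random.shuffle(pairs)
    edges = []
    for i, j in pairs:
        if degree[i] < target[i] and degree[j] < target[j]:
            edges.append((i, j))
            degree[i] += 1
            degree[j] += 1
    return edges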
For the two problem sets we split the cases into low density and
high density categories. Low density cases consist of those
problems that have a best DPOP IW less than or equal to half of the
total number of nodes (e.g. IW ≤ 10 for the 20 node problems
and IW ≤ 50 for the 100 node problems). High density problems
consist of the remainder of the problem sets.
In both Table 1 and Table 2 we have listed performance
metrics for the original DPOP algorithm, the DCPOP algorithm using
only cross-edged pseudotrees (DCPOP-CE), and the DCPOP
algorithm using traditional and cross-edged pseudotrees (DCPOP-All).
The pseudotrees used for DPOP were generated using 5
heuristics: DFS, DFS MCN, DFS CLIQUE MCN, DFS MCN DSTB,
and DFS MCN BEC. These are all versions of the guided DFS
traversal discussed in Section 5. The cross-edged pseudotrees used
for DCPOP-CE were generated using 5 heuristics: MCN, LCN,
MCN A-B, LCN A-B, and LCSG A-B. These are all versions of
the best-first traversal discussed in Section 5.
For both DPOP and DCPOP-CE we chose the best pseudotree
produced by their respective 5 heuristics for each problem in the
set. For DCPOP-All we chose the best pseudotree produced by all
10 heuristics for each problem in the set. For the CD and MD
metrics the value shown is the average number of dimensions. For the
PC metric the value shown is the natural logarithm of the
maximum sequential path cost (since the actual value grows
exponentially with the complexity of the problem).
The final row in both tables is a measurement of improvement
of DCPOP-All over DPOP. For the CD and MD metrics the value
shown is a reduction in number of dimensions. For the PC metric
the value shown is a percentage reduction in the maximum
sequential path cost (% = (DPOP − DCPOP) / DCPOP × 100). Notice that
DCPOP-All outperforms DPOP on all metrics. This logically follows from
our earlier assertion that given the same input, DCPOP performs
exactly the same as DPOP. Thus given the choice between the
pseudotrees produced by all 10 heuristics, DCPOP-All will always outperform DPOP.
Low Density High Density
Algorithm CD MD PC CD MD PC
DPOP 7.81 6.81 3.78 13.34 12.34 5.34
DCPOP-CE 7.94 6.73 3.74 12.83 11.43 5.07
DCPOP-All 7.62 6.49 3.66 12.72 11.36 5.05
Improvement 0.18 0.32 13% 0.62 0.98 36%
Table 1: 20 node problems
Low Density High Density
Algorithm CD MD PC CD MD PC
DPOP 33.35 32.35 14.55 58.51 57.50 19.90
DCPOP-CE 33.49 29.17 15.22 57.11 50.03 20.01
DCPOP-All 32.35 29.57 14.10 56.33 51.17 18.84
Improvement 1.00 2.78 104% 2.18 6.33 256%
Table 2: 100 node problems
Figure 4: Computation Dimension Size
Figure 5: Message Dimension Size
Figure 6: Path Cost
DCPOP Improvement
Ag Mtg Vars Const IW CD MD PC
10 4 12 13.5 2.25 -0.01 -0.01 5.6%
30 14 44 57.6 3.63 0.09 0.09 10.9%
50 24 76 101.3 4.17 0.08 0.09 10.7%
100 49 156 212.9 5.04 0.16 0.20 30.0%
150 74 236 321.8 5.32 0.21 0.23 35.8%
200 99 316 434.2 5.66 0.18 0.22 29.5%
Table 3: Meeting Scheduling Problems
Another trend we notice is that the improvement is
greater for high density problems than low density problems. We
show this trend in greater detail in Figures 4, 5, and 6. Notice
how the improvement increases as the complexity of the problem
increases.
7.2 Meeting Scheduling Problem
In addition to our initial generic DCOP tests, we ran a series
of tests on the Meeting Scheduling Problem (MSP) as described
in [6]. The problem setup includes a number of people that are
grouped into departments. Each person must attend a specified
number of meetings. Meetings can be held within departments or
among departments, and can be assigned to one of eight time slots.
The MSP maps to a DCOP instance where each variable represents
the time slot that a specific person will attend a specific meeting.
All variables that belong to the same person have mutual exclusion
constraints placed so that the person cannot attend more than one
meeting during the same time slot. All variables that belong to the
same meeting have equality constraints so that all of the
participants choose the same time slot. Unary constraints are placed on
each variable to account for a person's valuation of each meeting
and time slot.
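A rough Python sketch of this mapping is given below; only the variable creation and constraint wiring described above are shown, and the meeting assignment and valuation model are placeholder inputs.

def msp_to_dcop(meetings, valuation, num_slots=8):
    # `meetings` maps a meeting id to the set of people attending it;
    # `valuation[(person, meeting, slot)]` is that person's utility.
    variables = [(p, m) for m, people in meetings.items() for p in people]
    constraints = []
    for i, (p1, m1) in enumerate(variables):
        for (p2, m2) in variables[i + 1:]:
            if p1 == p2:
                # Same person, different meetings: mutual exclusion on slots.
                constraints.append(("neq", (p1, m1), (p2, m2)))
            elif m1 == m2:
                # Same meeting, different people: equality on slots.
                constraints.append(("eq", (p1, m1), (p2, m2)))
    unary = [("unary", (p, m), [valuation[(p, m, s)] for s in range(num_slots)])
             for (p, m) in variables]
    return variables, constraints, unary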
For our tests we generated 100 sample problems for each
combination of agents and meetings. Results are shown in Table 3. The
values in the first five columns represent (in left to right order), the
total number of agents, the total number of meetings, the total
number of variables, the average total number of constraints, and the
average minimum IW produced by a traditional pseudotree. The
last three columns show the same metrics we used for the generic
DCOP instances, except this time we only show the improvements
of DCPOP-All over DPOP. Performance is better on average for
all MSP instances, but again we see larger improvements for more
complex problem instances.
8. CONCLUSIONS AND FUTURE WORK
We presented a complete, distributed algorithm that solves
general DCOP instances using cross-edged pseudotree arrangements.
Our algorithm extends the DPOP algorithm by adding additional
utility propagation messages, and introducing the concept of branch
merging during the utility propagation phase. Our algorithm also
allows value assignments to occur at higher level merge points
for lower level nodes. We have shown that DCPOP fully extends
DPOP by performing the same operations given the same input.
We have also shown through some examples and experimental data
that DCPOP can achieve greater performance for some problem
instances by extending the allowable input set to include cross-edged
pseudotrees.
We placed particular emphasis on the role that edge-traversal
heuristics play in the generation of pseudotrees. We have shown
that the performance penalty is minimal to generate multiple
heuristics, and that we can choose the best generated pseudotree
in linear space-time complexity. Given the importance of a good
pseudotree for performance, future work will include new
heuristics to find better pseudotrees. Future work will also include
adapting existing DPOP extensions [5, 7] that support different problem
domains for use with DCPOP.
9. REFERENCES
[1] J. Liu and K. P. Sycara. Exploiting problem structure for
distributed constraint optimization. In V. Lesser, editor,
Proceedings of the First International Conference on
Multi-Agent Systems, pages 246-254, San Francisco, CA,
1995. MIT Press.
[2] P. J. Modi, H. Jung, M. Tambe, W.-M. Shen, and S. Kulkarni.
A dynamic distributed constraint satisfaction approach to
resource allocation. Lecture Notes in Computer Science,
2239:685-700, 2001.
[3] P. J. Modi, W. Shen, M. Tambe, and M. Yokoo. An
asynchronous complete method for distributed constraint
optimization. In AAMAS 03, 2003.
[4] A. Petcu. Frodo: A framework for open/distributed
constraint optimization. Technical Report No. 2006/001
2006/001, Swiss Federal Institute of Technology (EPFL),
Lausanne (Switzerland), 2006. http://liawww.epfl.ch/frodo/.
[5] A. Petcu and B. Faltings. A-dpop: Approximations in
distributed optimization. In poster in CP 2005, pages
802-806, Sitges, Spain, October 2005.
[6] A. Petcu and B. Faltings. Dpop: A scalable method for
multiagent constraint optimization. In IJCAI 05, pages
266-271, Edinburgh, Scotland, Aug 2005.
[7] A. Petcu, B. Faltings, and D. Parkes. M-dpop: Faithful
distributed implementation of efficient social choice
problems. In AAMAS 06, pages 1397-1404, Hakodate,
Japan, May 2006.
[8] G. Ushakov. Solving meeting scheduling problems using
distributed pseudotree-optimization procedure. Master's
thesis, École Polytechnique Fédérale de Lausanne, 2005.
[9] M. Yokoo, E. H. Durfee, T. Ishida, and K. Kuwabara.
Distributed constraint satisfaction for formalizing distributed
problem solving. In International Conference on Distributed
Computing Systems, pages 614-621, 1992.
[10] M. Yokoo, E. H. Durfee, T. Ishida, and K. Kuwabara. The
distributed constraint satisfaction problem: Formalization
and algorithms. Knowledge and Data Engineering,
10(5):673-685, 1998.
| cross-edged pseudotree;teamwork coordination;distributed constraint optimization;job shop scheduling;agent;pseudotree arrangement;multi-agent coordination;edge-traversal heuristic;maximum sequential path cost;multi-valued utility function;distribute constraint satisfaction and optimization;multi-agent system;resource allocation;global utility |
train_I-43 | Dynamics Based Control with an Application to Area-Sweeping Problems | In this paper we introduce Dynamics Based Control (DBC), an approach to planning and control of an agent in stochastic environments. Unlike existing approaches, which seek to optimize expected rewards (e.g., in Partially Observable Markov Decision Problems (POMDPs)), DBC optimizes system behavior towards specified system dynamics. We show that a recently developed planning and control approach, Extended Markov Tracking (EMT) is an instantiation of DBC. EMT employs greedy action selection to provide an efficient control algorithm in Markovian environments. We exploit this efficiency in a set of experiments that applied multitarget EMT to a class of area-sweeping problems (searching for moving targets). We show that such problems can be naturally defined and efficiently solved using the DBC framework, and its EMT instantiation. | 1. INTRODUCTION
Planning and control constitutes a central research area in
multiagent systems and artificial intelligence. In recent years, Partially
Observable Markov Decision Processes (POMDPs) [12] have
become a popular formal basis for planning in stochastic
environments. In this framework, the planning and control problem is often
addressed by imposing a reward function, and computing a policy
(of choosing actions) that is optimal, in the sense that it will result
in the highest expected utility. While theoretically attractive, the
complexity of optimally solving a POMDP is prohibitive [8, 7].
We take an alternative view of planning in stochastic
environments. We do not use a (state-based) reward function, but instead
optimize over a different criterion, a transition-based specification
of the desired system dynamics. The idea here is to view
plan execution as a process that compels a (stochastic) system to change,
and a plan as a dynamic process that shapes that change according
to desired criteria. We call this general planning framework
Dynamics Based Control (DBC).
In DBC, the goal of a planning (or control) process becomes to
ensure that the system will change in accordance with specific
(potentially stochastic) target dynamics. As actual system behavior
may deviate from that which is specified by target dynamics (due
to the stochastic nature of the system), planning in such
environments needs to be continual [4], in a manner similar to classical
closed-loop controllers [16]. Here, optimality is measured in terms
of probability of deviation magnitudes.
In this paper, we present the structure of Dynamics Based
Control. We show that the recently developed Extended Markov
Tracking (EMT) approach [13, 14, 15] is subsumed by DBC, with EMT
employing greedy action selection, which is a specific
parameterization among the options possible within DBC. EMT is an efficient
instantiation of DBC.
To evaluate DBC, we carried out a set of experiments applying
multi-target EMT to the Tag Game [11]; this is a variant on the
area sweeping problem, where an agent is trying to tag a moving
target (quarry) whose position is not known with certainty.
Experimental data demonstrates that even with a simple model of the
environment and a simple design of target dynamics, high success
rates can be produced both in catching the quarry, and in surprising
the quarry (as expressed by the observed entropy of the controlled
agent's position).
The paper is organized as follows. In Section 2 we motivate DBC
using area-sweeping problems, and discuss related work. Section 3
introduces the Dynamics Based Control (DBC) structure, and its
specialization to Markovian environments. This is followed by a
review of the Extended Markov Tracking (EMT) approach as a
DBC-structured control regimen in Section 4. That section also
discusses the limitations of EMT-based control relative to the
general DBC framework. Experimental settings and results are then
presented in Section 5. Section 6 provides a short discussion of
the overall approach, and Section 7 gives some concluding remarks
and directions for future work.
2. MOTIVATION AND RELATED WORK
Many real-life scenarios naturally have a stochastic target
dynamics specification, especially those domains where there exists
no ultimate goal, but rather system behavior (with specific
properties) that has to be continually supported. For example, security
guards perform persistent sweeps of an area to detect any sign of
intrusion. Cunning thieves will attempt to track these sweeps, and
time their operation to key points of the guards' motion. It is thus
advisable to make the guards' motion dynamics appear irregular
and random.
Recent work by Paruchuri et al. [10] has addressed such
randomization in the context of single-agent and distributed POMDPs. The
goal in that work was to generate policies that provide a measure of
action-selection randomization, while maintaining rewards within
some acceptable levels. Our focus differs from this work in that
DBC does not optimize expected rewards-indeed we do not
consider rewards at all-but instead maintains desired dynamics
(including, but not limited to, randomization).
The Game of Tag is another example of the applicability of the
approach. It was introduced in the work by Pineau et al. [11]. There
are two agents that can move about an area, which is divided into a
grid. The grid may have blocked cells (holes) into which no agent
can move. One agent (the hunter) seeks to move into a cell
occupied by the other (the quarry), such that they are co-located (this is
a successful tag). The quarry seeks to avoid the hunter agent, and
is always aware of the hunter's position, but does not know how the
hunter will behave, which opens up the possibility for a hunter to
surprise the prey. The hunter knows the quarry's probabilistic law
of motion, but does not know its current location. Tag is an instance
of a family of area-sweeping (pursuit-evasion) problems.
In [11], the hunter modeled the problem using a POMDP. A
reward function was defined, to reflect the desire to tag the quarry,
and an action policy was computed to optimize the reward
collected over time. Due to the intractable complexity of determining
the optimal policy, the action policy computed in that paper was
essentially an approximation.
In this paper, instead of formulating a reward function, we use
EMT to solve the problem, by directly specifying the target
dynamics. In fact, any search problem with randomized motion, the
so-called class of area-sweeping problems, can be described through
specification of such target system dynamics. Dynamics Based
Control provides a natural approach to solving these problems.
3. DYNAMICS BASED CONTROL
The specification of Dynamics Based Control (DBC) can be
broken into three interacting levels: Environment Design Level, User
Level, and Agent Level.
• Environment Design Level is concerned with the formal
specification and modeling of the environment. For
example, this level would specify the laws of physics within the
system, and set its parameters, such as the gravitation
constant.
• User Level in turn relies on the environment model produced
by Environment Design to specify the target system
dynamics it wishes to observe. The User Level also specifies the
estimation or learning procedure for system dynamics, and the
measure of deviation. In the museum guard scenario above,
these would correspond to a stochastic sweep schedule, and a
measure of relative surprise between the specified and actual
sweeping.
• Agent Level in turn combines the environment model from
the Environment Design level, the dynamics estimation
procedure, the deviation measure and the target dynamics
specification from User Level, to produce a sequence of actions
that create system dynamics as close as possible to the
targeted specification.
As we are interested in the continual development of a stochastic
system, such as happens in classical control theory [16] and
continual planning [4], as well as in our example of museum sweeps,
the question becomes how the Agent Level is to treat the
deviation measurements over time. To this end, we use a probability
threshold-that is, we would like the Agent Level to maximize the
probability that the deviation measure will remain below a certain
threshold.
Specific action selection then depends on system formalization.
One possibility would be to create a mixture of available system
trends, much like that which happens in Behavior-Based Robotic
architectures [1]. The other alternative would be to rely on the
estimation procedure provided by the User Level-to utilize the
Environment Design Level model of the environment to choose actions,
so as to manipulate the dynamics estimator into believing that a
certain dynamics has been achieved. Notice that this manipulation is
not direct, but via the environment. Thus, for strong enough
estimator algorithms, successful manipulation would mean a successful
simulation of the specified target dynamics (i.e., beyond discerning
via the available sensory input).
DBC levels can also have a back-flow of information (see
Figure 1). For instance, the Agent Level could provide data about
target dynamics feasibility, allowing the User Level to modify the
requirement, perhaps focusing on attainable features of system
behavior. Data would also be available about the system response to
different actions performed; combined with a dynamics estimator
defined by the User Level, this can provide an important tool for the
environment model calibration at the Environment Design Level.
Figure 1: Data flow of the DBC framework
Extending upon the idea of Actor-Critic algorithms [5], DBC
data flow can provide a good basis for the design of a learning
algorithm. For example, the User Level can operate as an exploratory
device for a learning algorithm, inferring an ideal dynamics target
from the environment model at hand that would expose and verify
most critical features of system behavior. In this case, feasibility
and system response data from the Agent Level would provide key
information for an environment model update. In fact, the
combination of feasibility and response data can provide a basis for the
application of strong learning algorithms such as EM [2, 9].
3.1 DBC for Markovian Environments
For a Partially Observable Markovian Environment, DBC can
be specified in a more rigorous manner. Notice how DBC discards
rewards and replaces them with another optimality criterion (structural
differences are summarized in Table 1):
• Environment Design level is to specify a tuple
< S, A, T, O, Ω, s0 >, where:
- S is the set of all possible environment states;
- s0 is the initial state of the environment (which can also
be viewed as a probability distribution over S);
- A is the set of all possible actions applicable in the
environment;
- T is the environment"s probabilistic transition function:
T : S × A → Π(S). That is, T(s′|a, s) is the
probability that the environment will move from state s to state
s′ under action a;
- O is the set of all possible observations. This is what
the sensor input would look like for an outside observer;
- Ω is the observation probability function:
Ω : S × A × S → Π(O).
That is, Ω(o|s′, a, s) is the probability that one will
observe o given that the environment has moved from
state s to state s′ under action a.
• User Level, in the case of Markovian environment, operates
on the set of system dynamics described by a family of
conditional probabilities F = {τ : S × A → Π(S)}. Thus
specification of target dynamics can be expressed by q ∈ F,
and the learning or tracking algorithm can be represented as
a function L : O × (A × O)* → F; that is, it maps sequences
of observations and actions performed so far into an estimate
τ ∈ F of system dynamics.
There are many possible variations available at the User Level
to define divergence between system dynamics; several of
them are:
- Trace distance or L1 distance between two
distributions p and q, defined by
D(p(·), q(·)) = (1/2) Σ_x |p(x) − q(x)|
- Fidelity measure of distance
F(p(·), q(·)) = Σ_x √(p(x) q(x))
- Kullback-Leibler divergence
DKL(p(·) || q(·)) = Σ_x p(x) log [p(x) / q(x)]
Notice that the latter two are not actually metrics over the
space of possible distributions, but nevertheless have
meaningful and important interpretations. For instance,
Kullback-Leibler divergence is an important tool of information
theory [3] that allows one to measure the price of encoding an
information source governed by q, while assuming that it is
governed by p. (A short code sketch of these three measures is
given at the end of this subsection.)
The User Level also defines the threshold of dynamics
deviation probability θ.
• Agent Level is then faced with a problem of selecting a
control signal function a* to satisfy a minimization problem as
follows:
a* = arg min_a Pr(d(τa, q) > θ)
where d(τa, q) is a random variable describing deviation of
the dynamics estimate τa, created by L under control signal
a, from the ideal dynamics q. Implicit in this minimization
problem is that L is manipulated via the environment, based
on the environment model produced by the Environment
Design Level.
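As promised above, a small sketch of the three deviation measures over discrete distributions, with distributions represented as plain Python dictionaries; this is illustrative only, not the implementation used later.

import math

def trace_distance(p, q):
    xs = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in xs)

def fidelity(p, q):
    xs = set(p) | set(q)
    return sum(math.sqrt(p.get(x, 0.0) * q.get(x, 0.0)) for x in xs)

def kl_divergence(p, q):
    # Defined only where p(x) > 0; assumes q(x) > 0 wherever p(x) > 0.
    return sum(px * math.log(px / q[x]) for x, px in p.items() if px > 0.0)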
3.2 DBC View of the State Space
It is important to note the complementary view that DBC and
POMDPs take on the state space of the environment. POMDPs
regard state as a stationary snap-shot of the environment;
whatever attributes of state sequencing one seeks are reached through
properties of the control process, in this case reward accumulation.
This can be viewed as if the sequencing of states and the attributes
of that sequencing are only introduced by and for the controlling
mechanism-the POMDP policy.
DBC concentrates on the underlying principle of state
sequencing, the system dynamics. DBC's target dynamics specification can
use the environment"s state space as a means to describe, discern,
and preserve changes that occur within the system. As a result,
DBC has a greater ability to express state sequencing properties,
which are grounded in the environment model and its state space
definition.
For example, consider the task of moving through rough terrain
towards a goal and reaching it as fast as possible. POMDPs would
encode terrain as state space points, while speed would be ensured
by negative reward for every step taken without reaching the goal;
a higher accumulated reward can then be reached only by faster motion.
Alternatively, the state space could directly include the notion of
speed. For POMDPs, this would mean that the same concept is
encoded twice, in some sense: directly in the state space, and
indirectly within reward accumulation. Now, even if the reward
function would encode more, and finer, details of the properties of
motion, the POMDP solution will have to search in a much larger
space of policies, while still being guided by the implicit concept
of the reward accumulation procedure.
On the other hand, the tactical target expression of variations and
correlations between position and speed of motion is now grounded
in the state space representation. In this situation, any further
constraints, e.g., smoothness of motion, speed limits in different
locations, or speed reductions during sharp turns, are explicitly and
uniformly expressed by the tactical target, and can result in faster
and more effective action selection by a DBC algorithm.
4. EMT-BASED CONTROL AS A DBC
Recently, a control algorithm was introduced called EMT-based
Control [13], which instantiates the DBC framework. Although it
provides an approximate greedy solution in the DBC sense, initial
experiments using EMT-based control have been encouraging [14,
15]. EMT-based control is based on the Markovian environment
definition, as in the case with POMDPs, but its User and Agent
Levels are of the Markovian DBC type of optimality.
• User Level of EMT-based control defines a limited-case
target system dynamics independent of action:
qEMT : S → Π(S).
It then utilizes the Kullback-Leibler divergence measure to
compose a momentary system dynamics estimator-the
Extended Markov Tracking (EMT) algorithm. The algorithm
keeps a system dynamics estimate τt_EMT that is capable of
explaining recent change in an auxiliary Bayesian system
state estimator from pt−1 to pt, and updates it conservatively
using Kullback-Leibler divergence. Since τt_EMT and pt−1, pt
are respectively the conditional and marginal probabilities
over the system's state space, explanation simply means that
pt(s′) = Σ_s τt_EMT(s′|s) pt−1(s),
and the dynamics estimate update is performed by solving a
Table 1: Structure of POMDP vs. Dynamics-Based Control in Markovian Environment
Environment Design (common to both approaches): < S, A, T, O, Ω >, where S - set of states; A - set of actions; T : S × A → Π(S) - transition; O - observation set; Ω : S × A × S → Π(O)
User Level, MDP: r : S × A × S → R and F(π*) → r, where r is the reward function and F the reward remodeling
User Level, Markovian DBC: q : S × A → Π(S) and L(o1, ..., ot) → τ, where q is the ideal dynamics, L the dynamics estimator, and θ the deviation threshold
Agent Level, MDP: π* = arg max_π E[Σ_t γ^t rt]
Agent Level, Markovian DBC: π* = arg min_π Prob(d(τ || q) > θ)
minimization problem:
τt_EMT = H[pt, pt−1, τt−1_EMT]
= arg min_τ DKL(τ × pt−1 || τt−1_EMT × pt−1)
s.t.
pt(s′) = Σ_s (τ × pt−1)(s′, s)
pt−1(s) = Σ_{s′} (τ × pt−1)(s′, s)
• Agent Level in EMT-based control is suboptimal with
respect to DBC (though it remains within the DBC
framework), performing greedy action selection based on
prediction of EMT's reaction. The prediction is based on the
environment model provided by the Environment Design level,
so that if we denote by Ta the environment's transition
function limited to action a, and pt−1 is the auxiliary Bayesian
system state estimator, then the EMT-based control choice is
described by
a* = arg min_{a∈A} DKL(H[Ta × pt, pt, τt_EMT] || qEMT × pt−1)
Note that this follows the Markovian DBC framework precisely:
the rewarding optimality of POMDPs is discarded, and in its place
a dynamics estimator (EMT in this case) is manipulated via action
effects on the environment to produce an estimate close to the
specified target system dynamics. Yet as we mentioned, naive
EMT-based control is suboptimal in the DBC sense, and has several
additional limitations that do not exist in the general DBC framework
(discussed in Section 4.2).
4.1 Multi-Target EMT
At times, there may exist several behavioral preferences. For
example, in the case of museum guards, some art items are more
heavily guarded, requiring that the guards stay more often in their
close vicinity. On the other hand, no corner of the museum is to
be left unchecked, which demands constant motion. Successful
museum security would demand that the guards adhere to, and
balance, both of these behaviors. For EMT-based control, this would
mean facing several tactical targets {qk}K
k=1, and the question
becomes how to merge and balance them. A balancing mechanism
can be applied to resolve this issue.
Note that EMT-based control, while selecting an action, creates
a preference vector over the set of actions based on their predicted
performance with respect to a given target. If these preference
vectors are normalized, they can be combined into a single unified
preference. This requires replacement of standard EMT-based action
selection by the algorithm below [15]:
• Given:
- a set of target dynamics {qk}, k = 1, ..., K,
- a vector of weights w(k)
• Select an action as follows:
- For each action a ∈ A predict the future state
distribution p̄a_{t+1} = Ta ∗ pt;
- For each action, compute Da = H(p̄a_{t+1}, pt, PDt)
- For each a ∈ A and tactical target qk, denote
V(a, k) = DKL(Da || qk × pt).
Let Vk(a) = (1/Zk) V(a, k), where Zk = Σ_{a∈A} V(a, k) is
a normalization factor.
- Select a* = arg min_a Σ_{k=1..K} w(k) Vk(a)
The weights vector w = (w1, ..., wK ) allows the additional
tuning of importance among target dynamics without the need
to redesign the targets themselves. This balancing method is also
seamlessly integrated into the EMT-based control flow of
operation.
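A compact sketch of this balanced selection step is shown below. Here emt_update stands in for the constrained KL-projection H[·] and kl for the divergence over joint distributions; both, together with the per-action transition matrices T[a], are assumed helpers rather than code from our system.

import numpy as np

def select_action(actions, T, p_t, tau_emt, targets, weights, emt_update, kl):
    # T[a] is the |S|x|S| transition matrix of action a, p_t the current
    # Bayesian state estimate, tau_emt the current dynamics estimate,
    # targets a list of target dynamics q_k (each |S|x|S|), and weights
    # the balancing vector w(k).
    scores = {}
    for a in actions:
        p_next = T[a] @ p_t                      # predicted state distribution
        D_a = emt_update(p_next, p_t, tau_emt)   # predicted EMT reaction
        scores[a] = [kl(D_a * p_t, q_k * p_t) for q_k in targets]
    # Normalize each target's scores across actions, then combine by weight.
    totals = [sum(scores[a][k] for a in actions) for k in range(len(targets))]
    def combined(a):
        return sum(w * scores[a][k] / totals[k] for k, w in enumerate(weights))
    return min(actions, key=combined)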
4.2 EMT-based Control Limitations
EMT-based control is a sub-optimal (in the DBC sense)
representative of the DBC structure. It limits the User by forcing EMT to
be its dynamic tracking algorithm, and replaces Agent optimization
by greedy action selection. This kind of combination, however, is
common for on-line algorithms. Although further development of
EMT-based controllers is necessary, evidence so far suggests that
even the simplest form of the algorithm possesses a great deal of
power, and displays trends that are optimal in the DBC sense of the
word.
There are two further, EMT-specific, limitations to EMT-based
control that are evident at this point. Both already have partial
solutions and are subjects of ongoing research.
The first limitation is the problem of negative preference. In the
POMDP framework for example, this is captured simply, through
the appearance of values with different signs within the reward
structure. For EMT-based control, however, negative preference
means that one would like to avoid a certain distribution over
system development sequences; EMT-based control, however,
concentrates on getting as close as possible to a distribution. Avoidance is
thus unnatural in native EMT-based control.
The second limitation comes from the fact that standard
environment modeling can create pure sensory actions-actions that do
not change the state of the world, and differ only in the way
observations are received and the quality of observations received. Since
the world state does not change, EMT-based control would not be
able to differentiate between different sensory actions.
Notice that both of these limitations of EMT-based control are
absent from the general DBC framework, since it may have a
tracking algorithm capable of considering pure sensory actions and,
unlike Kullback-Leibler divergence, a distribution deviation measure
that is capable of dealing with negative preference.
5. EMT PLAYING TAG
The Game of Tag was first introduced in [11]. It is a single agent
problem of capturing a quarry, and belongs to the class of area
sweeping problems. An example domain is shown in Figure 2.
[Figure 2 grid: cells numbered 0 to 23, with the quarry Q in cell 9 and the agent A in cell 11 in this example.]
Figure 2: Tag domain; an agent (A) attempts to seek and
capture a quarry (Q)
The Game of Tag extremely limits the agent"s perception, so that
the agent is able to detect the quarry only if they are co-located in
the same cell of the grid world. In the classical version of the game,
co-location leads to a special observation, and the 'Tag' action can
be performed. We slightly modify this setting: the moment that
both agents occupy the same cell, the game ends. As a result, both
the agent and its quarry have the same motion capability, which
allows them to move in four directions, North, South, East, and
West. These form a formal space of actions within a Markovian
environment.
The state space of the formal Markovian environment is described
by the cross-product of the agent's and quarry's positions. For
Figure 2, it would be S = {s0, ..., s23} × {s0, ..., s23}.
The effects of an action taken by the agent are deterministic, but
the environment in general has a stochastic response due to the
motion of the quarry. With probability q0 (taken to be 0.2 in our
experiments) it stays put, and with
probability 1 − q0 it moves to an adjacent cell further away from the
agent. So for the instance shown in Figure 2 and q0 = 0.1:
P(Q = s9|Q = s9, A = s11) = 0.1
P(Q = s2|Q = s9, A = s11) = 0.3
P(Q = s8|Q = s9, A = s11) = 0.3
P(Q = s14|Q = s9, A = s11) = 0.3
Although the evasive behavior of the quarry is known to the
agent, the quarry's position is not. The only sensory information
available to the agent is its own location.
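A small sketch of this quarry motion model follows; the grid adjacency and distance helpers are assumed, as is the handling of a cornered quarry for which no adjacent cell is further from the agent.

def quarry_transition(q_cell, a_cell, neighbors, dist, q0=0.2):
    # Return P(next quarry cell | Q = q_cell, A = a_cell) as a dict.
    probs = {q_cell: q0}                   # stays put with probability q0
    away = [c for c in neighbors(q_cell)
            if dist(c, a_cell) > dist(q_cell, a_cell)]
    if away:
        for c in away:                     # otherwise moves further from the agent
            probs[c] = probs.get(c, 0.0) + (1.0 - q0) / len(away)
    else:
        probs[q_cell] += 1.0 - q0          # cornered: assumed to stay put
    return probs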
We use EMT and directly specify the target dynamics. For the
Game of Tag, we can easily formulate three major trends: catching
the quarry, staying mobile, and stalking the quarry. This results in
the following three target dynamics:
Tcatch(At+1 = si | Qt = sj, At = sa) ∝ 1 if si = sj, 0 otherwise
Tmobile(At+1 = si | Qt = so, At = sj) ∝ 0 if si = sj, 1 otherwise
Tstalk(At+1 = si | Qt = so, At = sj) ∝ 1 / dist(si, so)
Note that none of the above targets are directly achievable; for
instance, if Qt = s9 and At = s11, there is no action that can move
the agent to At+1 = s9 as required by the Tcatch target dynamics.
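Written out as normalized conditional distribution tables, the three targets look roughly as follows. The array index convention and the +1 added to the stalking denominator (to avoid division by zero when the two cells coincide) are choices made for this sketch; dist is an assumed grid-distance helper.

import numpy as np

def tag_targets(cells, dist):
    # Arrays indexed [next_agent_cell, quarry_cell, agent_cell].
    n = len(cells)
    catch = np.zeros((n, n, n))
    mobile = np.zeros((n, n, n))
    stalk = np.zeros((n, n, n))
    for j in range(n):            # quarry cell
        for a in range(n):        # current agent cell
            for i in range(n):    # next agent cell
                catch[i, j, a] = 1.0 if i == j else 0.0
                mobile[i, j, a] = 0.0 if i == a else 1.0
                stalk[i, j, a] = 1.0 / (1.0 + dist(cells[i], cells[j]))
    for t in (catch, mobile, stalk):
        t /= t.sum(axis=0, keepdims=True)   # normalize over next agent cell
    return catch, mobile, stalk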
We ran several experiments to evaluate EMT performance in the
Tag Game. Three configurations of the domain shown in Figure 3
were used, each posing a different challenge to the agent due to
partial observability. In each setting, a set of 1000 runs was performed
with a time limit of 100 steps. In every run, the initial position of
both the agent and its quarry was selected at random; this means
that as far as the agent was concerned, the quarry's initial position
was uniformly distributed over the entire domain cell space.
We also used two variations of the environment observability
function. In the first version, the observability function mapped all
joint positions of hunter and quarry into the position of the hunter as
an observation. In the second, only those joint positions in which
hunter and quarry occupied different locations were mapped into
the hunter's location. The second version in fact utilized and
expressed the fact that once hunter and quarry occupy the same cell
the game ends.
The results of these experiments are shown in Table 2.
Balancing [15] the catch, move, and stalk target dynamics described in
the previous section by the weight vector [0.8, 0.1, 0.1], EMT
produced stable performance in all three domains.
Although direct comparisons are difficult to make, the EMT
performance displayed notable efficiency vis-à-vis the POMDP
approach. In spite of a simple and inefficient Matlab implementation
of the EMT algorithm, the decision time for any given step
averaged significantly below 1 second in all experiments. For the
irregular open arena domain, which proved to be the most difficult, 1000
experiment runs bounded by 100 steps each, a total of 42411 steps,
were completed in slightly under 6 hours. That is, over 4 × 10^4
online steps took an order of magnitude less time than the offline
computation of POMDP policy in [11]. The significance of this
differential is made even more prominent by the fact that, should the
environment model parameters change, the online nature of EMT
would allow it to maintain its performance time, while the POMDP
policy would need to be recomputed, requiring yet again a large
overhead of computation.
We also tested the behavior cell frequency entropy, empirical
measures from trial data. As Figure 4 and Figure 5 show,
empirical
Figure 3: These configurations of the Tag Game space were used: a) multiple dead-end, b) irregular open arena, c) circular corridor
Table 2: Performance of the EMT-based solution in three Tag
Game domains and two observability models: I) omniposition
quarry, II) quarry is not at hunter's position
Model Domain Capture% E(Steps) Time/Step
I
Dead-ends 100 14.8 72(mSec)
Arena 80.2 42.4 500(mSec)
Circle 91.4 34.6 187(mSec)
II
Dead-ends 100 13.2 91(mSec)
Arena 96.8 28.67 396(mSec)
Circle 94.4 31.63 204(mSec)
entropy grows with the length of interaction. For runs where
the quarry was not captured immediately, the entropy reaches
between 0.85 and 0.95 for different runs and scenarios (entropy was
calculated using a log base equal to the number of possible locations
within the domain, which scales it into the range [0, 1]). As the agent
actively seeks the quarry, the entropy never reaches its maximum.
One characteristic of the entropy graph for the open arena
scenario particularly caught our attention in the case of the
omniposition quarry observation model. Near the maximum limit of trial
length (100 steps), entropy suddenly dropped. Further analysis of
the data showed that under certain circumstances, a fluctuating
behavior occurs in which the agent faces equally viable versions of
quarry-following behavior. Since the EMT algorithm has greedy
action selection, and the state space does not encode any form of
commitment (not even speed or acceleration), the agent is locked
within a small portion of cells. It is essentially attempting to
simultaneously follow several courses of action, all of which are
consistent with the target dynamics. This behavior did not occur in our
second observation model, since it significantly reduced the set of
eligible courses of action-essentially contributing to tie-breaking
among them.
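The cell-frequency entropy reported above is simple to reproduce from a run's visit log; a small sketch:

import math
from collections import Counter

def position_entropy(visited_cells, num_cells):
    # Normalized entropy of the agent's cell-visit frequencies;
    # log base num_cells scales the result into [0, 1].
    counts = Counter(visited_cells)
    total = len(visited_cells)
    return -sum((c / total) * math.log(c / total, num_cells)
                for c in counts.values())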
6. DISCUSSION
The design of the EMT solution for the Tag Game exposes the
core difference in approach to planning and control between EMT
or DBC, on the one hand, and the more familiar POMDP approach,
on the other. POMDP defines a reward structure to optimize, and
influences system dynamics indirectly through that optimization.
EMT discards any reward scheme, and instead measures and
influences system dynamics directly.
Thus for the Tag Game, we did not search for a reward function
that would encode and express our preference over the agent's
behavior, but rather directly set three (heuristic) behavior preferences
as the basis for target dynamics to be maintained. Experimental
data shows that these targets need not be directly achievable via the
agent"s actions. However, the ratio between EMT performance and
achievability of target dynamics remains to be explored.
The tag game experiment data also revealed the different
emphasis DBC and POMDPs place on the formulation of an environment
state space. POMDPs rely entirely on the mechanism of reward
accumulation maximization, i.e., formation of the action selection
procedure to achieve necessary state sequencing. DBC, on the
other hand, has two sources of sequencing specification: through
the properties of an action selection procedure, and through direct
specification within the target dynamics. The importance of the
second source was underlined by the Tag Game experiment data,
in which the greedy EMT algorithm, applied to a POMDP-type
state space specification, failed, since target description over such a
state space was incapable of encoding the necessary behavior
tendencies, e.g., tie-breaking and commitment to directed motion.
The structural differences between DBC (and EMT in
particular), and POMDPs, prohibits direct performance comparison, and
places them on complementary tracks, each within a suitable niche.
For instance, POMDPs could be seen as a much more natural
formulation of economic sequential decision-making problems, while
EMT is a better fit for continual demand for stochastic change, as
happens in many robotic or embodied-agent problems.
The complementary properties of POMDPs and EMT can be
further exploited. There is recent interest in using POMDPs in hybrid
solutions [17], in which the POMDPs can be used together with
other control approaches to provide results not easily achievable
with either approach by itself. DBC can be an effective partner in
such a hybrid solution. For instance, POMDPs have prohibitively
large off-line time requirements for policy computation, but can
be readily used in simpler settings to expose beneficial behavioral
trends; this can serve as a form of target dynamics that are provided
to EMT in a larger domain for on-line operation.
7. CONCLUSIONS AND FUTURE WORK
In this paper, we have presented a novel perspective on the
process of planning and control in stochastic environments, in the form
of the Dynamics Based Control (DBC) framework. DBC
formulates the task of planning as support of a specified target system
dynamics, which describes the necessary properties of change within
the environment. Optimality of DBC plans of action is measured
Figure 4: Observation Model I: Omniposition quarry. Entropy development with length of Tag Game for the three experiment
scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.
Figure 5: Observation Model II: quarry not observed at hunter's position. Entropy development with length of Tag Game for the
three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.
with respect to the deviation of actual system dynamics from the
target dynamics.
We show that a recently developed technique of Extended Markov
Tracking (EMT) [13] is an instantiation of DBC. In fact, EMT can
be seen as a specific case of DBC parameterization, which employs
a greedy action selection procedure.
Since EMT exhibits the key features of the general DBC
framework, as well as polynomial time complexity, we used the
multi-target version of EMT [15] to demonstrate that the class of
area-sweeping problems naturally lends itself to dynamics-based
descriptions, as instantiated by our experiments in the Tag Game
domain.
As enumerated in Section 4.2, EMT has a number of
limitations, such as difficulty in dealing with negative dynamic
preference. This prevents direct application of EMT to such problems
as Rendezvous-Evasion Games (e.g., [6]). However, DBC in
general has no such limitations, and readily enables the formulation
of evasion games. In future work, we intend to proceed with the
development of dynamics-based controllers for these problems.
8. ACKNOWLEDGMENT
The work of the first two authors was partially supported by
Israel Science Foundation grant #898/05, and the third author was
partially supported by a grant from Israel's Ministry of Science and
Technology.
9. REFERENCES
[1] R. C. Arkin. Behavior-Based Robotics. MIT Press, 1998.
[2] J. A. Bilmes. A gentle tutorial of the EM algorithm and its
application to parameter estimation for Gaussian mixture and
Hidden Markov Models. Technical Report TR-97-021,
Department of Electrical Engineering and Computer
Science, University of California at Berkeley, 1998.
[3] T. M. Cover and J. A. Thomas. Elements of information
theory. Wiley, 1991.
[4] M. E. desJardins, E. H. Durfee, C. L. Ortiz, and M. J.
Wolverton. A survey of research in distributed, continual
planning. AI Magazine, 4:13-22, 1999.
[5] V. R. Konda and J. N. Tsitsiklis. Actor-Critic algorithms.
SIAM Journal on Control and Optimization,
42(4):1143-1166, 2003.
[6] W. S. Lim. A rendezvous-evasion game on discrete locations
with joint randomization. Advances in Applied Probability,
29(4):1004-1017, December 1997.
[7] M. L. Littman, T. L. Dean, and L. P. Kaelbling. On the
complexity of solving Markov decision problems. In
Proceedings of the 11th Annual Conference on Uncertainty
in Artificial Intelligence (UAI-95), pages 394-402, 1995.
[8] O. Madani, S. Hanks, and A. Condon. On the undecidability
of probabilistic planning and related stochastic optimization
problems. Artificial Intelligence Journal, 147(1-2):5-34,
July 2003.
[9] R. M. Neal and G. E. Hinton. A view of the EM algorithm
that justifies incremental, sparse, and other variants. In M. I.
Jordan, editor, Learning in Graphical Models, pages
355-368. Kluwer Academic Publishers, 1998.
[10] P. Paruchuri, M. Tambe, F. Ordonez, and S. Kraus. Security
in multiagent systems by policy randomization. In
Proceeding of AAMAS 2006, 2006.
[11] J. Pineau, G. Gordon, and S. Thrun. Point-based value
iteration: An anytime algorithm for POMDPs. In International
Joint Conference on Artificial Intelligence (IJCAI), pages
1025-1032, August 2003.
[12] M. L. Puterman. Markov Decision Processes. Wiley Series in
Probability and Mathematical Statistics: Applied Probability
and Statistics Section. Wiley-Interscience Publication, New
York, 1994.
[13] Z. Rabinovich and J. S. Rosenschein. Extended Markov
Tracking with an application to control. In The Workshop on
Agent Tracking: Modeling Other Agents from Observations,
at the Third International Joint Conference on Autonomous
Agents and Multiagent Systems, pages 95-100, New-York,
July 2004.
[14] Z. Rabinovich and J. S. Rosenschein. Multiagent
coordination by Extended Markov Tracking. In The Fourth
International Joint Conference on Autonomous Agents and
Multiagent Systems, pages 431-438, Utrecht, The
Netherlands, July 2005.
[15] Z. Rabinovich and J. S. Rosenschein. On the response of
EMT-based control to interacting targets and models. In The
Fifth International Joint Conference on Autonomous Agents
and Multiagent Systems, pages 465-470, Hakodate, Japan,
May 2006.
[16] R. F. Stengel. Optimal Control and Estimation. Dover
Publications, 1994.
[17] M. Tambe, E. Bowring, H. Jung, G. Kaminka,
R. Maheswaran, J. Marecki, J. Modi, R. Nair, J. Pearce,
P. Paruchuri, D. Pynadath, P. Scerri, N. Schurr, and
P. Varakantham. Conflicts in teamwork: Hybrids to the
| dynamics base control;target dynamics;partially observable markov decision problem;reward function;tag game;stochastic environment;area-sweeping problem;environment design level;user level;extended markov tracking;dynamics based control;control;game of tag;multi-agent system;action-selection randomization;agent level;system dynamics;robotic |
train_I-45 | Implementing Commitment-Based Interactions∗ | Although agent interaction plays a vital role in MAS, and messagecentric approaches to agent interaction have their drawbacks, present agent-oriented programming languages do not provide support for implementing agent interaction that is flexible and robust. Instead, messages are provided as a primitive building block. In this paper we consider one approach for modelling agent interactions: the commitment machines framework. This framework supports modelling interactions at a higher level (using social commitments), resulting in more flexible interactions. We investigate how commitmentbased interactions can be implemented in conventional agent-oriented programming languages. The contributions of this paper are: a mapping from a commitment machine to a collection of BDI-style plans; extensions to the semantics of BDI programming languages; and an examination of two issues that arise when distributing commitment machines (turn management and race conditions) and solutions to these problems. | 1. INTRODUCTION
Agents are social, and agent interaction plays a vital role in
multiagent systems. Consequently, design and implementation of agent
interaction is an important research topic.
The standard approach for designing agent interactions is
message-centric: interactions are defined by interaction protocols that give
the permissible sequences of messages, specified using notations
such as finite state machines, Petri nets, or Agent UML.
It has been argued that this message-centric approach to
interaction design is not a good match for intelligent agents. Intelligent
agents should exhibit the ability to persist in achieving their goals
in the face of failure (robustness) by trying different approaches
(flexibility). On the other hand, when following an interaction
protocol, an agent has limited flexibility and robustness: the ability to
persistently try alternative means of achieving the interaction's aim
is limited to those options that the protocol's designer provided, and
in practice, message-centric design processes do not tend to lead to
protocols that are flexible or robust.
Recognising these limitations of the traditional approach to
designing agent interactions, a number of approaches have been
proposed in recent years that move away from message-centric
interaction protocols, and instead consider designing agent interactions
using higher-level concepts such as social commitments [8, 10,
18] or interaction goals [2]. There has also been work on richer
forms of interaction in specific settings, such as teams of
cooperative agents [5, 11].
However, although there has been work on designing flexible and
robust agent interactions, there has been virtually no work on
providing programming language support for implementing such
interactions. Current Agent Oriented Programming Languages
(AOPLs) do not provide support for implementing flexible and robust
agent interactions using higher-level concepts than messages.
Indeed, modern AOPLs [1], with virtually no exceptions, provide
only simple message sending as the basis for implementing agent
interaction.
This paper presents what, to the best of our knowledge, is the
second AOPL to support high-level, flexible, and robust agent
interaction implementation. The first such language, STAPLE, was
proposed a few years ago [9], but is not described in detail, and is
arguably impractical for use by non-specialists, due to its logical
basis and heavy reliance on temporal and modal logic.
This paper presents a scheme for extending BDI-like AOPLs
to support direct implementation of agent interactions that are
designed using Yolum & Singh's commitment machine (CM)
framework [19]. In the remainder of this paper we briefly review
commitment machines and present a simple abstraction of BDI AOPLs
which lies in the common subset of languages such as Jason, 3APL,
and CAN. We then present a scheme for translating commitment
machines to this language, and indicate how the language needs
to be extended to support this. We then extend our scheme to
address a range of issues concerned with distribution, including turn
tracking [7], and race conditions.
2. BACKGROUND
2.1 Commitment Machines
The aim of the commitment machine framework is to allow for
the definition of interactions that are more flexible than traditional
message-centric approaches. A Commitment Machine (CM) [19]
specifies an interaction between entities (e.g. agents, services,
processes) in terms of actions that change the interaction state. This
interact state consists of fluents (predicates that change value over
time), but also social commitments, both base-level and conditional.
A base-level social commitment is an undertaking by debtor A to
creditor B to bring about condition p, denoted C(A, B, p). This is
sometimes abbreviated to C(p), where it is not important to specify
the identities of the entities in question. For example, a
commitment by customer C to merchant M to make the fluent paid true
would be written as C(C, M, paid).
A conditional social commitment is an undertaking by debtor A
to creditor B that should condition q become true, A will then
commit to bringing about condition p. This is denoted by CC(A, B, q, p),
and, where the identity of the entities involved is unimportant (or
obvious), is abbreviated to CC(q → p), where the arrow is a
reminder of the causal link between q becoming true and the creation
of a commitment to make p true. For example, a commitment to
make the fluent paid true once goods have been received would be
written CC(goods → paid).
The semantics of commitments (both base-level and conditional)
is defined with rules that specify how commitments change over
time. For example, the commitment C(p) (or CC(q → p)) is
discharged when p becomes true; and the commitment CC(q → p) is
replaced by C(p) when q becomes true. In this paper we use the
more symmetric semantics proposed by [15] and subsequently
reformalised by [14]. In brief, these semantics deal with a number of
more complex cases, such as where commitments are created when
conditions already hold: if p holds when CC(p → q) is meant to
be created, then C(q) is created instead of CC(p → q).
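A minimal sketch of the discharge and detach rules over an interaction state is given below. The state representation is our own, and the fuller symmetric semantics of [14, 15] (for example, creation when the condition already holds) is not covered.

from collections import namedtuple

Commitment = namedtuple("Commitment", "debtor creditor condition")
CondCommitment = namedtuple("CondCommitment", "debtor creditor trigger condition")

def update_commitments(fluents, base, conditional):
    # fluents: set of fluents currently true; base / conditional: sets of
    # commitments. Apply the two rules until a fixed point is reached.
    changed = True
    while changed:
        changed = False
        for c in list(base):                 # discharge C(p) once p holds
            if c.condition in fluents:
                base.discard(c)
                changed = True
        for cc in list(conditional):         # CC(q -> p) becomes C(p) once q holds
            if cc.trigger in fluents:
                conditional.discard(cc)
                base.add(Commitment(cc.debtor, cc.creditor, cc.condition))
                changed = True
    return base, conditional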
An interaction is defined by specifying the entities involved, the
possible contents of the interaction state (both fluents and
commitments), and (most importantly) the actions that each entity can
perform along with the preconditions and effects of each action,
specified as add and delete lists.
A commitment machine (CM) defines a range of possible interactions that each start in some state (footnote 1) and perform actions until reaching a final state. A final state is one that has no base-level commitments. One way of visualising the interactions that are possible with a given commitment machine is to generate the finite state machine corresponding to the CM. For example, figure 1 gives the FSM (footnote 2) corresponding to the NetBill [18] commitment machine: a simple CM where a customer (C) and merchant (M) attempt to trade using the following actions (footnote 3):
Footnote 1: Unlike standard interaction protocols, or finite state machines, there is no designated initial state for the interaction.
Footnote 2: The finite state machine is software-generated: the nodes and connections were computed by an implementation of the axioms (available from http://www.winikoff.net/CM) and were then laid out by graphviz (http://www.graphviz.org/).
Footnote 3: We use the notation A(X) : P ⇒ E to indicate that action A is performed by entity X, has precondition P (with : P omitted if empty) and effect E.
• sendRequest(C) ⇒ request
• sendQuote(M) ⇒ offer
where offer ≡ promiseGoods ∧ promiseReceipt and
promiseGoods ≡ CC(M, C, accept, goods) and
promiseReceipt ≡ CC(M, C, pay, receipt)
• sendAccept(C) ⇒ accept
where accept ≡ CC(C, M, goods, pay)
• sendGoods(M) ⇒ promiseReceipt ∧ goods
where promiseReceipt ≡ CC(M, C, pay, receipt)
• sendEPO(C) : goods ⇒ pay
• sendReceipt(M) : pay ⇒ receipt.
The commitment accept is the customer's promise to pay once goods have been sent, promiseGoods is the merchant's promise to send the goods once the customer accepts, and promiseReceipt is the merchant's promise to send a receipt once payment has been made.
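Since each action is given by a performer, a precondition and an effect (add) list, the NetBill specification can be written down directly as data. The fragment below is our own illustrative encoding (the names NETBILL_ACTIONS, enabled and perform are assumptions, and commitments are shown as plain strings for readability).

    # Hypothetical encoding of the NetBill actions: performer, precondition, effects.
    NETBILL_ACTIONS = {
        "sendRequest": ("C", [],        ["request"]),
        "sendQuote":   ("M", [],        ["promiseGoods", "promiseReceipt"]),   # i.e. offer
        "sendAccept":  ("C", [],        ["accept"]),
        "sendGoods":   ("M", [],        ["promiseReceipt", "goods"]),
        "sendEPO":     ("C", ["goods"], ["pay"]),
        "sendReceipt": ("M", ["pay"],   ["receipt"]),
    }

    def enabled(action, state):
        """An action is enabled when its precondition holds in the current state."""
        _performer, precondition, _effects = NETBILL_ACTIONS[action]
        return all(f in state for f in precondition)

    def perform(action, state):
        """Apply the action's effects (its add list) to the interaction state."""
        _performer, _pre, effects = NETBILL_ACTIONS[action]
        return state | set(effects)

    state = perform("sendGoods", {"request"})
    print(enabled("sendEPO", state))   # True: goods has been added, so payment is now possible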
As seen in figure 1, commitment machines can support a range
of interaction sequences.
2.2 An Abstract Agent ProgrammingLanguage
Agent programming languages in the BDI tradition (e.g. dMARS,
JAM, PRS, UM-PRS, JACK, AgentSpeak(L), Jason, 3APL, CAN,
Jadex) define agent behaviour in terms of event-triggered plans,
where each plan specifies what it is triggered by, under what
situations it can be considered to be applicable (defined using a so-called
context condition), and a plan body: a sequence of steps that can
include posting events which in turn triggers further plans. Given
a collection of plans and an event e that has been posted the agent
first collects all plans types that are triggered by that event (the
relevant plans), then evaluates the context conditions of these plans to
obtain a set of applicable plan instances. One of these is chosen
and is executed.
We now briefly define the formal syntax and semantics of a
Simple Abstract (BDI) Agent Programming Language (SAAPL). This
language is intended to be an abstraction that is in the common
subset of such languages as Jason [1, Chapter 1], 3APL [1,
Chapter 2], and CAN [16]. Thus, it is intentionally incomplete in some
areas, for instance it doesn"t commit to a particular mechanism for
dealing with plan failure, since different mechanisms are used by
different AOPLs.
An agent program (denoted by Π) consists of a collection of plan clauses of the form e : C ← P where e is an event, C is a context condition (a logical formula over the agent's beliefs), and P is the plan body. The plan body is built up from the following constructs. We have the empty step ε, which always succeeds and does nothing, operations to add (+b) and delete (−b) beliefs, sending a message m to agent N (↑N m), and posting an event (e) (footnote 4). These can be sequenced (P; P).
C ::= b | C ∧ C | C ∨ C | ¬C | ∃x.C
P ::= ε | +b | −b | e | ↑N m | P; P
Formal semantics for this language is given in figure 2. This semantics is based on the semantics for AgentSpeak given by [12], which in turn is based on the semantics for CAN [16]. The semantics is in the style of Plotkin's Structural Operational Semantics, and assumes that operations exist that check whether a condition follows from a belief set, that add a belief to a belief set, and that delete a belief from a belief set. In the case of beliefs being a set of ground atoms these operations are respectively consequence checking (B |= C), set addition (B ∪ {b}), and set deletion (B \ {b}). More sophisticated belief management methods may be used, but are not considered here.
Footnote 4: We use ↓N m as shorthand for the event corresponding to receiving message m from agent N.
[Figure 1: Finite State Machine for NetBill (shaded = final states)]
We define a basic configuration S = ⟨Q, N, B, P⟩ where Q is a (global) message queue (modelled as a sequence, where messages are added at one end and removed from the other end; footnote 5), N is the name of the agent, B is the beliefs of the agent, and P is the plan body being executed (i.e. the intention). We also define an agent configuration, where instead of a single plan body P there is a set of plan instances, Γ. Finally, a complete MAS is a pair ⟨Q, As⟩ of a global message queue Q and a set of agent configurations (without the queue Q). The global message queue is a sequence of triplets of the form sender:recipient:message.
Footnote 5: The + operator is used to denote sequence concatenation.
A transition S0 −→ S1 specifies that executing S0 a single step
yields S1. We annotate the arrow with an indication of whether
the configuration in question is basic, an agent configuration, or a
MAS configuration. The transition relation is defined using rules that are either unconditional, of the form S −→ S′, or conditional, written with the premise above a horizontal line (the numerator) and the conclusion below it (the denominator): the premise must hold for the concluding transition to apply.
Note that there is non-determinism in SAAPL, e.g. the choice
of plan to execute from a set of applicable plans. This is resolved
by using selection functions: SO selects one of the applicable plan
instances to handle a given event, SI selects which of the plan
instances that can be executed should be executed next, and SA
selects which agent should execute (a step) next.
3. IMPLEMENTING COMMITMENT-BASED
INTERACTIONS
In this section we present a mapping from a commitment
machine to a collection of SAAPL programs (one for each role). We
begin by considering the simple case of two interacting agents, and assume that the agents take turns to act. In section 4 we relax these assumptions.
Each action A(X) : P ⇒ E is mapped to a number of plans: there is a plan (for agent X) with context condition P that performs the action (i.e. applies the effects E to the agent's beliefs) and sends a message to the other agent, and a plan (for the other agent) that updates its state when a message is received from X. For example, given the action sendAccept(C) ⇒ accept we have the following plans, where each plan is preceded by M: or C: to indicate which agent the plan belongs to. Note that where the identity of the sender (respectively recipient) is obvious, i.e. the other agent, we abbreviate ↑N m to ↑m (resp. ↓N m to ↓m). Turn taking is captured through the event ı (short for interact): the agent that is active has an ı event that is being handled. Handling the event involves sending a message to the other agent, and then doing nothing until a response is received.
C: ı : true ← +accept; ↑sendAccept.
M: ↓sendAccept : true ← +accept; ı.
If the action has a non-trivial precondition then there are two plans in the recipient: one to perform the action (if possible), and another to report an error if the action's precondition doesn't hold (we return to this in section 4). For example, the action sendReceipt(M) : pay ⇒ receipt generates the following plans:
M: ı : pay ← +receipt; ↑sendReceipt.
C: ↓sendReceipt : pay ← +receipt; ı.
C: ↓sendReceipt : ¬pay ← . . . report error . . . .
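The mapping from an action to this pair (or triple) of plans is entirely mechanical, which suggests it could be automated. The sketch below is our own illustration of such a generator (the helper generate_plans and its string output format are assumptions); it merely renders SAAPL-style plan clauses as text.

    # Sketch of the action-to-plans translation described above (illustrative only).
    # An action A(X) : P => E yields a plan for the performer X and one or two
    # plans for the recipient, rendered here as SAAPL-style strings.
    def generate_plans(action, performer, other, precondition, effects):
        adds = "; ".join("+" + e for e in effects)
        pre = " ∧ ".join(precondition) if precondition else "true"
        plans = [
            # performer: act when the precondition holds, then notify the other agent
            f"{performer}: ı : {pre} ← {adds}; ↑{action}.",
            # recipient: update the local state on receipt of the message
            f"{other}: ↓{action} : {pre} ← {adds}; ı.",
        ]
        if precondition:
            # recipient: report an error if the precondition does not hold locally
            neg = "¬(" + " ∧ ".join(precondition) + ")"
            plans.append(f"{other}: ↓{action} : {neg} ← ...report error... .")
        return plans

    for p in generate_plans("sendReceipt", "M", "C", ["pay"], ["receipt"]):
        print(p)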
In addition to these plans, we also need plans to start and finish
the interaction. An interaction can be completed whenever there
are no base-level commitments, so both agents have the following
plans:
ı : ¬∃p.C(p) ← ↑done.
↓done : ¬∃p.C(p) ← .
↓done : ∃p.C(p) ← . . . report error . . . .
An interaction is started by setting up an agent's initial beliefs, and then having it begin to interact. Exactly how to do this depends on the agent platform: e.g. the agent platform in question may offer a simple way to load beliefs from a file. A generic approach that is a little cumbersome, but portable, is to send each of the agents involved in the interaction a sequence of init messages, each containing a belief to be added, and then send one of the agents a start message which begins the interaction. Both agents thus have the following two plans:
↓init(B) : true ← +B.
↓start : true ← ı.
⟨Q, N, B, +b⟩ −→Basic ⟨Q, N, B ∪ {b}, ε⟩

⟨Q, N, B, −b⟩ −→Basic ⟨Q, N, B \ {b}, ε⟩

Δ = {Piθ | (ti : ci ← Pi) ∈ Π ∧ tiθ = e ∧ B |= ciθ}
---------------------------------------------------
⟨Q, N, B, e⟩ −→Basic ⟨Q, N, B, SO(Δ)⟩

⟨Q, N, B, P1⟩ −→Basic ⟨Q′, N, B′, P′⟩
---------------------------------------------------
⟨Q, N, B, P1; P2⟩ −→Basic ⟨Q′, N, B′, P′; P2⟩

⟨Q, N, B, ε; P⟩ −→Basic ⟨Q, N, B, P⟩

⟨Q, N, B, ↑NB m⟩ −→Basic ⟨Q + N:NB:m, N, B, ε⟩

Q = NA:N:m + Q′
---------------------------------------------------
⟨Q, N, B, Γ⟩ −→Agent ⟨Q′, N, B, Γ ∪ {↓NA m}⟩

P = SI(Γ)    ⟨Q, N, B, P⟩ −→Basic ⟨Q′, N, B′, P′⟩
---------------------------------------------------
⟨Q, N, B, Γ⟩ −→Agent ⟨Q′, N, B′, (Γ \ {P}) ∪ {P′}⟩

P = SI(Γ)    P = ε
---------------------------------------------------
⟨Q, N, B, Γ⟩ −→Agent ⟨Q, N, B, Γ \ {P}⟩

⟨N, B, Γ⟩ = SA(As)    ⟨Q, N, B, Γ⟩ −→Agent ⟨Q′, N, B′, Γ′⟩
---------------------------------------------------
⟨Q, As⟩ −→MAS ⟨Q′, (As ∪ {⟨N, B′, Γ′⟩}) \ {⟨N, B, Γ⟩}⟩

Figure 2: Operational Semantics for SAAPL (the arrow annotations Basic, Agent and MAS indicate the kind of configuration; in conditional rules the premise appears above the horizontal line and the conclusion below it)
Figure 3 gives the SAAPL programs for both merchant and
customer that implement the NetBill protocol. For conciseness the
error reporting plans are omitted.
We now turn to refining the context conditions. There are three
refinements that we consider. Firstly, we need to prevent
performing actions that have no effect on the interaction state. Secondly,
an agent may want to specify that certain actions that it is able to
perform should not be performed unless additional conditions hold.
For example, the customer may not want to agree to the merchant's offer unless the goods have a certain price or property. Thirdly, the context conditions of the plans that terminate the interaction need to be refined in order to avoid terminating the interaction prematurely.
For each plan of the form ı : P ← +E; ↑m we replace the context condition P with the enhanced condition P ∧ P′ ∧ ¬E, where P′ is any additional condition that the agent wishes to impose, and ¬E is the negation of the effects of the action. For example, the customer's payment plan becomes (assuming no additional conditions, i.e. no P′): ı : goods ∧ ¬pay ← +pay; ↑sendEPO.
For each plan of the form ↓m : P ← +E; ı we could add ¬E to
the precondition, but this is redundant, since it is already checked
by the performer of the action, and if the action has no effect then
Customer's plans:
ı : true ← +request; ↑sendRequest.
ı : true ← +accept; ↑sendAccept.
ı : goods ← +pay; ↑sendEPO.
↓sendQuote : true ← +promiseGoods;
+promiseReceipt; ı.
↓sendGoods : true ← +promiseReceipt; +goods; ı.
↓sendReceipt : pay ← +receipt; ı.
Merchant's plans:
ı : true ← +promiseGoods;
+promiseReceipt; ↑sendQuote.
ı : true ← +promiseReceipt; +goods; ↑sendGoods.
ı : pay ← +receipt; ↑sendReceipt.
↓sendRequest : true ← +request; ı.
↓sendAccept : true ← +accept; ı.
↓sendEPO : goods ← +pay; ı.
Shared plans (i.e. plans of both agents):
ı : ¬∃p.C(p) ← ↑done.
↓done : ¬∃p.C(p) ← .
↓init(B) : true ← +B.
↓start : true ← ı.
Where
accept ≡ CC(goods → pay)
promiseGoods ≡ CC(accept → goods)
promiseReceipt ≡ CC(pay → receipt)
offer ≡ promiseGoods ∧ promiseReceipt
Figure 3: SAAPL Implementation of NetBill
the sender will not perform it and send the message (see also the discussion in section 4).
When specifying additional conditions (P′), some care needs to be taken to avoid situations where progress cannot be made because the only action(s) possible are prevented by additional conditions. One way of indicating preference between actions (in many agent platforms) is to reorder the agent's plans. This is clearly safe, since actions are not prevented, just considered in a different order.
The third refinement of context conditions concerns the plans
that terminate the interaction. In the Commitment Machine
framework any state that has no base-level commitment is final, in that
the interaction may end there (or it may continue). However, only
some of these final states are desirable final states. Which final
states are considered to be desirable depends on the domain and
the desired interaction outcome. In the NetBill example, the
desirable final state is one where the goods have been sent and paid
for, and a receipt issued (i.e. goods ∧ pay ∧ receipt). In order to
prevent an agent from terminating the interaction too early we add
this as a precondition to the termination plan:
ı : goods ∧ pay ∧ receipt ∧ ¬∃p.C(p) ← ↑done.
Figure 4 shows the plans that are changed from figure 3.
In order to support the realisation of CMs, we need to change
SAAPL in a number of ways. These changes, which are discussed
below, can be applied to existing BDI languages to make them
commitment machine supportive. We present the three changes,
explain what they involve, and for each change explain how the
change was implemented using the 3APL agent oriented
programming language. The three changes are:
1. extending the beliefs of the agent so that they can contain
commitments;
Customer's plans:
ı : ¬request ← +request; ↑sendRequest.
ı : ¬accept ← +accept; ↑sendAccept.
ı : goods ∧ ¬pay ← +pay; ↑sendEPO.
Merchant's plans:
ı : ¬offer ← +promiseGoods; +promiseReceipt;
↑sendQuote.
ı : ¬(promiseReceipt ∧ goods) ←
+promiseReceipt; +goods; ↑sendGoods.
ı : pay ∧ ¬receipt ← +receipt; ↑sendReceipt.
Where
accept ≡ CC(goods → pay)
promiseGoods ≡ CC(accept → goods)
promiseReceipt ≡ CC(pay → receipt)
offer ≡ promiseGoods ∧ promiseReceipt
Figure 4: SAAPL Implementation of NetBill with refined
context conditions (changed plans only)
2. changing the definition of |= to encompass implied
commitments; and
3. whenever a belief is added, updating existing commitments,
according to the rules of commitment dynamics.
Extending the notion of beliefs to encompass commitments in
fact requires no change in agent platforms that are prolog-like and
support terms as beliefs (e.g. Jason, 3APL, CAN). However, other
agent platforms do require an extension. For example, JACK, which
is an extension of Java, would require changes to support
commitments that can be nested. In the case of 3APL no change is needed
to support this.
Whenever a context condition contains commitments, determining whether the context condition is implied by the agent's beliefs (B |= C) needs to take into account the notion of implied
commitments [15]. In brief, a commitment can be considered to follow
from a belief set B if the commitment is in the belief set (C ∈ B),
but also under other conditions. For example, a commitment to pay
C(pay) can be considered to be implied by a belief set containing
pay because the commitment may have held and been discharged
when pay was made true. Similar rules apply for conditional
commitments. These rules, which were introduced in [15] were
subsequently re-formalised in a simpler form by [14] resulting in the
four inference rules in the bottom part of figure 5.
The change that needs to be made to SAAPL to support
commitment machine implementations is to extend the definition of |= to
include these four rules. For 3APL this was realised by having each
agent include the following Prolog clauses:
holds(X) :- clause(X,true).
holds(c(P)) :- holds(P).
holds(c(P)) :- clause(cc(Q,P),true), holds(Q).
holds(cc(_,Q)) :- holds(Q).
holds(cc(_,Q)) :- holds(c(Q)).
The first clause simply says that anything holds if it is in the agent's beliefs (clause(X,true) is true if X is a fact). The
remaining four clauses correspond respectively to the inference rules C1,
C2, CC1 and CC2. To use these rules we then modify context
conditions in our program so that instead of writing, for
example, cc(m,c, pay, receipt) we write holds(cc(m,c,
pay, receipt)).
B′ = norm(B ∪ {b})
---------------------------------------------------
⟨Q, N, B, +b⟩ −→ ⟨Q, N, B′, ε⟩

function norm(B)
  B′ ← B
  for each b ∈ B do
    if b = C(p) ∧ B |= p then B′ ← B′ \ {b}
    elseif b = CC(p → q) then
      if B |= q then B′ ← B′ \ {b}
      elseif B |= p then B′ ← (B′ \ {b}) ∪ {C(q)}
      elseif B |= C(q) then B′ ← B′ \ {b}
      endif
    endif
  endfor
  return B′
end function

C1:  from B |= P infer B |= C(P)
C2:  from CC(Q → P) ∈ B and B |= Q infer B |= C(P)
CC1: from B |= Q infer B |= CC(P → Q)
CC2: from B |= C(Q) infer B |= CC(P → Q)

Figure 5: New Operational Semantics
The final change is to update commitments when a belief is
added. Formally, this is done by modifying the semantic rule for
belief addition so that it applies an algorithm to update
commitments. The modified rule and algorithm (which mirrors the
definition of norm in [14]) can be found in the top part of figure 5.
For 3APL this final change was achieved by manually inserting
update() after updating beliefs, and defining the following rules
for update():
update() <- c(P) AND holds(P)
| {Deletec(P) ; update()},
update() <- cc(P,Q) AND holds(Q)
| {Deletecc(P,Q) ; update()},
update() <- cc(P,Q) AND holds(P)
| {Deletecc(P,Q) ; Addc(Q) ; update()},
update() <- cc(P,Q) AND holds(c(Q))
| {Deletecc(P,Q) ; update()},
update() <- true | Skip
where Deletec and Deletecc delete respectively a base-level
and conditional commitment, and Addc adds a base-level
commitment.
One aspect that does not require a change is linking commitments and actions. This is because commitments do not trigger actions directly: they may trigger actions indirectly, but in general their effect is to prevent completion of an interaction while there are outstanding (base-level) commitments.
Figure 6 shows the message sequences from a number of runs of a 3APL implementation of the NetBill commitment machine (footnote 6). In order to illustrate the different possible interactions the code was modified so that each agent selected randomly from the actions that it could perform, and a number of runs were made with the customer as the initiator, and then with the merchant as the initiator. There are other possible sequences of messages, not shown, including the obvious one: request, quote, accept, goods, payment, receipt, and then done.
Footnote 6: Source code is available from http://www.winikoff.net/CM
[Figure 6: Sample runs from 3APL implementation (alternating turns)]
One minor difference between the 3APL implementation and
SAAPL concerns the semantics of messages. In the semantics of
SAAPL (and of most AOPLs), receiving a message is treated as an
event. However, in 3APL, receiving a message is modelled as the
addition to the agent"s beliefs of a fact indicating that the message
was received [6]. Thus in the 3APL implementation we have PG
rules that are triggered by these beliefs, rather than by any event.
One issue with this approach is that the belief remains there, so we
need to ensure that the belief in question is either deleted once
handled, or that we modify preconditions of plans to avoid handling it
more than once. In our implementation we delete these received
beliefs when they are handled, to avoid duplicate handling of
messages.
4. BEYOND TWO PARTICIPANTS
Generalising to more than two interaction participants requires
revisiting how turn management is done, since it is no longer
possible to assume alternating turns [7].
In fact, perhaps surprisingly, even in the two participant setting,
an alternating turn setup is an unreasonable assumption! For
example, consider the path (in figure 1) from state 1 to 15 (sendGoods)
then to state 12 (sendAccept). The result, in an alternating turn
setup, is a dead-end: there is only a single possible action in state
12, namely sendEPO, but this action is done by the customer, and it is the merchant's turn to act! Figure 7 shows the FSM for NetBill with alternating initiative.
A solution to this problem that works in this example, but does not generalise (footnote 7), is to weaken the alternating turn-taking regime by allowing an agent to act twice in a row if its second action is driven by a commitment.
A general solution is to track whose turn it is to act. This can be
done by working out which agents have actions that are able to be
performed in the current state. If there is only a single active agent, then it is clearly that agent's turn to act. However, if more than one agent is active then somehow the agents need to work out who should act next. Working this out by negotiation is not a particularly good solution, for two reasons. Firstly, this negotiation has to be done at every step of the interaction where more than one agent is active (in NetBill, this applies to seven out of sixteen states), so it is highly desirable to have a lightweight mechanism for doing this. Secondly, it is not clear how the negotiation can avoid an infinite regress (you go first; no, you go first; ...) without imposing some arbitrary rule. It is also possible to resolve
who should act by imposing an arbitrary rule, for example, that the
customer always acts in preference to the merchant, or that each
agent has a numerical priority (perhaps determined by the order in
which they joined the interaction?) that determines who acts.
An alternative solution, which exploits the symmetrical properties of commitment machines, is not to try to manage turn taking at all. Instead of tracking and controlling whose turn it is, we simply allow the agents to act freely, and rely on the properties of the interaction space to ensure that things work out, a notion that we shall make precise, and prove, in the remainder of this section.
Footnote 7: Consider actions A1(C) ⇒ p, A2(C) ⇒ q, and A3(M) : p ∧ q ⇒ r.
[Figure 7: NetBill with alternating initiative]
The issue with having multiple agents be active simultaneously
is that instead of all agents agreeing on the current interaction state,
agents can be in different states. This can be visualised as each
agent having its own copy of the FSM that it navigates through
where it is possible for agents to follow different paths through the
FSM. The two specific issues that need to be addressed are:
1. Can agents end up in different final states?
2. Can an agent be in a position where an error occurs because
it cannot perform an action corresponding to a received
message?
We will show that, because actions commute under certain
assumptions, agents cannot end up in different final states, and
furthermore, that errors cannot occur (again, under certain
assumptions).
By actions commute we mean that the state resulting from
performing a sequence of actions A1 . . . An is the same, regardless of
the order in which the actions are performed. This means that even
if agents take different paths through the FSM, they still end up in
the same resulting state, because once all messages have been
processed, all agents will have performed the same set of actions. This
addresses the issue of ending up in different final states. We return
to the possibility of errors occurring shortly.
Definition 1 (Monotonicity) An action is monotonic if it does not delete (footnote 8) any fluents or commitments. A Commitment Machine is monotonic if all of its actions are monotonic. (Adapted from [14, Definition 6])
Footnote 8: That is, directly deletes; it is fine to discharge commitments by adding fluents/commitments.
Theorem 1 If A1 and A2 are monotonic actions, then performing
A1 followed by A2 has the same effect on the agent's beliefs as performing A2 followed by A1. (Adapted from [14, Theorem 2].)
This assumes that both actions can be performed. However, it is
possible for the performance of A1 to disable A2 from being done.
For example, if A1 has the effect +p, and A2 has precondition
¬p, then although both actions may be enabled in the initial state,
they cannot be performed in either order. We can prevent this by ensuring that actions' preconditions do not contain negation (or implication), since a monotonic action cannot make a negation-free precondition false. Note that this restriction only applies to the original action precondition, P, not to any additional preconditions imposed by the agent (P′). This is because only P is used to determine whether another agent is able to perform the action.
Thus monotonic CMs with preconditions that do not contain
negations have actions that commute. However, in fact, the
restriction to monotonic CMs is unnecessarily strong: all that is needed
is that whenever there is a choice of agent that can act, then the
possible actions are monotonic. If there is only a single agent that
can act, then no restriction is needed on the actions: they may or
may not be monotonic.
Definition 2 (Locally Monotonic) A commitment machine is
locally monotonic if for any state S either (a) only a single agent
has actions that can be performed; or (b) all actions that can be
performed in S are monotonic.
Theorem 2 In a locally monotonic CM, once all messages have
been processed, all agents will be in the same state. Furthermore,
no errors can occur.
Proof: Once all messages have been processed we have that all
agents will have performed the same action set, perhaps in a
different order. The essence of the proof is to argue that as long as agents have not yet converged to the same state, all actions must be monotonic, and hence that these actions commute, and cannot
disable any other actions.
Consider the first point of divergence, where an agent performs action A and at the same time another agent (call it XB) performs action B. Clearly, this state has actions of more than one agent enabled, so, since the CM is locally monotonic, the relevant actions must be monotonic. Therefore, after doing A, the action B must still be enabled, and so the message to do B can be processed by updating the recipient agent's beliefs with the effects of B. Furthermore, because monotonic actions commute, the result of doing A before B is the same as doing B before A:

S  --A-->  SA
|          |
B          B
v          v
SB --A-->  SAB
However, what happens if the next action after A is not B, but C? Because B is enabled, and C is not done by agent XB (see below), we must have that C is also monotonic, and hence (a) the result of doing A and B and C is the same regardless of the order in which the three actions are done; and (b) C does not disable B, so B can still be done after C.

S  --A-->  SA  --C-->  SAC
|          |           |
B          B           B
v          v           v
SB --A-->  SAB --C-->  SABC
The reason why C cannot be done by XB is that messages are processed in the order of their arrival (footnote 9). From the perspective of XB the action B was done before C, and therefore from any other agent's perspective the message saying that B was done must be received (and processed) before a message saying that C is done.
This argument can be extended to show that once agents start
taking different paths through the FSM all actions taken until the
point where they converge on a single state must be monotonic,
and hence it is always possible to converge (because actions are not disabled), so the interaction is error free; and the resulting state
once convergence occurs is the same (because monotonic actions
commute).
This theorem gives a strong theoretical guarantee that not
doing turn management will not lead to disaster. This is analogous
to proving that disabling all traffic lights would not lead to any
accidents, and is only possible because the refined CM axioms are
symmetrical.
Based on this theorem the generic transformation from CM to code should allow agents to act freely, which is achieved by simply changing ı : P ∧ P′ ∧ ¬E ← +E; ↑A to ı : P ∧ P′ ∧ ¬E ← +E; ↑A; ı. For example, instead of ı : ¬request ← +request; ↑sendRequest we have ı : ¬request ← +request; ↑sendRequest; ı.
One consequence of the theorem is that it is not necessary to
ensure that agents process messages before continuing to
interact. However, in order to avoid unnecessary parallelism, which can
make debugging harder, it may still be desirable to process
messages before performing actions.
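Local monotonicity itself is straightforward to check mechanically from the action specifications. The sketch below is our own illustration for a single interaction state (the dictionary fields performer, pre, add and delete are assumptions, reusing the hypothetical action encoding introduced earlier); checking a whole CM would simply repeat this test over all reachable states of the generated FSM.

    # Sketch of a check for Definition 2 (locally monotonic), for one interaction state.
    def is_monotonic(action):
        return not action["delete"]          # monotonic = the action deletes nothing

    def enabled_in(state, actions):
        return [a for a in actions if all(f in state for f in a["pre"])]

    def locally_monotonic_at(state, actions):
        """True if, in this state, either only one agent can act or all enabled actions are monotonic."""
        enabled = enabled_in(state, actions)
        active_agents = {a["performer"] for a in enabled}
        return len(active_agents) <= 1 or all(is_monotonic(a) for a in enabled)

    acts = [
        {"performer": "C", "pre": [], "add": ["request"], "delete": []},
        {"performer": "M", "pre": [], "add": ["offer"],   "delete": []},
    ]
    print(locally_monotonic_at(set(), acts))   # True: both enabled actions are monotonic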
Figure 8 shows a number of runs from the 3APL implementation
that has been modified to allow free, non-alternating, interaction.
5. DISCUSSION
We have presented a scheme for mapping commitment machines
to BDI platforms (using SAAPL as an exemplar), identified three
changes that needed to be made to SAAPL to support CM-based
interaction, and shown that turn management can be avoided in
CM-based interaction, provided the CM is locally monotonic. The three
changes to SAAPL, and the translation scheme from commitment
machine to BDI plans are both applicable to any BDI language.
As we have mentioned in section 1, there has been some work
on designing flexible and robust agent interaction, but virtually no
work on implementing flexible and robust interactions.
We have already discussed STAPLE [9, 10]. Another piece of
work that is relevant is the work by Cheong and Winikoff on their
Hermes methodology [2]. Although the main focus of their work is
a pragmatic design methodology, they also provide guidelines for
implementing Hermes designs using BDI platforms (specifically
Jadex) [3]. However, since Hermes does not yield a design that is
formal, it is only possible to generate skeleton code that then needs
to be completed. Also, they do not address the turn taking issue:
how to decide which agent acts when more than one agent is able
to act.
Footnote 9: We also assume that the communication medium does not deliver messages out of order, which is the case for (e.g.) TCP.
[Figure 8: Sample runs from 3APL implementation (non-alternating turns)]
The work of Kremer and Flores (e.g. [8]) also uses
commitments, and deals with implementation. However, they provide
infrastructure support (CASA) rather than a programming language,
and do not appear to provide assistance to a programmer seeking to
implement agents.
Although we have implemented the NetBill interaction using
3APL, the changes to the semantics were done by modifying our
NetBill 3APL program, rather than by modifying the 3APL
implementation itself. Clearly, it would be desirable to modify the
semantics of 3APL (or of another language) directly, by changing
the implementation. Also, although we have not done so, it should
be clear that the translation from a CM to its implementation could
easily be automated.
Another area for further work is to look at how the assumptions
required to ensure that actions commute can be relaxed.
Finally, there is a need to perform empirical evaluation. There
has already been some work on comparing Hermes with a
conventional message-centric approach to designing interaction, and
this has shown that using Hermes results in designs that are
significantly more flexible and robust [4]. It would be interesting to
compare commitment machines with Hermes, but, since
commitment machines are a framework, not a design methodology, we
need to compare Hermes with a methodology for designing
interactions that results in commitment machines [13, 17].
6. REFERENCES
[1] R. H. Bordini, M. Dastani, J. Dix, and A. E. F. Seghrouchni,
editors. Multi-Agent Programming: Languages, Platforms
and Applications. Springer, 2005.
[2] C. Cheong and M. Winikoff. Hermes: Designing
goal-oriented agent interactions. In Proceedings of the 6th
International Workshop on Agent-Oriented Software
Engineering (AOSE-2005), July 2005.
[3] C. Cheong and M. Winikoff. Hermes: Implementing
goal-oriented agent interactions. In Proceedings of the Third
international Workshop on Programming Multi-Agent
Systems (ProMAS), July 2005.
[4] C. Cheong and M. Winikoff. Hermes versus prometheus: A
comparative evaluation of two agent interaction design
approaches. Submitted for publication, 2007.
[5] P. R. Cohen and H. J. Levesque. Teamwork. Nous,
25(4):487-512, 1991.
[6] M. Dastani, J. van der Ham, and F. Dignum. Communication
for goal directed agents. In Proceedings of the Agent
Communication Languages and Conversation Policies
Workshop, 2002.
[7] F. P. Dignum and G. A. Vreeswijk. Towards a testbed for
multi-party dialogues. In Advances in Agent Communication,
pages 212-230. Springer, LNCS 2922, 2004.
[8] R. Kremer and R. Flores. Using a performative subsumption
lattice to support commitment-based conversations. In
F. Dignum, V. Dignum, S. Koenig, S. Kraus, M. P. Singh, and
M. Wooldridge, editors, Autonomous Agents and Multi-Agent
Systems (AAMAS), pages 114-121. ACM Press, 2005.
[9] S. Kumar and P. R. Cohen. STAPLE: An agent programming
language based on the joint intention theory. In Proceedings
of the Third International Joint Conference on Autonomous
Agents & Multi-Agent Systems (AAMAS 2004), pages
1390-1391. ACM Press, July 2004.
[10] S. Kumar, M. J. Huber, and P. R. Cohen. Representing and
executing protocols as joint actions. In Proceedings of the
First International Joint Conference on Autonomous Agents
and Multi-Agent Systems, pages 543 - 550, Bologna, Italy,
15 - 19 July 2002. ACM Press.
[11] M. Tambe and W. Zhang. Towards flexible teamwork in
persistent teams: Extended report. Journal of Autonomous
Agents and Multi-agent Systems, 2000. Special issue on
Best of ICMAS 98.
[12] M. Winikoff. An AgentSpeak meta-interpreter and its
applications. In Third International Workshop on
Programming Multi-Agent Systems (ProMAS), pages
123-138. Springer, LNCS 3862 (post-proceedings, 2006),
2005.
[13] M. Winikoff. Designing commitment-based agent
interactions. In Proceedings of the 2006 IEEE/WIC/ACM
International Conference on Intelligent Agent Technology
(IAT-06), 2006.
[14] M. Winikoff. Implementing flexible and robust agent
interactions using distributed commitment machines.
Multiagent and Grid Systems, 2(4), 2006.
[15] M. Winikoff, W. Liu, and J. Harland. Enhancing
commitment machines. In J. Leite, A. Omicini, P. Torroni,
and P. Yolum, editors, Declarative Agent Languages and
Technologies II, number 3476 in Lecture Notes in Artificial
Intelligence (LNAI), pages 198-220. Springer, 2004.
[16] M. Winikoff, L. Padgham, J. Harland, and J. Thangarajah.
Declarative & procedural goals in intelligent agent systems.
In Proceedings of the Eighth International Conference on
Principles of Knowledge Representation and Reasoning
(KR2002), Toulouse, France, 2002.
[17] P. Yolum. Towards design tools for protocol development. In
F. Dignum, V. Dignum, S. Koenig, S. Kraus, M. P. Singh, and
M. Wooldridge, editors, Autonomous Agents and Multi-Agent
Systems (AAMAS), pages 99-105. ACM Press, 2005.
[18] P. Yolum and M. P. Singh. Flexible protocol specification and
execution: Applying event calculus planning using
commitments. In Proceedings of the 1st Joint Conference on
Autonomous Agents and MultiAgent Systems (AAMAS),
pages 527-534, 2002.
[19] P. Yolum and M. P. Singh. Reasoning about commitments in
the event calculus: An approach for specifying and executing
protocols. Annals of Mathematics and Artificial Intelligence
(AMAI), 2004.
880 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) | social commitment;commitment machine;interaction goal;race condition;agent interaction;agent-oriented programming language;bdi-style plan;bdi;commitment-based interaction;messagecentric approach;agent orient program language;herme design;belief desire intention;netbill interaction;turn tracking;commitment machine framework;belief management method |
train_I-46 | Modular Interpreted Systems | We propose a new class of representations that can be used for modeling (and model checking) temporal, strategic and epistemic properties of agents and their teams. Our representations borrow the main ideas from interpreted systems of Halpern, Fagin et al.; however, they are also modular and compact in the way concurrent programs are. We also mention preliminary results on model checking alternating-time temporal logic for this natural class of models. | 1. INTRODUCTION
The logical foundations of multi-agent systems have received
much attention in recent years. Logic has been used to represent
and reason about, e.g., knowledge [7], time [6], cooperation and
strategic ability [3]. Lately, an increasing amount of research has
focused on higher level representation languages for models of such
logics, motivated mainly by the need for compact representations,
and for representations that correspond more closely to the actual
systems which are modeled. Multi-agent systems are open systems,
in the sense that agents interact with an environment only partially
known in advance. Thus, we need representations of models of
multi-agent systems which are modular, in the sense that a
component, such as an agent, can be replaced, removed, or added, without
major changes to the representation of the whole model. However,
as we argue in this paper, few existing representation languages are
both modular, compact and computationally grounded on the one
hand, and allow for representing properties of both knowledge and
strategic ability, on the other.
In this paper we present a new class of representations for
models of open multi-agent systems, which are modular, compact and
come with an implicit methodology for modeling and designing
actual systems.
The structure of the paper is as follows. First, in Section 2, we
present the background of our work - that is, logics that combine
time, knowledge, and strategies. More precisely: modal logics that
combine branching time, knowledge, and strategies under
incomplete information. We start with computation tree logic CTL, then
we add knowledge (CTLK), and then we discuss two variants of
alternating-time temporal logic (ATL): one for the perfect, and one
for the imperfect information case. The semantics of logics like the
ones presented in Section 2 are usually defined over explicit models
(Kripke structures) that enumerate all possible (global) states of the
system. However, enumerating these states is one of the things one
mostly wants to avoid, because there are too many of them even
for simple systems. Thus, we usually need representations that are
more compact. Another reason for using a more specialized class of
models is that general Kripke structures do not always give enough
help in terms of methodology, both at the stage of design, nor at
implementation. This calls for a semantics which is more grounded, in
the sense that the correspondence between elements of the model,
and the entities that are modeled, is more immediate. In Section 3,
we present an overview of representations that have been used for
modeling and model checking systems in which time, action (and
possibly knowledge) are important; we mention especially
representations used for theoretical analysis. We point out that the
compact and/or grounded representations of temporal models do not
play their role in a satisfactory way when agents" strategies are
considered. Finally, in Section 4, we present our framework of
modular interpreted systems (MIS), and show where it fits in the
picture. We conclude with a somewhat surprising hypothesis, that
model checking ability under imperfect information for MIS can be
computationally cheaper than model checking perfect information.
Until now, almost all complexity results were distinctly in favor of
perfect information strategies (and the others were indifferent).
2. LOGICS OF TIME, KNOWLEDGE, AND
STRATEGIC ABILITY
First, we present the logics CTL, CTLK, ATL and ATLir that are
the starting point of our study.
2.1 Branching Time: CTL
Computation tree logic CTL [6] includes operators for temporal properties of systems: i.e., the path quantifier E (there is a path), together with the temporal operators ◯ (in the next state), □ (always from now on) and U (until) (footnote 1). Every occurrence of a temporal operator is immediately preceded by exactly one path quantifier (this variant of the language is sometimes called vanilla CTL). Let Π be a set of atomic propositions with a typical element p. CTL formulae ϕ are defined as follows:
ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | E◯ϕ | E□ϕ | Eϕ U ϕ.
The semantics of CTL is based on Kripke models M = ⟨St, R, π⟩, which include a nonempty set of states St, a state transition relation R ⊆ St × St, and a valuation of propositions π : Π → P(St). A path λ in M refers to a possible behavior (or computation) of system M, and can be represented as an infinite sequence of states q0q1q2... such that qiRqi+1 for every i = 0, 1, 2, .... We denote the ith state in λ by λ[i]. A q-path is a path that starts in q. Interpretation of a formula in a state q of model M is defined as follows:
M, q |= p iff q ∈ π(p);
M, q |= ¬ϕ iff M, q ⊭ ϕ;
M, q |= ϕ ∧ ψ iff M, q |= ϕ and M, q |= ψ;
M, q |= E◯ϕ iff there is a q-path λ such that M, λ[1] |= ϕ;
M, q |= E□ϕ iff there is a q-path λ such that M, λ[i] |= ϕ for every i ≥ 0;
M, q |= Eϕ U ψ iff there is a q-path λ and i ≥ 0 such that M, λ[i] |= ψ and M, λ[j] |= ϕ for every 0 ≤ j < i.
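These clauses translate directly into fixpoint computations over an explicit model. The fragment below is a minimal, unoptimised Python sketch of our own (the successor dictionary R and the sat_p/sat_q sets are illustrative inputs); it computes the state sets satisfying E◯p, E□p and E(p U q).

    # Minimal sketch of explicit-state CTL model checking (illustrative, unoptimised).
    # R maps each state to the list of its successors.
    def ex(R, sat_p):
        return {q for q in R if any(s in sat_p for s in R[q])}

    def eg(R, sat_p):
        Z = set(sat_p)
        while True:                                   # greatest fixpoint for E-always
            Z2 = {q for q in Z if any(s in Z for s in R[q])}
            if Z2 == Z:
                return Z
            Z = Z2

    def eu(R, sat_p, sat_q):
        Z = set(sat_q)
        while True:                                   # least fixpoint for E-until
            Z2 = Z | {q for q in sat_p if any(s in Z for s in R[q])}
            if Z2 == Z:
                return Z
            Z = Z2

    R = {0: [1], 1: [2], 2: [2]}                      # a three-state model with a loop at state 2
    print(eu(R, sat_p={0, 1}, sat_q={2}))             # {0, 1, 2}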
2.2 Adding Knowledge: CTLK
CTLK [19] is a straightforward combination of CTL and standard
epistemic logic [10, 7]. Let Agt = {1, ..., k} be a set of agents with
a typical element a. Epistemic logic uses operators for representing
agents" knowledge: Kaϕ is read as agent a knows that ϕ. Models
of CTLK extend models of CTL with epistemic indistinguishability
relations ∼a⊆ St × St (one per agent). We assume that all ∼a are
equivalences. The semantics of epistemic operators is defined as
follows:
M, q |= Kaϕ iff M, q′ |= ϕ for every q′ such that q ∼a q′.
Note that, when talking about agents' knowledge, we implicitly assume that agents may have imperfect information about the
actual current state of the world (otherwise the notion of
knowledge would be trivial). This does not have influence on the way
we model evolution of a system as a single unit, but it will become
important when particular agents and their strategies come to the
fore.
2.3 Agents and Their Strategies: ATL
Alternating-time temporal logic ATL [3] is a logic for reasoning about temporal and strategic properties of open computational systems (multi-agent systems in particular). The language of ATL consists of the following formulae:
ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | ⟨⟨A⟩⟩◯ϕ | ⟨⟨A⟩⟩□ϕ | ⟨⟨A⟩⟩ϕ U ϕ.
where A ⊆ Agt. Informally, ⟨⟨A⟩⟩ϕ says that agents A have a collective strategy to enforce ϕ. It should be noted that the CTL path quantifiers A, E can be expressed with ⟨⟨∅⟩⟩ and ⟨⟨Agt⟩⟩ respectively.
The semantics of ATL is defined in so-called concurrent game structures (CGSs). A CGS is a tuple
M = ⟨Agt, St, Act, d, o, Π, π⟩,
consisting of: a set Agt = {1, . . . , k} of agents; a set St of states; a valuation of propositions π : Π → P(St); and a set Act of atomic actions. Function d : Agt × St → P(Act) indicates the actions available to agent a ∈ Agt in state q ∈ St. Finally, o is a deterministic transition function which maps a state q ∈ St and an action profile ⟨α1, . . . , αk⟩ ∈ Act^k, with αi ∈ d(i, q), to another state q′ = o(q, α1, . . . , αk).
Footnote 1: Additional operators A (for every path) and ◊ (sometime in the future) are defined in the usual way.
DEFINITION 1. A (memoryless) strategy of agent a is a function sa : St → Act such that sa(q) ∈ d(a, q) (footnote 2). A collective strategy SA for a team A ⊆ Agt specifies an individual strategy for each agent a ∈ A. Finally, the outcome of strategy SA in state q is defined as the set of all computations that may result from executing SA from q on:
out(q, SA) = {λ = q0q1q2... | q0 = q and for every i = 1, 2, ... there exist actions α1, ..., αk such that αa = SA(a)(qi−1) for each a ∈ A, αa ∈ d(a, qi−1) for each a ∉ A, and o(qi−1, α1, ..., αk) = qi}.
Footnote 2: This is a deviation from the original semantics of ATL [3], where strategies assign agents' choices to sequences of states, which suggests that agents can by definition recall the whole history of each game. While the choice of one or the other notion of strategy affects the semantics of the full ATL*, and of most ATL extensions (e.g. for games with imperfect information), it should be pointed out that both types of strategies yield equivalent semantics for pure ATL (cf. [21]).
The semantics of the cooperation modalities is as follows:
M, q |= ⟨⟨A⟩⟩◯ϕ iff there is a collective strategy SA such that, for every λ ∈ out(q, SA), we have M, λ[1] |= ϕ;
M, q |= ⟨⟨A⟩⟩□ϕ iff there exists SA such that, for every λ ∈ out(q, SA), we have M, λ[i] |= ϕ for every i ≥ 0;
M, q |= ⟨⟨A⟩⟩ϕ U ψ iff there exists SA such that, for every λ ∈ out(q, SA), there is an i ≥ 0 for which M, λ[i] |= ψ, and M, λ[j] |= ϕ for every 0 ≤ j < i.
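The clause for ⟨⟨A⟩⟩◯ϕ, for instance, can be evaluated by quantifying over A's joint actions and over all responses of the remaining agents. The following is a naive sketch of our own over an explicit CGS (the dictionaries d and o and the list encoding of agents and coalitions are illustrative assumptions, not a standard API).

    from itertools import product

    # Naive check of q |= <<A>> next-phi on an explicit CGS (illustrative only).
    # d[(agent, state)] lists available actions; o[(state, joint_action)] is the successor.
    def can_enforce_next(q, A, agents, d, o, sat_phi):
        others = [b for b in agents if b not in A]
        for choice_A in product(*(d[(a, q)] for a in A)):
            good = True
            for choice_B in product(*(d[(b, q)] for b in others)):
                act = dict(zip(A, choice_A)); act.update(zip(others, choice_B))
                joint = tuple(act[a] for a in agents)
                if o[(q, joint)] not in sat_phi:
                    good = False
                    break
            if good:
                return True          # this joint choice of A works against all responses
        return False

    agents = [1, 2]
    d = {(1, "q0"): ["wait", "push"], (2, "q0"): ["wait", "push"]}
    o = {("q0", ("wait", "wait")): "q0", ("q0", ("wait", "push")): "q1",
         ("q0", ("push", "wait")): "q1", ("q0", ("push", "push")): "q1"}
    print(can_enforce_next("q0", [1], agents, d, o, {"q1"}))   # True: agent 1 can enforce q1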
2.4 Agents with Imperfect Information: ATLir
As ATL does not include incomplete information in its scope, it
can be seen as a logic for reasoning about agents who always have
complete knowledge about the current state of the whole system.
ATLir [21] includes the same formulae as ATL, except that the cooperation modalities are presented with a subscript: ⟨⟨A⟩⟩ir indicates that they address agents with imperfect information and imperfect recall. Formally, the recursive definition of ATLir formulae is:
ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | ⟨⟨A⟩⟩ir◯ϕ | ⟨⟨A⟩⟩ir□ϕ | ⟨⟨A⟩⟩irϕ U ϕ
Models of ATLir, concurrent epistemic game structures (CEGS), can be defined as tuples M = ⟨Agt, St, Act, d, o, ∼1, ..., ∼k, Π, π⟩, where ⟨Agt, St, Act, d, o, Π, π⟩ is a CGS, and ∼1, ..., ∼k are epistemic (equivalence) relations. It is required that agents have the same choices in indistinguishable states: q ∼a q′ implies d(a, q) = d(a, q′). ATLir restricts the strategies that can be used by agents to uniform strategies, i.e. functions sa : St → Act such that: (1) sa(q) ∈ d(a, q), and (2) if q ∼a q′ then sa(q) = sa(q′). A collective strategy is uniform if it contains only uniform individual strategies. Again, the function out(q, SA) returns the set of all paths that may result from agents A executing the collective strategy SA from state q.
The semantics of ATLir formulae can be defined as follows:
M, q |= ⟨⟨A⟩⟩ir◯ϕ iff there is a uniform collective strategy SA such that, for every a ∈ A, every q′ such that q ∼a q′, and every λ ∈ out(SA, q′), we have M, λ[1] |= ϕ;
M, q |= ⟨⟨A⟩⟩ir□ϕ iff there exists SA such that, for every a ∈ A, every q′ such that q ∼a q′, and every λ ∈ out(SA, q′), we have M, λ[i] |= ϕ for every i ≥ 0;
M, q |= ⟨⟨A⟩⟩irϕ U ψ iff there exists SA such that, for every a ∈ A, every q′ such that q ∼a q′, and every λ ∈ out(SA, q′), there is an i ≥ 0 for which M, λ[i] |= ψ, and M, λ[j] |= ϕ for every 0 ≤ j < i.
That is, ⟨⟨A⟩⟩irϕ holds iff A have a uniform collective strategy such that, for every path that can possibly result from execution of the strategy according to at least one agent from A, ϕ is the case.
3. MODELS AND MODEL CHECKING
In this section, we present and discuss various (existing)
representations of systems that can be used for modeling and model
checking. We believe that the two most important points of
reference are in this case: (1) the modeling formalism (i.e., the logic
and the semantics we use), and (2) the phenomenon, or more
generally, the domain we are going to model (which we will often refer to as the real world). Our aim is a representation which is
reasonably close to the real world (i.e., it is sufficiently compact and
grounded), and still not too far away from the formalism (so that
it e.g. easily allows for theoretical analysis of computational
problems). We begin by discussing the merits of explicit models; in our case, these are transition systems, concurrent game structures and CEGSs, presented in the previous section.
3.1 Explicit Models
Obviously, an advantage of explicit models is that they are very
close to the semantics of our logics (simply because they are the
semantics). On the other hand, they are in many ways difficult to
use to describe an actual system:
• Exponential size: temporal models usually have an
exponential number of states with respect to any higher-level
description (e.g. Boolean variables, n-ary attributes etc.). Also, their
size is exponential in the number of processes (or agents)
if the evolution of a system results from joint (synchronous
or asynchronous) actions of several active entities [15]. For
CGSs the situation is even worse: here, also the number of
transitions is exponential, even if we fix the number of states.3
In practice, this means that such representations are very
seldom scalable.
• Explicit models include no modularity. States in a model
refer to global states of the system; transitions in the model
correspond to global transitions as well, i.e., they represent
(in an atomic way) everything that may happen in one single
step, regardless of who has done it, to whom, and in what
way.
• Logics like ATL are often advertised as frameworks for
modeling and reasoning about open computational systems.
Ideally, one would like the elements of such a system to have
as little interdependencies as possible, so that they can be
plugged in and out without much hassle, for instance when we want to test various designs or implementations of the active component. In the case of a multi-agent system the need is perhaps even more obvious. We do not only need to re-plug various designs of a single agent in the overall architecture; we usually also need to change (e.g., increase) the number of agents acting in a given environment without necessarily changing the design of the whole system. Unfortunately, ATL models are anything but open in this sense.
Footnote 3: Another class of ATL models, alternating transition systems (ATS) [2], represent transitions in a more succinct way. While we still have exponentially many states in an ATS, the number of transitions is simply quadratic with respect to the number of states (like for CTL models). Unfortunately, ATS are even less modular and harder to design than concurrent game structures, and they cannot easily be extended to handle incomplete information (cf. [9]).
Theoretical complexity results for explicit models are as follows. Model checking CTL and CTLK is P-complete, and can be done in time O(ml), where m is the number of transitions in the model, and l is the length of the formula [5]. Alternatively, it can be done in time O(n^2 l), where n is the number of states. Model checking ATL is P-complete wrt. m, l and Δ^P_3-complete wrt. n, k, l (k being the number of agents) [3, 12, 16]. Model checking ATLir is Δ^P_2-complete wrt. m, l and Δ^P_3-complete wrt. n, k, l [21, 13].
3.2 Compressed Representations
Explicit representation of all states and transitions is inefficient
in many ways. An alternative is to represent the state/transition
space in a symbolic way [17, 18].
Such models offer some hope for feasible model checking of properties of open/multi-agent systems, although it is well known that they are compact only in a fraction of all cases (footnote 4). For us, however,
they are insufficient for another reason: they are merely optimized
representations of explicit models. Thus, they are neither more
open nor better grounded: they were meant to optimize
implementation rather than facilitate design or modeling methodology.
3.3 Interpreted Systems
Interpreted systems [11, 7] are held by many as a prime example
of computationally grounded models of distributed systems. An interpreted system can be defined as a tuple IS = ⟨St1, ..., Stk, Stenv, R, π⟩. St1, ..., Stk are the local state spaces of agents 1, ..., k, and Stenv is the set of states of the environment. The set of global states is defined as St = St1 × ... × Stk × Stenv; R ⊆ St × St is a transition relation, and π : Π → P(St). While the transition relation encapsulates the (possible) evolution of the system over time, the epistemic dimension is defined by the local components of each global state:
⟨q1, ..., qk, qenv⟩ ∼i ⟨q1′, ..., qk′, qenv′⟩ iff qi = qi′.
It is easy to see that such a representation is modular and
compact as far as we are concerned with states. Moreover, it gives a
natural (grounded) approach to knowledge, and suggests an
intuitive methodology for modeling epistemic states. Unfortunately,
the way transitions are represented in interpreted systems is neither
compact, nor modular, nor grounded: the temporal aspect of the
system is given by a joint transition function, exactly like in
explicit models. This is not without a reason: if we separate activities
of the agents too much, we cannot model interaction in the
framework any more, and interaction is the most interesting thing here.
But the bottom line is that the temporal dimension of an interpreted
system has exponential representation. And it is almost as difficult
to plug components in and out of an interpreted system, as for an
ordinary CTL or ATL model, since the local activity of an agent is
completely merged with his interaction with the rest of the system.
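Because a global state of an interpreted system is just a tuple of local states, the epistemic relations fall out of the representation almost for free. A minimal sketch of our own (function names and the way local state spaces are passed in are illustrative assumptions):

    from itertools import product

    # Minimal sketch of global states and epistemic indistinguishability in an
    # interpreted system (illustrative; local state spaces are plain Python sets).
    def global_states(local_state_spaces, env_states):
        """St = St_1 x ... x St_k x St_env, built as tuples of local states."""
        return set(product(*local_state_spaces, env_states))

    def indistinguishable(i, g1, g2):
        """g1 ~_i g2 iff agent i's local component is the same (environment excluded)."""
        return g1[i] == g2[i]

    St = global_states([{"s0", "s1"}, {"t0"}], {"e0", "e1"})
    print(indistinguishable(0, ("s0", "t0", "e0"), ("s0", "t0", "e1")))   # True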
3.4 Concurrent Programs
The idea of concurrent programs has been long known in the
literature on distributed systems. Here, we use the formulation
from [15]. A concurrent program P is composed of k concurrent processes, each described by a labeled transition system Pi = ⟨Sti, Acti, Ri, Πi, πi⟩, where Sti is the set of local states of process i, Acti is the set of local actions, Ri ⊆ Sti × Acti × Sti is a transition relation, and Πi, πi are the set of local propositions and their valuation. The behavior of program P is given by the product automaton of P1, ..., Pk under the assumption that processes work asynchronously, actions are interleaved, and synchronization is obtained through common action names.
Footnote 4: A representation R of an explicit model M is compact if the size of R is logarithmic with respect to the size of M.
Concurrent programs have several advantages. First of all, they
are modular and compact. They allow for local modeling of
components - much more so than interpreted systems (not only states,
but also actions are local here). Moreover, they allow for
representing explicit interaction between local transitions of reactive
processes, like willful communication, and synchronization. On the
other hand, they do not allow for representing implicit,
incidental, or not entirely benevolent interaction between processes. For
example, if we want to represent the act of pushing somebody, the
pushed object must explicitly execute an action of being pushed,
which seems somewhat ridiculous. Side effects of actions are also
not easy to model. Still, this is a minor complaint in the context
of CTL, because for temporal logics we are only interested in the
flow of transitions, and not in the underlying actions. For temporal
reasoning about k asynchronous processes with no implicit
interaction, concurrent programs seem just about perfect.
The situation is different when we talk about autonomous,
proactive components (like agents), acting together (cooperatively or
adversely) in a common environment - and we want to address
their strategies and abilities. Now, particular actions are no less
important than the resulting transitions. Actions may influence other
agents" local states without their consent, they may have side
effects on other agents" states etc. Passing messages and/or calling
procedures is by no means the only way of interaction between
agents. Moreover, the availability of actions (to an agent) should
not depend on the actions that will be executed by other agents at
the same time; it is the outcome states that may depend on these actions. Finally, we would often like to assume that agents act
synchronously. In particular, all agents play simultaneously in
concurrent game structures. But, assuming synchrony and autonomy
of actions, synchronization can no longer be a means of
coordination.
To sum up, we need a representation which is very much like
concurrent programs, but allows for modeling agents that play
synchronously, and which enables modeling more sophisticated
interaction between agents" actions. The first postulate is easy to satisfy,
as we show in the following section. The second will be addressed
in Section 4.
We note that model checking CTL against concurrent programs
is PSPACE-complete in the number of local states and the length
of the formula [15].
3.5 Synchronous CP and Simple Reactive
Modules
The semantics of ATL is based on synchronous models where
availability of actions does not depend on the actions currently
executed by the other players. A slightly different variant of
concurrent programs can be defined via the synchronous product of programs, so that all agents play simultaneously (footnote 5). Unfortunately, under such an interpretation, no direct interaction between agents' actions can be modeled at all.
DEFINITION 2. A synchronous concurrent program consists of
k concurrent processes Pi = Sti, Acti, Ri, Πi, πi with the
follow5
The concept is not new, of course, and has already existed in folk
knowledge, although we failed to find an explicit definition in the
literature.
ing unfolding to a CGS: Agt = {1, ..., k}, St =
Qk
i=1 Sti, Act =
Sk
i=1 Acti, d(i, q1, ..., qk ) = {αi | qi, αi, qi ∈ Ri for some qi ∈
Sti}, o( q1, ..., qk , α1, ..., αk) = q1, ..., qk such that qi, αi, qi ∈
Ri for every i; Π =
Sk
i=1 Πi, and π(p) = πi(p) for p ∈ Πi.
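As a concrete illustration (not taken from the paper), the following minimal Python sketch builds the synchronous product of two toy processes; the encoding of a process as a (states, actions, transitions) triple and the process, state and action names are our own assumptions.

# A process is (states, actions, transitions), with transitions ⊆ St x Act x St.
P1 = ({"s0", "s1"}, {"a", "b"}, {("s0", "a", "s1"), ("s1", "b", "s0")})
P2 = ({"t0", "t1"}, {"c"}, {("t0", "c", "t1"), ("t1", "c", "t0")})
PROCS = [P1, P2]

def available(i, gstate):
    """d(i, <q1,...,qk>): local actions of process i enabled in its local state."""
    _, _, rel = PROCS[i]
    return {a for (q, a, q2) in rel if q == gstate[i]}

def step(gstate, joint_action):
    """Synchronous product transition o(<q1,...,qk>, a1,...,ak)."""
    successors = []
    for i, (q, a) in enumerate(zip(gstate, joint_action)):
        _, _, rel = PROCS[i]
        succ = [q2 for (p, b, q2) in rel if p == q and b == a]
        if not succ:
            raise ValueError(f"action {a} not enabled for process {i} in state {q}")
        successors.append(succ[0])   # these toy processes happen to be deterministic
    return tuple(successors)

g0 = ("s0", "t0")
print(available(0, g0), available(1, g0))   # {'a'} {'c'}
print(step(g0, ("a", "c")))                 # ('s1', 't1')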
We note that the simple reactive modules (SRML) from [22] can
be seen as a particular implementation of synchronous concurrent
programs.
DEFINITION 3. A SRML system is a tuple Σ, Π, m1, . . . , mk ,
where Σ = {1, . . . , k} is a set of modules (or agents), Π is a
set of Boolean variables, and, for each i ∈ Σ, we have mi =
ctri, initi, updatei , where ctri ⊆ Π. Sets initi and updatei consist
of guarded commands of the form φ ; v1 := ψ1; . . . ; vk := ψk,
where every vj ∈ ctri, and φ, ψ1, . . . , ψk are propositional
formulae over Π. It is required that ctr1, . . . ctrk partitions Π.
The idea is that agent i controls the variables ctri. The init guarded
commands are used to initialize the controlled variables, while the
update guarded commands can change their values in each round.
A guarded command is enabled if the guard φ is true in the current
state of the system. In each round an enabled update guarded
command is executed: each ψj is evaluated against the current state of
the system, and its logical value is assigned to vj. Several guarded
commands being enabled at the same time model non-deterministic
choice. Model checking ATL for SRML has been proved
EXPTIME-complete in the size of the model and the length of the formula [22].
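To make the round-based semantics of guarded commands concrete, here is a minimal sketch (our own, not from [22]) of one synchronous SRML-style update round; guards and right-hand sides are encoded as Python predicates over the current valuation, and module_x, module_y and the variables x, y are illustrative only.

import random

# A module controls variables and owns guarded update commands of the form
# (guard, assignments); assignments map controlled variables to boolean
# formulas evaluated over the *current* global valuation.
module_x = {
    "ctr": ["x"],
    "update": [
        (lambda s: s["y"], {"x": lambda s: not s["x"]}),   # y  ;  x := not x
        (lambda s: True,  {"x": lambda s: s["x"]}),        # T  ;  x := x (skip)
    ],
}
module_y = {
    "ctr": ["y"],
    "update": [
        (lambda s: True, {"y": lambda s: not s["y"]}),     # T  ;  y := not y
    ],
}

def round_step(state, modules, rng=random):
    """One synchronous round: every module fires one enabled guarded command."""
    new_state = dict(state)
    for m in modules:
        enabled = [(g, asg) for (g, asg) in m["update"] if g(state)]
        _, assignment = rng.choice(enabled)      # non-deterministic choice
        for var, rhs in assignment.items():
            new_state[var] = rhs(state)          # right-hand sides read the old state
    return new_state

s = {"x": False, "y": True}
print(round_step(s, [module_x, module_y], random.Random(0)))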
3.6 Concurrent Epistemic Programs
Concurrent programs (both asynchronous and synchronous) can
be used to encode epistemic relations too - exactly in the same
way as interpreted systems do [20]. That is, when unfolding a
concurrent program to a model of CTLK or ATLir, we define that
q1, ..., qk ∼i q1, ..., qk iff qi = qi . Model checking CTLK
against concurrent epistemic programs is PSPACE-complete [20].
SRML can be also interpreted in the same way; then, we would
assume that every agent can see only the variables he controls.
Concurrent epistemic programs are modular and have a grounded
semantics. They are usually compact (albeit not always: for
example, an agent with perfect information will always blow up the size
of such a program). Still, they inherit all the problems of
concurrent programs with perfect information, discussed in Section 3.4:
limited interaction between components, availability of local
actions depending on the actual transition etc. The problems were
already important for agents with perfect information, but they
become even more crucial when agents have only limited knowledge
of the current situation. One of the most important applications of
logics that combine strategic and epistemic properties is
verification of communication protocols (e.g., in the context of security).
Now, we may want to, e.g., check agents' ability to pass
information between them, without letting anybody else intercept the
message. The point is that the action of intercepting is by definition
enabled; we just look for a protocol in which the transition of
successful interception is never carried out. So, availability of actions
must be independent of the actions chosen by the other agents under
incomplete information. On the other hand, interaction is arguably
the most interesting feature of multi-agent systems, and it is really
hard to imagine models of strategic-epistemic logics, in which it is
not possible to represent communication.
3.7 Reactive Modules
Reactive modules [1] can be seen as a refinement of
concurrent epistemic programs (primarily used by the MOCHA model
checker [4]), but they are much more powerful, expressive and
grounded. We have already mentioned a very limited variant of
RML (i.e., SRML). The vocabulary of RML is very close to
implementations (in terms of general computational systems): the
modules are essentially collections of variables, states are just
valuations of variables; events/actions are variable updates. However,
the sets of variables controlled by different agents can overlap,
they can change over time etc. Moreover, reactive modules
support incomplete information (through observability of variables),
although it is not the main focus of RML. Again, the relationship
between sets of observable variables (and to sets of controlled
variables) is mostly left up to the designer of a system. Agents can act
synchronously as well as asynchronously.
To sum up, RML define a powerful framework for modeling
distributed systems with various kinds of synchrony and asynchrony.
However, we believe that there is still a need for a simpler and
slightly more abstract class of representations. First, the
framework of RML is technically complicated, involving a number of
auxiliary concepts and their definitions. Second, it is not always
convenient to represent all that is going on in a multi-agent system
as reading and/or writing from/to program variables. This view
of a multi-agent system is arguably close to its computer
implementation, but usually rather distant from the real world
domain, hence the need for a more abstract, and more conceptually flexible
framework. Third, the separation of the local complexity, and the
complexity of interaction is not straightforward. Our new proposal,
more in the spirit of interpreted systems, takes these observations
as the starting point. The proposed framework is presented in
Section 4.
4. MODULAR INTERPRETED SYSTEMS
The idea behind distributed systems (multi-agent systems even
more so) is that we deal with several loosely coupled components,
where most of the processing goes on inside components (i.e.,
locally), and only a small fraction of the processing occurs between
the components. Interaction is crucial (which makes concurrent
programs an insufficient modeling tool), but it usually consumes
much less of the agent's resources than local computations (which
makes the explicit transition tables of CGS, CEGS, and interpreted
systems an overkill). Modular interpreted systems, proposed here,
extrapolate the modeling idea behind interpreted systems in a way
that allows for a tight control of the interaction complexity.
DEFINITION 4. A modular interpreted system (MIS) is defined
as a tuple
S = Agt, env, Act, In ,
where Agt = {a1, ..., ak} is a set of agents, env is the environment,
Act is a set of actions, and In is a set of symbols called interaction
alphabet. Each agent has the following internal structure:
ai = ⟨Sti, di, outi, ini, oi, Πi, πi⟩, where:
• Sti is a set of local states,
• di : Sti → P(Act) defines local availability of actions; for convenience of notation, we additionally define the set of situated actions as Di = {⟨qi, α⟩ | qi ∈ Sti, α ∈ di(qi)},
• outi, ini are interaction functions; outi : Di → In refers to the influence that a given situated action (of agent ai) may possibly have on the external world, and ini : Sti × In^k → In translates external manifestations of the other agents (and the environment) into the impression that they make on ai's transition function, depending on the local state of ai,
• oi : Di × In → Sti is a (deterministic) local transition function,
• Πi is a set of local propositions of agent ai, where we require that Πi and Πj are disjoint when i ≠ j, and
• πi : Πi → P(Sti) is a valuation of these propositions.
The environment env = Stenv, outenv, inenv, oenv, Πenv, πenv has the
same structure as an agent except that it does not perform actions,
and that thus outenv : Stenv → In and oenv : Stenv × In → Stenv.
Within our framework, we assume that every action is executed
by an actor, that is, an agent. As a consequence, every actor is
explicitly represented in a MIS as an agent, just like in the case of
CGS and CEGS. The environment, on the other hand, represents the
(passive) context of agents" actions. In practice, it serves to capture
the aspects of the global state that are not observable by any of the
agents.
The input functions ini seem to be the fragile spots here: when
given explicitly as tables, they have size exponential wrt. the
number of agents (and linear wrt. the size of In). However, we can
use, e.g., a construction similar to the one from [16] to represent
interaction functions more compactly.
DEFINITION 5. An implicit input function for state q ∈ Sti is given by a sequence ⟨ϕ1, η1⟩, ..., ⟨ϕn, ηn⟩, where each ηj ∈ In is an interaction symbol, and each ϕj is a boolean combination of propositions η̂^i, with η ∈ In; η̂^i stands for "η is the symbol currently generated by agent i". The input function is now defined as follows: ini(q, η^1, ..., η^k, η^env) = ηj iff j is the lowest index such that {η̂^1, ..., η̂^k, η̂^env} |= ϕj, where η^i denotes the symbol currently generated by agent i (and η^env the one generated by the environment). It is required that ϕn ≡ ⊤, so that the mapping is effective.
REMARK 1. Every ini can be encoded as an implicit input
function, with each ϕj being of polynomial size with respect to the
number of interaction symbols (cf. [16]).
Note that, for some domains, the MIS representation of a system
requires exponentially many symbols in the interaction alphabet In.
In such a case, the problem is inherent to the domain, and ini will
have size exponential wrt the number of agents.
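The first-match evaluation behind implicit input functions can be sketched in a few lines of Python (our own illustrative encoding, not the paper's): guards are predicates over a dictionary recording which symbol each other agent currently produces, and the agent names, symbols and the "push" scenario below are made up for illustration.

# Guards are predicates over {agent_name: symbol currently produced}; the
# implicit input function returns the symbol of the first satisfied guard.
# The final catch-all guard plays the role of the required "phi_n = true".
def make_implicit_in(clauses):
    def in_i(local_state, produced):
        for guard, symbol in clauses:
            if guard(local_state, produced):
                return symbol
        raise AssertionError("the last guard must be a catch-all")
    return in_i

in_1 = make_implicit_in([
    (lambda q, p: p["a2"] == "push" and p["env"] == "slippery", "fall"),
    (lambda q, p: p["a2"] == "push",                            "stumble"),
    (lambda q, p: True,                                         "nothing"),  # catch-all
])

print(in_1("standing", {"a2": "push", "env": "slippery"}))  # fall
print(in_1("standing", {"a2": "wave", "env": "dry"}))       # nothing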
4.1 Representing Agent Systems with MIS
Let Stg = (St1 × ... × Stk) × Stenv be the set of all possible global states generated by a modular interpreted system S.
DEFINITION 6. The unfolding of a MIS S for initial states Q ⊆
Stg to a CEGS cegs(S, Q) = Agt , St , Π , π , Act , d , o , ∼1, ..., ∼k
is defined as follows:
• Agt = {1, ..., k} and Act = Act,
• St is the set of global states from Stg which are reachable
from some state in Q via the transition relation defined by o
(below),
• Π = Π1 ∪ ... ∪ Πk ∪ Πenv,
• For each q = q1, . . . , qk, qenv ∈ St and i = 1, ..., k, env,
we define q ∈ π (p) iff p ∈ Πi and qi ∈ πi(p),
• d (i, q) = di(qi) for global state q = q1, ..., qk, qenv ,
• The transition function is constructed as follows. Let q = ⟨q1, ..., qk, qenv⟩ ∈ St', and α = ⟨α1, ..., αk⟩ be an action profile s.t. αi ∈ d'(i, q). We define inputi(q, α) = ini(qi, out1(q1, α1), ..., outi−1(qi−1, αi−1), outi+1(qi+1, αi+1), ..., outk(qk, αk), outenv(qenv)) for each agent i = 1, ..., k, and inputenv(q, α) = inenv(qenv, out1(q1, α1), ..., outk(qk, αk)). Then, o'(q, α) = ⟨o1(⟨q1, α1⟩, input1(q, α)), ..., ok(⟨qk, αk⟩, inputk(q, α)), oenv(qenv, inputenv(q, α))⟩;
• For each i = 1, ..., k: ⟨q1, ..., qk, qenv⟩ ∼'i ⟨q'1, ..., q'k, q'env⟩ iff qi = q'i (this shows another difference between the environment and the agents: the environment does not possess knowledge).
REMARK 2. Note that MISs can be used as representations of
CGSs too. In that case, epistemic relations ∼i are simply omitted in
the unfolding. We denote the unfolding of a MIS S for initial states
Q into a CGS by cgs(S, Q).
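The following small Python sketch (environment omitted for brevity) shows how one global transition of the unfolded structure is assembled from the local ingredients outi, ini and oi alone; the two agents, their interaction symbols and their local rules are entirely made up for illustration.

# Each agent i contributes out_i (how its situated action looks from outside)
# and in_i / o_i (how it digests the others' symbols).  One global step of the
# unfolded CGS is then assembled purely from these local parts.
agents = {
    "a1": {
        "out": lambda q, act: act,                       # broadcast own action
        "in":  lambda q, symbols: "busy" if "work" in symbols else "idle",
        "o":   lambda q, act, imp: q + 1 if imp == "busy" else q,
    },
    "a2": {
        "out": lambda q, act: act,
        "in":  lambda q, symbols: "ok",
        "o":   lambda q, act, imp: q,
    },
}

def global_step(state, joint_action):
    """o'(q, alpha): every agent sees the others' out-symbols, then moves locally."""
    out = {i: a["out"](state[i], joint_action[i]) for i, a in agents.items()}
    new_state = {}
    for i, a in agents.items():
        impression = a["in"](state[i], [s for j, s in out.items() if j != i])
        new_state[i] = a["o"](state[i], joint_action[i], impression)
    return new_state

print(global_step({"a1": 0, "a2": 0}, {"a1": "rest", "a2": "work"}))  # {'a1': 1, 'a2': 0}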
Propositions 3 and 5 state that modular interpreted systems can
be used as representations for explicit models of multi-agent
systems. On the other hand, these representations are not always
compact, as demonstrated by Propositions 7 and 8.
PROPOSITION 3. For every CEGS M, there is a MIS S^M and a set of global states Q of S^M such that cegs(S^M, Q) is isomorphic to M (we say that two CEGS are isomorphic if they only differ in the names of states and/or actions).
PROOF. Let M = {1, . . . , k}, St, Act, d, o, Π, π, ∼1, . . . , ∼k
be a CEGS. We construct a MIS SM
= {a1, . . . , ak}, env, Act, In
with agents ai = Sti, di, outi, ini, oi, Πi, πi and environment env =
Stenv, outenv, inenv, oenv, Πenv, πenv , plus a set Q ⊆ Stg of global
states, as follows.
• In = Act ∪ St ∪ (Act^(k−1) × St),
• Sti = {[q]∼i | q ∈ St} for 1 ≤ i ≤ k (i.e., Sti is the set of i"s
indistinguishability classes in M),
• Stenv = St,
• di([q]∼i ) = d(i, q) for 1 ≤ i ≤ k (this is well-defined since
d(i, q) = d(i, q ) whenever q ∼i q ),
• outi([q]∼i , αi) = αi for 1 ≤ i ≤ k; outenv(q) = q,
• ini([q]∼i , α1, . . . , αi−1, αi+1, . . . , αk, qenv) =
α1, . . . , αi−1, αi+1, . . . , αk, qenv for i ∈ {1, . . . , k};
inenv(q, α1 . . . , αk) = α1, . . . , αk ;
ini(x) and inenv(x) are arbitrary for other arguments x,
• oi( [q]∼i , αi , α1, . . . , αi−1, αi+1, . . . , αk, qenv ) =
[o(qenv, α1, . . . , αk)]∼i for 1 ≤ i ≤ k and αi ∈ di([q]∼i );
oenv(q, α1, . . . , αk ) = o(q, α1, . . . , αk);
oi and oenv are arbitrary for other arguments,
• Πi = ∅ for 1 ≤ i ≤ k, and Πenv = Π,
• πenv(p) = π(p)
• Q = { [q]∼1 , . . . , [q]∼k , q : q ∈ St}
Let M = cegs(SM
, Q) = Agt , St , Act , d , o , Π , π , ∼1, . . . , ∼k .
We argue that M and M are isomorphic by establishing a
oneto-one correspondence between the respective sets of states, and
showing that the other parts of the structures agree on
corresponding states.
First we show that, for any q̂' = ⟨[q']∼1, ..., [q']∼k, q'⟩ ∈ Q and any α = ⟨α1, ..., αk⟩ such that αi ∈ d'(i, q̂'), we have
o'(q̂', α) = ⟨[q]∼1, ..., [q]∼k, q⟩ where q = o(q', α)    (1)
Let q̂ = o'(q̂', α). Now, for any i: inputi(q̂', α) = ini([q']∼i, out1([q']∼1, α1), ..., outi−1([q']∼i−1, αi−1), outi+1([q']∼i+1, αi+1), ..., outk([q']∼k, αk), outenv(q')) = ini([q']∼i, α1, ..., αi−1, αi+1, ..., αk, q') = ⟨α1, ..., αi−1, αi+1, ..., αk, q'⟩. Similarly, we get that inputenv(q̂', α) = ⟨α1, ..., αk⟩. Thus we get that o'(q̂', α) = ⟨o1(⟨[q']∼1, α1⟩, input1(q̂', α)), ..., ok(⟨[q']∼k, αk⟩, inputk(q̂', α)), oenv(q', inputenv(q̂', α))⟩ = ⟨[o(q', α1, ..., αk)]∼1, ..., [o(q', α1, ..., αk)]∼k, o(q', α1, ..., αk)⟩. Thus, q̂ = ⟨[q]∼1, ..., [q]∼k, q⟩ for q = o(q', α1, ..., αk), which completes the proof of (1).
We now argue that St' = Q. Clearly, Q ⊆ St'. Let q̂ ∈ St'; we must show that q̂ ∈ Q. The argument is by induction on the length of the least o' path from Q to q̂. The base case, q̂ ∈ Q, is immediate. For the inductive step, q̂ = o'(q̂', α) for some q̂' ∈ Q, and then we have that q̂ ∈ Q by (1). Thus, St' = Q.
Now we have a one-to-one correspondence between St and St :
r ∈ St corresponds to [r]∼1 , . . . , [r]∼k , r ∈ St . It remains to
be shown that the other parts of the structures M and M agree on
corresponding states:
• Agt = Agt,
• Act = Act,
• Π =
Sk
i=1 Πi ∪ Πenv = Π,
• For p ∈ Π = Π: [q ]∼1 , . . . , [q ]∼k , q ∈ π (p) iff q ∈
πenv(p) iff q ∈ π(p) (same valuations at corresponding states),
• d (i, [q ]∼1 , . . . , [q ]∼k , q ) = di([q ]∼i ) = d(i, q),
• It follows immediately from (1), and the fact that Q = St ,
that o ( [q ]∼1 , . . . , [q ]∼k , q , α) = [r ]∼1 , . . . , [r ]∼k , r
iff o(q , α) = r (transitions on the same joint action in
corresponding states lead to corresponding states),
• [q ]∼1 , . . . , [q ]∼k , q ∼i [r ]∼1 , . . . , [r ]∼k , r iff [q ]∼i =
[r ]∼i iff q ∼i r (the accessibility relations relate
corresponding states), which completes the proof.
COROLLARY 4. For every CEGS M, there is an ATLir-equivalent
MIS S with initial states Q, that is, for every state q in M there is
a state q in cegs(S, Q) satisfying exactly the same ATLir formulae,
and vice versa.
PROPOSITION 5. For every CGS M, there is a MIS S^M and a set of global states Q of S^M such that cgs(S^M, Q) is isomorphic to M.
PROOF. Let M = ⟨Agt, St, Act, d, o, Π, π⟩ be given. Now, let M̂ = ⟨Agt, St, Act, d, o, Π, π, ∼1, ..., ∼k⟩ for some arbitrary accessibility relations ∼i over St. By Proposition 3, there exists a MIS S^M̂ with global states Q such that M̂' = cegs(S^M̂, Q) is isomorphic to M̂. Let M' be the CGS obtained by removing the accessibility relations from M̂'. Clearly, M' is isomorphic to M.
COROLLARY 6. For every CGS M, there is an ATL-equivalent
MIS S with initial states Q. That is, for every state q in M there is a
state q in cgs(S, Q) satisfying exactly the same ATL formulae, and
vice versa.
PROPOSITION 7. The local state spaces in a MIS are not
always compact with respect to the underlying concurrent epistemic
game structure.
PROOF. Take a CEGS M in which agent i has always perfect
information about the current global state of the system. When
constructing a modular interpreted system S such that M = cegs(S, Q),
we have that Sti must be isomorphic with St.
The above property is a part of the interpreted systems heritage.
The next proposition stems from the fact that explicit models (and
interpreted systems) allow for intensive interaction between agents.
PROPOSITION 8. The size of In in S is, in general, exponential
with respect to the number of local states and local actions. This is
the case even when epistemic relations are not relevant (i.e., when
S is taken as a representation of an ordinary CGS).
PROOF. Consider a CGS M with agents Agt = {1, ..., k}, global states St = {q^1_0, ..., q^1_1} × ... × {q^k_0, ..., q^k_k}, and actions Act = {0, 1}, all enabled everywhere. The transition function is defined as o(⟨q^1_j1, ..., q^k_jk⟩, α1, ..., αk) = ⟨q^1_l1, ..., q^k_lk⟩, where li = (ji + α1 + ... + αk) mod i. Note that M can be represented as a modular interpreted system with succinct local state spaces Sti = {q^i_0, ..., q^i_i}.
Still, the current actions of all agents are relevant to determine the
resulting local transition of agent i.
We will call items In, outi, ini the interaction layer of a
modular interpreted system S; the other elements of S constitute the local
layer of the MIS. In this paper we are ultimately interested in model
checking complexity with respect to the size of the local layer. To
this end, we will assume that the size of interaction layer is
polynomial in the number of local states and actions. Note that, by
Propositions 7 and 8, not every explicit model submits to compact
representation with a MIS. Still, as we declared at the beginning of
Section 4, we are mainly interested in a modeling framework for
systems of loosely coupled components, where interaction is
essential, but most processing is done locally anyway. More
importantly, the framework of MIS allows for separating the interaction
of agents from their local structure to a larger extent. Moreover, we
can control and measure the complexity of each layer in a finer way
than before. First, we can try to abstract from the complexity of a
layer (e.g. like in this paper, by assuming that the other layer is kept
within certain complexity bounds). Second, we can also measure
separately the interaction complexity of different agents.
4.2 Modular Interpreted Systems vs. Simple
Reactive Modules
In this section we show that simple reactive modules are (as we
already suggested) a specific (and somewhat limited)
implementation of modular interpreted systems. First, we define our (quite
strong) notion of equivalence of representations.
DEFINITION 7. Two representations are equivalent if they
unfold to isomorphic concurrent epistemic game structures. They are
CGS-equivalent if they unfold to the same CGS.
PROPOSITION 9. For any SRML there is a CGS-equivalent MIS.
PROOF. Consider an SRML R with k modules and n variables. We construct S = ⟨Agt, Act, In⟩ with Agt = {a1, ..., ak}, Act = {⊤1, ..., ⊤n, ⊥1, ..., ⊥n}, and In = (St1 × St1) ∪ ... ∪ (Stk × Stk) (the local state spaces Sti will be defined in a moment). Let us assume without loss of generality that ctri = {x1, ..., xr}. Also, we consider all guarded commands of i to be of type γ⊤(i,ψ) : ψ ; xi := ⊤, or γ⊥(i,ψ) : ψ ; xi := ⊥. Now, agent ai in S has the following components: Sti = P(ctri) (i.e., local states of ai are valuations of the variables controlled by i); di(qi) = {⊤1, ..., ⊤r, ⊥1, ..., ⊥r}; outi(qi, α) = ⟨qi, qi⟩; ini(qi, ⟨q1, q1⟩, ..., ⟨qi−1, qi−1⟩, ⟨qi+1, qi+1⟩, ..., ⟨qk, qk⟩) = ⟨{xi ∈ ctri | q1, ..., qk |= ⋁_γ⊤(i,ψ) ψ}, {xi ∈ ctri | q1, ..., qk |= ⋁_γ⊥(i,ψ) ψ}⟩. To define local transitions, we consider three cases. If t = f = ∅ (no update is enabled), then oi(qi, α, ⟨t, f⟩) = qi for every action α. If t ≠ ∅, we take an arbitrary x̂ ∈ t, and define oi(qi, ⊤j, ⟨t, f⟩) = qi ∪ {xj} if xj ∈ t, and qi ∪ {x̂} otherwise; oi(qi, ⊥j, ⟨t, f⟩) = qi \ {xj} if xj ∈ f, and qi ∪ {x̂} otherwise. Moreover, if t = ∅ ≠ f, we take an arbitrary x̂ ∈ f, and define oi(qi, ⊤j, ⟨t, f⟩) = qi ∪ {xj} if xj ∈ t, and qi \ {x̂} otherwise; oi(qi, ⊥j, ⟨t, f⟩) = qi \ {xj} if xj ∈ f, and qi \ {x̂} otherwise. Finally, Πi = ctri, and qi ∈ πi(xj) iff xj ∈ qi.
The above construction shows that SRML have a more compact representation of states than MIS: ri local variables of agent i give rise to 2^ri local states. In a way, reactive modules (both simple and full) are two-level representations: first, the system is represented as a product of modules; next, each module can be seen as a product of its variables (together with their update operations). Note, however, that the specification of updates with respect to a single variable in an SRML may require guarded commands of total length O(2^(r1+...+rk)). Thus, the representation of transitions in SRML is (in the worst case) no more compact than in MIS, despite the two-level structure of SRML. We observe finally that MIS are more general, because in SRML the current actions of other agents have no influence on the outcome of agent i's current action (although the outcome can be influenced by other agents' current local states).
4.3 Model Checking Modular Interpreted
Systems
One of our main aims was to study the complexity of symbolic
model checking ATLir in a meaningful way. Following the
reviewers' remarks, we state our complexity results only as conjectures.
Preliminary proofs can be found in [14].
CONJECTURE 10. Model checking ATL for modular interpreted
systems is EXPTIME-complete.
CONJECTURE 11. Model checking ATLir for the class of
modular interpreted systems is PSPACE-complete.
A summary of complexity results for model checking
temporal and strategic logics (with and without epistemic component)
is given in the table below. The table presents completeness
results for various models and settings of input parameters. Symbols
n, k, m stand for the number of states, agents and transitions in an
explicit model; l is the length of the formula, and nlocal is the
number of local states in a concurrent program or modular interpreted
system. The new results, conjectured in this paper, are printed in
italics. Note that the result for model checking ATL against modular
interpreted systems is an extension of the result from [22].
          | m, l            | n, k, l          | nlocal, k, l
CTL       | P [5]           | P [5]            | PSPACE [15]
CTLK      | P [5, 8]        | P [5, 8]         | PSPACE [20]
ATL       | P [3]           | Δ^P_3 [12, 16]   | EXPTIME
ATLir     | Δ^P_2 [21, 13]  | Δ^P_3 [13]       | PSPACE
If we are right, then the results for ATL and ATLir form an
intriguing pattern. When we compare model checking agents with
perfect vs. imperfect information, the first problem appears to be
much easier against explicit models measured with the number of
transitions; next, we get the same complexity class against explicit
models measured with the number of states and agents; finally,
model checking imperfect information turns out to be easier than
model checking perfect information for modular interpreted
systems. Why can it be so?
First, a MIS unfolds into CEGS and CGS in a different way. In
the first case, the MIS is assumed to encode the epistemic relations
explicitly (which makes it explode when we model agents with
perfect, or almost perfect information). In the latter case, the epistemic
aspect is ignored, which gives some extra room for encoding the
transition relation more efficiently. Another crucial factor is the
number of available strategies (relative to the size of input
parameters). The number of all strategies is exponential in the number
of global states; for uniform strategies, there are usually far fewer
of them but still exponentially many in general. Thus, the fact that
perfect information strategies can be synthesized incrementally has
a substantial impact on the complexity of the problem. However,
measured in terms of local states and agents, the number of all
strategies is doubly exponential, while there are only
exponentially many uniform strategies - which settles the results in favor of
imperfect information.
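The counting argument can be checked with a small arithmetic sketch (our own illustrative numbers, assuming memoryless strategies, k agents with n_local local states each and a fixed number of actions).

# A perfect-information strategy maps global states to actions; a uniform
# (imperfect-information) strategy only maps local states to actions.
n_local, k, n_actions = 4, 3, 2

global_states = n_local ** k                            # 64
perfect_info_strategies = n_actions ** global_states    # 2**64: doubly exponential
uniform_strategies = n_actions ** n_local               # 2**4 = 16: singly exponential

print(global_states, perfect_info_strategies, uniform_strategies)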
5. CONCLUSIONS
We have presented a new class of representations for open
multiagent systems. Our representations, called modular interpreted
systems, are: modular, in the sense that components can be changed,
replaced, removed or added, with as little changes to the whole
representation as possible; more compact than traditional explicit
representations; and grounded, in the sense that the correspondences
between the primitives of the model and the entities being
modeled are more immediate, giving a methodology for designing and
implementing systems. We also conjecture that the complexity of
model checking strategic ability for our representations is higher if
we assume perfect information than if we assume imperfect
information.
The solutions, proposed in this paper, are not necessarily
perfect (for example, the impression functions ini seem to be the
main source of non-modularity in MIS, and can be perhaps
improved), but we believe them to be a step in the right direction.
We also do not mean to claim that our representations should
replace more elaborate modeling languages like Promela or reactive
modules. We only suggest that there is a need for compact, modular
and reasonably grounded models that are more expressive than
concurrent (epistemic) programs, and still allow for easier theoretical
analysis than reactive modules. We also suggest that MIS might be
better suited for modeling simple multi-agent domains, especially
for human-oriented (as opposed to computer-oriented) design.
6. ACKNOWLEDGMENTS
We thank the anonymous reviewers and Andrzej Tarlecki for
their helpful remarks. Thomas Ågotnes' work on this paper was
supported by grants 166525/V30 and 176853/S10 from the
Research Council of Norway.
7. REFERENCES
[1] R. Alur and T. A. Henzinger. Reactive modules. Formal
Methods in System Design, 15(1):7-48, 1999.
[2] R. Alur, T. A. Henzinger, and O. Kupferman.
Alternating-time Temporal Logic. Lecture Notes in
Computer Science, 1536:23-60, 1998.
[3] R. Alur, T. A. Henzinger, and O. Kupferman.
Alternating-time Temporal Logic. Journal of the ACM,
49:672-713, 2002.
[4] R. Alur, T.A. Henzinger, F.Y.C. Mang, S. Qadeer, S.K.
Rajamani, and S. Tasiran. MOCHA user manual. In
Proceedings of CAV'98, volume 1427 of Lecture Notes in
Computer Science, pages 521-525, 1998.
[5] E.M. Clarke, E.A. Emerson, and A.P. Sistla. Automatic
verification of finite-state concurrent systems using temporal
logic specifications. ACM Transactions on Programming
Languages and Systems, 8(2):244-263, 1986.
[6] E.A. Emerson and J.Y. Halpern. "sometimes" and "not never"
revisited: On branching versus linear time temporal logic. In
Proceedings of the Annual ACM Symposium on Principles of
Programming Languages, pages 151-178, 1982.
[7] R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi.
Reasoning about Knowledge. MIT Press: Cambridge, MA,
1995.
[8] M. Franceschet, A. Montanari, and M. de Rijke. Model
checking for combined logics. In Proceedings of the 3rd
International Conference on Temporal Logic (ICTL), 2000.
[9] V. Goranko and W. Jamroga. Comparing semantics of logics
for multi-agent systems. Synthese, 139(2):241-280, 2004.
[10] J. Y. Halpern. Reasoning about knowledge: a survey. In
D. M. Gabbay, C. J. Hogger, and J. A. Robinson, editors, The
Handbook of Logic in Artificial Intelligence and Logic
Programming, Volume IV, pages 1-34. Oxford University
Press, 1995.
[11] J.Y. Halpern and R. Fagin. Modelling knowledge and action
in distributed systems. Distributed Computing,
3(4):159-177, 1989.
[12] W. Jamroga and J. Dix. Do agents make model checking
explode (computationally)? In M. Pěchouček, P. Petta, and
L.Z. Varga, editors, Proceedings of CEEMAS 2005, volume
3690 of Lecture Notes in Computer Science, pages 398-407.
Springer Verlag, 2005.
[13] W. Jamroga and J. Dix. Model checking abilities of agents:
A closer look. Submitted, 2006.
[14] W. Jamroga and T. Ågotnes. Modular interpreted systems: A
preliminary report. Technical Report IfI-06-15, Clausthal
University of Technology, 2006.
[15] O. Kupferman, M.Y. Vardi, and P. Wolper. An
automata-theoretic approach to branching-time model
checking. Journal of the ACM, 47(2):312-360, 2000.
[16] F. Laroussinie, N. Markey, and G. Oreiby. Expressiveness
and complexity of ATL. Technical Report LSV-06-03, CNRS
& ENS Cachan, France, 2006.
[17] K.L. McMillan. Symbolic Model Checking: An Approach to
the State Explosion Problem. Kluwer Academic Publishers,
1993.
[18] K.L. McMillan. Applying SAT methods in unbounded
symbolic model checking. In Proceedings of CAV'02,
volume 2404 of Lecture Notes in Computer Science, pages
250-264, 2002.
[19] W. Penczek and A. Lomuscio. Verifying epistemic properties
of multi-agent systems via bounded model checking. In
Proceedings of AAMAS'03, pages 209-216, New York, NY,
USA, 2003. ACM Press.
[20] F. Raimondi and A. Lomuscio. The complexity of symbolic
model checking temporal-epistemic logics. In L. Czaja,
editor, Proceedings of CS&P'05, 2005.
[21] P. Y. Schobbens. Alternating-time logic with imperfect
recall. Electronic Notes in Theoretical Computer Science,
85(2), 2004.
[22] W. van der Hoek, A. Lomuscio, and M. Wooldridge. On the
complexity of practical ATL model checking. In P. Stone and
G. Weiss, editors, Proceedings of AAMAS'06, pages
201-208, 2006.
904 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) | temporal and strategic logic;synchronous concurrent program;reactive module;model methodology;multi-agent system;higher level representation language;kripke structure;modeling methodology;open computational system;branching time;alternating-time temporal logic;model checking;computation tree logic ctl;model check;modular interpreted system |
train_I-47 | Operational Semantics of Multiagent Interactions | The social stance advocated by institutional frameworks and most multi-agent system methodologies has resulted in a wide spectrum of organizational and communicative abstractions which have found currency in several programming frameworks and software platforms. Still, these tools and frameworks are designed to support a limited range of interaction capabilities that constrain developers to a fixed set of particular, pre-defined abstractions. The main hypothesis motivating this paper is that the variety of multi-agent interaction mechanisms - both, organizational and communicative, share a common semantic core. In the realm of software architectures, the paper proposes a connector-based model of multi-agent interactions which attempts to identify the essential structure underlying multi-agent interactions. Furthermore, the paper also provides this model with a formal execution semantics which describes the dynamics of social interactions. The proposed model is intended as the abstract machine of an organizational programming language which allows programmers to accommodate an open set of interaction mechanisms. | 1. INTRODUCTION
The suitability of agent-based computing to manage the
complex patterns of interactions naturally occurring in the
development of large scale, open systems, has become one
of its major assets over the last few years [26, 24, 15].
Particularly, the organizational or social stance advocated by
institutional frameworks [2] and most multi-agent system
(MAS) methodologies [26, 10], provides an excellent basis to
deal with the complexity and dynamism of the interactions
among system components. This approach has resulted in
a wide spectrum of organizational and communicative
abstractions, such as institutions, normative positions, power
relationships, organizations, groups, scenes, dialogue games,
communicative actions (CAs), etc., to effectively model the
interaction space of MAS. This wealth of computational
abstractions has found currency in several programming
frameworks and software platforms (AMELI [9], MadKit [13],
INGENIAS toolkit [18], etc.), which leverage multi-agent
middlewares built upon raw ACL-based interaction mechanism
[14], and minimize the gap between organizational
metamodels and target implementation languages.
Still, these tools and frameworks are designed to support
a limited range of interaction capabilities that constrain
developers to a fixed set of particular, pre-defined abstractions.
The main hypothesis motivating this paper is that the
variety of multi-agent interaction mechanisms - both,
organizational and communicative, share a common semantic core.
This paper thus focuses on the fundamental building blocks
of multi-agent interactions: those which may be composed,
extended or refined in order to define more complex
organizational or communicative types of interactions.
Its first goal is to carry out a principled analysis of
multi-agent interactions, departing from general features commonly
ascribed to agent-based computing: autonomy, situatedness
and sociality [26]. To approach this issue, we draw on the
notion of connector, put forward within the field of software
architectures [1, 17]. The outcome of this analysis will be a
connector-based model of multi-agent interactions between
autonomous social and situated components, i.e. agents,
attempting to identify their essential structure. Furthermore,
the paper also provides this model with a formal
execution semantics which describes the dynamics of multi-agent
(or social) interactions. Structural Operational Semantics
(SOS)[21], a common technique to specify the operational
semantics of programming languages, is used for this
purpose.
The paper is structured as follows: first, the major entities
and relationships which constitute the structure of social
interactions are introduced. Next, the dynamics of social
interactions will show how these entities and relationships
evolve. Last, relevant work in the literature is discussed
with respect to the proposal, limitations are addressed, and
current and future work is described.
2. SOCIAL INTERACTION STRUCTURE
From an architectural point of view, interactions between
software components are embodied in software connectors:
first-class entities defined on the basis of the different roles
played by software components and the protocols that
regulate their behaviour [1]. The roles of a connector represent
its participants, such as the caller and callee roles of an
RPC connector, or the sender and receiver roles in a
message passing connector. The attachment operation binds a
component to the role of a given connector.
The analysis of social interactions introduced in this
section gives rise to a new kind of social connector. It refines the
generic model in several respects, attending to the features
commonly ascribed to agent-based computing:
• According to the autonomy feature, we may
distinguish a first kind of participant (i.e. role) in a
social interaction, so-called agents. Basically, agents are
those software components which will be regarded as
autonomous within the scope of the interaction. (Note that we think of the autonomy feature in a relative, rather than absolute, perspective. Basically, this means that software components counting as agents in a social interaction may behave non-autonomously in other contexts, e.g. in their interactions through human-user interfaces. This conceptualization of agenthood resembles the way in which objects are understood in CORBA: as any kind of software component (C, Prolog, Cobol, etc.) attached to an ORB.)
• A second group of participants, so-called
environmental resources, may be identified from the situatedness
feature. Unlike agents, resources represent those
non-autonomous components whose state may be
externally controlled by other components (agents or
resources) within the interaction. Moreover, the
participation of resources in an interaction is not
mandatory.
• Last, according to the sociality of agents, the
specification of social connector protocols - the glue linking
agents among themselves and with resources, will rely
on normative concepts such as permissions, obligations
and empowerments [23].
Besides agents, resources and social protocols, two other
kinds of entities are of major relevance in our analysis of
social interactions: actions, which represent the way in which
agents alter the environmental and social state of the
interaction; and events, which represent the changes in the
interaction resulting from the performance of actions or the
activity of environmental resources.
In the following, we describe the basic entities involved in
social interactions. Each kind of entity T will be specified as
a record type T ⟨l1 : T1, ..., ln : Tn⟩, possibly followed by a number of invariants, definitions, and the actions affecting their state. Instances or values v of a record type T will be represented as v = ⟨v1, ..., vn⟩ : T. The type Set T represents a collection of values drawn from type T. The type Queue T represents a queue of values v : T waiting to be processed. The value v in the expression [v| ] : Queue[T]
represents the head of the queue. The type Enum {v1, . . . , vn}
represents an enumeration type whose values are v1, . . . ,
vn. Given some value v : T, the term v^l refers to the value of the field l of a record type T. Given some labels l1, l2, ..., the expression v^(l1,l2,...) is syntactic sugar for ((v^l1)^l2).... The special term nil will be used to represent the absence of a proper value for an optional field, so that v^l = nil will be true in those cases and false otherwise. The formal model
will be illustrated with several examples drawn from the
design of a virtual organization to aid in the management of
university courses.
2.1 Social Interactions
Social interactions shall be considered as composite
connectors [17], structured in terms of a tree of nested
sub-interactions. Let's consider an interaction representing a
university course (e.g. on data structures). On the one
hand, this interaction is actually a complex one, made up
of lower-level interactions. For instance, within the scope
of the course, agents will participate in programming
assignment groups, lectures, tutoring meetings, examinations and
so on. Assignment groups, in turn, may hold a number of
assignment submissions and test requests interactions. A
test request may also be regarded as a complex interaction,
ultimately decomposed in the atomic, or bottom-level
interactions represented by communicative actions (e.g.
request, agree, refuse, . . . ). On the other hand, courses are
run within the scope of a particular degree (e.g. computer
science), a higher-level interaction. Traversing upwards from
a degree to its ancestors, we find its faculty, the university
and, finally, the multi-agent community or agent society.
The community is thus the top-level interaction which
subsumes any other kind of multi-agent interaction. (In the context of this application, a one-to-one mapping between human users and software components attached to the community as agents would be a right choice.)
The organizational and communicative interaction types
identified above clearly differ in many ways. However, we
may identify four major components in all of them: the
participating agents, the resources that agents manipulate,
the protocol regulating the agent activities and the
subinteraction space. Accordingly, we may specify the type I
of social interactions, ranged over by the meta-variable i, as
follows:
I ⟨state : SI, ini : A, mem : Set A, env : Set R, sub : Set I, prot : P, ch : CH⟩
def. : (1) i^context = i1 ⇔ i ∈ i1^sub
inv. : (2) i^ini = nil ⇔ i^context = nil
act. : setUp, join, create, destroy
where the member and environment fields represent the
agents (A) and local resources (R) participating in the
interaction; the sub-interaction field, its set of inner interactions;
and the protocol field the rules that govern the interaction
(P). The event channel, to be described in the next
section, allows the dispatching of local events to external
interactions. The context of some interaction is defined as its
super-interaction (def. 1), so that the context of the
top-level interaction is nil.
The type SI Enum {open, closing, closed} represents
the possible execution states of the interaction. Any
interaction, but the top-level one, is set up within the context of
another interaction by an initiator agent. The initiator is
thus a mandatory feature for any interaction different to the
community (inv. 2). The life-cycle of the interaction begins
in the open state. Its sets of agent and resource participants,
initially empty, vary as agents join and leave the interaction,
and as they create and destroy resources from its local
environment. Eventually, the interaction may come to an end
(according to the protocol's rules), or be explicitly closed by
some agent, thus prematurely disabling the activity of its
participants. The transient closing state will be described
in the next section.
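As a rough illustration of how the interaction record could be rendered in code, here is a minimal Python sketch; the field names follow the paper, while the dataclass encoding, the defaults and the set_up helper are our own assumptions rather than the paper's operational definitions.

from dataclasses import dataclass, field
from enum import Enum
from typing import Optional, Set

class SI(Enum):
    OPEN = 1
    CLOSING = 2
    CLOSED = 3

@dataclass(eq=False)
class Interaction:
    state: SI = SI.OPEN
    ini: Optional["Agent"] = None            # nil only for the top-level community
    mem: Set["Agent"] = field(default_factory=set)
    env: Set["Resource"] = field(default_factory=set)
    sub: Set["Interaction"] = field(default_factory=set)
    prot: Optional["Protocol"] = None
    context: Optional["Interaction"] = None  # derived: the super-interaction

    def set_up(self, initiator, sub_interaction):
        """Constructive effect of a set-up action: nest a fresh sub-interaction."""
        sub_interaction.ini = initiator
        sub_interaction.context = self
        self.sub.add(sub_interaction)

community = Interaction()       # the top-level community: ini and context stay None
course = Interaction()
community.set_up("lecturer-agent", course)
print(course.context is community, len(community.sub))   # True 1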
2.2 Agents
Components attach themselves as agents in social
interactions with the purpose of achieving something. The purpose
declared by some agent when it joins an interaction shall be
regarded as the institutional goal that it purports to satisfy
within that context3
. The types of agents participating in
a given interaction are primarily identified from their
purposes. For instance, students are those agents participating
in a course who purport to obtain a certificate in the course"s
subject. Other members of the course include lecturers and
teaching assistants.
The type A of agents, ranged over by meta-variable a, is
defined as follows:
A ⟨state : SA, player : A, purp : F, att : Queue ACT, ev : Queue E, obl : Set O⟩
def. : (3) a^context = i ⇔ a ∈ i^mem
(4) a1 ∈ a^roles ⇔ a1^player = a
(5) i ∈ a^partIn ⇔ a1 ∈ i^mem ∧ a1 ∈ a^roles
act. : see
where the purpose is represented as a well-formed boolean
formula, of a generic type F, which evaluates to true if the
purpose is satisfied and false otherwise. The context of some
agent is defined as the interaction in which it participates
(def. 3).
The type SA Enum {playing, leaving, succ, unsuc}
represents the execution state of the agent. Its life-cycle
begins in the playing state when its player agent joins the
interaction, or some software component is attached as an agent
to the multi-agent system (in this latter case, the player
value is nil). The derived roles and partIn features
represent the roles played by the agent and the contexts in which
these roles are played (def. 4, 5; free variables in the antecedents/consequents of implications shall be understood as universally/existentially quantified). An agent may play roles
at interactions within or outside the scope of its context. For
instance, students of a course are played by student agents
belonging to the (undergraduate) degree, whereas lecturers
may be played by teachers of a given department and the
assistant role may be played by students of a Ph.D degree
(both, the department and the Ph.D. degrees, are modelled
as sub-interactions of the faculty).
Components will normally attempt to perform different
actions (e.g. to set up sub-interactions) in order to satisfy
their purposes within some interaction. Moreover,
components need to be aware of the current state of the interaction,
so that they will also be capable of observing certain events
from the interaction. Both the visibility of the interaction and the attempts of members are subject to the rules
governing the interaction. The attempts and events fields of
the agent structure represent the queues of attempts to
execute some actions (ACT ), and the events (E) received by
the agent which have not been observed yet. An agent may
update its event queue by seeing the state of some entity
of the community. The last field of the structure represents
the obligations (O) of agents, to be described later.
Eventually, the participation of some agent in the
interaction will be over. This may either happen when certain
conditions are met (specified by the protocol rules), or when
the agent takes the explicit decision of leaving the
interaction. In either case, the final state of the agent will be
successful if its purpose was satisfied; unsuccessful
otherwise. The transient leaving state will be described in the
next section.
2.3 Resources
Resources are software components which may represent
different types of non-autonomous informational or
computational entities. For instance, objectives, topics,
assignments, grades and exams are different kinds of informational
resources created by lecturers and assistants in the context
of the course interaction. Students may also create programs
to satisfy the requirements of some assignment. Other types
of computational resources put at the disposal of students
by teachers include compilers and interpreters.
The type R of resources, ranged over by meta-variable r,
can be specified by the following record type:
R ⟨cr : A, owners : Set A, op : Set OP⟩
def. : (6) r^context = i ⇔ r ∈ i^env
act. : take, share, give, invoke
Essentially, resources can be regarded as objects deployed
in a social setting. This means that resources are created,
accessed and manipulated by agents in a social interaction
context (def. 6), according to the rules specified by its
protocol. The mandatory feature creator represents the agent
who created this resource. Moreover, resources may have
owners. The ownership relationship between members and
resources is considered as a normative device aimed at the
simplification of the protocol"s rules that govern the
interaction of agents and the environment. Members may gain
ownership of some resource by taking it, and grant
ownership to other agents by giving or sharing their own
properties. For instance, the ownership of programs may be shared
by several students if the assignment can be performed by
groups of two or more students.
The last operations feature represents the interface of the
resource, consisting of a set of operations. A resource is
structured around several public operations that
participants may invoke, in accordance to the rules specified by the
interaction"s protocol. The set of operations of a resource
makes up its interface.
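The ownership bookkeeping behind take, share and give can be sketched as follows; this is our guess at the intended semantics (share adds a co-owner, give transfers ownership), and the creator/owner names are illustrative.

class Resource:
    """A non-autonomous component created and owned by agents (cf. type R)."""
    def __init__(self, creator):
        self.cr = creator
        self.owners = {creator}
        self.op = {}                 # public operations: name -> callable

    def take(self, agent):
        self.owners.add(agent)       # the agent gains ownership

    def share(self, owner, other):
        if owner in self.owners:     # an owner grants co-ownership
            self.owners.add(other)

    def give(self, owner, other):
        if owner in self.owners:     # ownership is transferred, not copied
            self.owners.discard(owner)
            self.owners.add(other)

    def invoke(self, name, *args):
        return self.op[name](*args)

program = Resource(creator="student-1")
program.share("student-1", "student-2")      # joint assignment, shared program
print(sorted(program.owners))                # ['student-1', 'student-2']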
2.4 Protocols
The protocol of any interaction is made up of the rules
which govern its overall state and dynamics. The present
specification abstracts away the particular formalism used to
specify these rules, and focuses instead on several
requirements concerning the structure and interface of protocols.
Accordingly, the type P of protocols, ranged over by meta-variable p, is defined as follows (the formalization assumes that the protocol's functions implicitly receive as input the interaction being regulated):
P ⟨emp : A × ACT → Boolean,
perm : A × ACT → Boolean,
obl : → Set (A × Set O × Set E),
monitor : E → Set A,
finish : → Boolean,
over : A → Boolean⟩
def. : (7) p^context = i ⇔ p = i^prot
inv. : (8) p^finish() ∧ s ∈ p^(context,sub) ⇒ s^(prot,finish)()
(9) p^finish() ∧ a ∈ p^(context,mem) ⇒ p^over(a)
(10) p^over(a) ∧ ai ∈ a^roles ⇒ ai^(context,prot,over)(ai)
(11) α^add ∪ {a} ⊆ p^monitor(⟨a, α, _⟩)
act. : Close, Leave
We demand from protocols four major kinds of functions.
Firstly, protocols shall include rules to identify the
empowerments and permissions of any agent attempting to alter
the state of the interaction (e.g. its members, the
environment, etc.) through the execution of some action (e.g. join,
create, etc.). Empowerments shall be regarded as the
institutional capabilities which some agent possesses in order
to satisfy its purpose. Corresponding rules, encapsulated
by the empowered function field, shall allow to determine
whether some agent is capable to perform a given action
over the interaction. Empowerments may only be exercised
under certain circumstances - that permissions specify.
Permission rules shall allow to determine whether the attempt
of an empowered agent to perform some particular action
is satisfied or not (cf. permitted field). For instance, the
course's protocol specifies that the agents empowered to join the interaction as students are those students of the degree who have paid the fee established for the course's subject, and own the certificates corresponding to its prerequisite subjects. Permission rules, in turn, specify that those students may only join the course in the admission stage. Hence, even if some student has paid the fee, the attempt to join the course will fail if the course has not entered the corresponding stage. (The hasPaidFee relationship between (degree) students and subject resources is represented by an additional, application-dependent field of the agent structure for this kind of role; similarly, the admission stage is an additional boolean field of the structure for school interactions. The generic types I, A, R and P are thus extendable.)
Secondly, protocols shall allow to determine the
obligations of agents towards the interaction. Obligations
represent a normative device of social enforcement, fully
compatible with the autonomy of agents, used to bias their
behaviour in a certain direction. These kinds of rules shall
allow to determine whether some agent must perform an
action of a given type, as well as if some obligation was fulfilled,
violated or needs to be revoked. The function obligations of
the protocol structure thus identifies the agents whose
obligation set must be updated. Moreover, it returns for each
agent a collection of events representing the changes in the
obligation set. For instance, the course's protocol establishes that members of departments must join the course as teachers whenever they are assigned to the course's subject.
Thirdly, the protocol shall allow to specify monitoring
rules for the different events originating within the
interaction. Corresponding rules shall establish the set of agents
that must be aware of some event. For instance, this functionality is exploited by teachers in order to monitor the
enrollment of students to the course.
Last, the protocol shall allow to control the state of the
interaction as well as the states of its members. Corresponding
rules identify the conditions under which some interaction
will be automatically finished, and whether the participation
of some member agent will be automatically over. Thus, the
function field finish returns true if the regulated interaction
must finish its execution. If so happens, a well-defined set of
protocols must ensure that its sub-interactions and members
are finished as well (inv. 8,9). Similarly, the function over
returns true if the participation of the specified member must
be over. Well-formed protocols must ensure the consistency
between these functions across playing roles (inv. 10). (The close and leave actions update the finish and over function fields as explained in the next section; additional actions, such as permit, forbid, empower, etc., to update other protocol fields are yet to be identified in future work.) For instance, the course's protocol establishes that the participation of students is over when they gain ownership of the course's certificate or the chances to get it are exhausted.
It also establishes that the course must be finished when
the admission stage has passed and all the students finished
their participation.
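Since the protocol record P is essentially an interface of six rule functions, a minimal Python rendering may help; the CourseProtocol class below and its toy course rules (admission_stage, passed_prerequisites, etc.) are purely illustrative assumptions, not the paper's formalization.

class CourseProtocol:
    """Toy stand-in for the protocol record P of a course interaction.

    Agents, actions and the regulated interaction are plain dicts here; the
    six methods mirror the emp/perm/obl/monitor/finish/over functions.
    """
    def __init__(self, interaction):
        self.i = interaction

    def emp(self, agent, action):       # empowerment: institutional capability
        return action["kind"] != "join" or agent.get("passed_prerequisites", False)

    def perm(self, agent, action):      # permission: capability exercisable now
        return action["kind"] != "join" or self.i.get("admission_stage", False)

    def obl(self):                      # pending obligation updates (none here)
        return []

    def monitor(self, event):           # who must be made aware of the event
        return set(self.i.get("teachers", []))

    def finish(self):                   # end once admission passed and no students left
        return not self.i.get("admission_stage") and not self.i.get("students")

    def over(self, agent):              # member done once it owns the certificate
        return agent.get("owns_certificate", False)

p = CourseProtocol({"admission_stage": True, "teachers": ["t1"], "students": ["s1"]})
print(p.emp({"passed_prerequisites": True}, {"kind": "join"}),
      p.perm({}, {"kind": "join"}), p.finish())    # True True False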
3. SOCIAL INTERACTION DYNAMICS
The dynamics of the multi-agent community is influenced
by the external actions executed by software components
and the protocols governing their interactions. This section
focuses on the dynamics resulting from a particular kind of
external action: the attempt of some component, attached
to the community as an agent, to execute a given (internal)
action. The description of other external actions concerning
agents (e.g. observe the events from its event queue, enter
or exit from the community) and resources (e.g. a timer
resource may signal the pass of time) will be skipped.
The processing of some attempt may give rise to changes
in the scope of the target interaction, such as the
instantiation of new participants (agents or resources) or the
setting up of new sub-interactions. These resulting events
may cause further changes in the state of other interactions
(the target one included), namely, in its execution state as
well as in the execution state, obligations and visibility of
their members. This section will also describe the way in
which these events are processed. The resulting dynamics
described bellow allows for actions and events corresponding
to different agents and interactions to be processed
simultaneously. Due to lack of space, we only include some of the
operational rules that formalise the execution semantics.
3.1 Attempt processing
An attempt is defined by the structure ATT ⟨perf : A, act : ACT⟩, where the performer represents the agent
in charge of executing the specified action. This action is
intended to alter the state of some target interaction
(possibly, the performer's context itself), and notify a collection
of addressees of the changes resulting from a successful
execution. Accordingly, the type ACT of actions, ranged over
by meta-variable α, is specified as follows:
ACT ⟨state : SACT, target : I, add : Set A⟩
def. : (12) α^perf = a ⇔ α ∈ a^att
where: the performer is formally defined as the agent who
stores the action in its queue of attempts, and the state field
represents the current phase of processing. This process
goes through four major phases, as specified by the
enumeration type SACT Enum {emp, perm, exec} :
empowerment checking, permission checking and action execution,
described in the sequel.
3.1.1 Empowerment checking
The post-condition of an attempt consists of inserting the
action in the queue of attempts of the specified performer.
As rule 1 specifies (labels of record instances are omitted to allow for more compact specifications, and record updates in where clauses only affect the specified fields), this will only be possible if the
performer is empowered to execute that action according to the
rules that govern the state of the target interaction. If this
condition is not met, the attempt will simply be ignored.
Moreover, the performer agent must be in the playing state
(this pre-condition is also required for any rule concerning
the processing of attempts). If these pre-conditions are
satisfied the rule is fired and the processing of the action
continues in the permission checking stage. For instance, when
the software component attached as a student in a degree
attempts to join as a student the course in which some subject
is taught, the empowerment rules of the course interaction are checked. If the (degree) student has passed the course's prerequisite subjects, the join action will be inserted in its
queue of attempts and considered for execution.
α^(target,prot,emp)(a, α)
a = ⟨playing, _, _, qACT, _, _⟩  −⟨a,α⟩:ATT→  ⟨playing, _, _, q'ACT, _, _⟩    (1)
Where : (α')^state = perm
        (q'ACT) = insert(α', qACT)
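The staged handling that rules 1 and 2 formalise (empowerment check on the attempt, then permission check on the head of the queue, shown in the next subsection) can be sketched in Python; the Proto stub, the dict encoding of agents and actions, and the flags used below are our own assumptions.

from collections import deque

class Proto:                     # minimal stand-in exposing emp/perm only
    def emp(self, agent, action):
        return agent.get("passed_prerequisites", False)
    def perm(self, agent, action):
        return agent.get("fee_paid", False)

def attempt(performer, action, protocol):
    """Rule 1 flavour: only empowered attempts of playing agents are queued."""
    if performer["state"] == "playing" and protocol.emp(performer, action):
        action["phase"] = "perm"
        performer["att"].append(action)        # unempowered attempts are ignored

def check_permission(performer, protocol, channel):
    """Rule 2 flavour: the head attempt is promoted to 'exec' or discarded."""
    if not performer["att"]:
        return
    head = performer["att"][0]
    if protocol.perm(performer, head):
        head["phase"] = "exec"
    else:
        performer["att"].popleft()
        channel.append(("forbidden", performer["name"], head["kind"]))

student = {"name": "s1", "state": "playing", "att": deque(),
           "passed_prerequisites": True, "fee_paid": False}
channel = []
attempt(student, {"kind": "join"}, Proto())
check_permission(student, Proto(), channel)
print(list(student["att"]), channel)   # [] [('forbidden', 's1', 'join')]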
3.1.2 Permissions checking
The processing of the action resumes when the possible
preceding actions in the performer's queue of attempts are
fully processed and removed from the queue. Moreover,
there should be no pending events to be processed in the
interaction, for these events may cause the member or the
interaction to be finished (as will be shortly explained in the
next sub-section). If these conditions are met the
permissions to execute the given action (and notify the specified
addressees) are checked (e.g. it will be checked whether the
student paid the fee for the course's subject). If the protocol
of the target interaction grants permission, the processing
of the attempt moves to the action execution stage (rule 2).
Otherwise, the action is discharged and removed from the
queue. Unlike unempowered attempts, a forbidden one will
cause an event to be generated and transferred to the event
channel for further processing.
α^state = perm ∧ a^(context,ch,in,ev) = ∅ ∧ α^(target,prot,perm)(a, α)
a = ⟨playing, _, _, [α|_], _, _⟩ −→ ⟨playing, _, _, [α'|_], _, _⟩    (2)
Where : (α')^state = exec
3.1.3 Action execution
The transitions fired in this stage are classified
according to the different types of actions to be executed. The
intended effects of some actions may directly be achieved
in a single step, while others will require an indirect
approach and possibly several execution steps. Actions of the
first kind are constructive ones such as set up and join.
The second group of actions include those, such as close and
leave, whose effects are indirectly achieved by updating the
interaction protocol.
As an example of a constructive action, let's consider the
execution of a set up action, whose type is defined as
follows9:
SetUp ACT · new : I
inv. : (13) αnew,mem = αnew,res = αnew,sub = ∅
(14) αnew,state = open
where the new field represents the new interaction to be
initiated. Its sets of participants (agents and resources) and
sub-interactions must be empty (inv. 13) and its state must
be open (inv. 14). The setting up of the new interaction may
thus affect its protocol and possible application-dependent
fields (e.g. the subject of a course interaction). According
to rule 3, the outcome of the execution is threefold: firstly,
the performer"s attempt queue is updated so that the
executing action is removed; secondly, the new interaction is
added to the target"s set of sub-interactions (moreover, its
initiator field is set to the performer agent); last, the event
representing this change (which includes a description of the
change, the agent that caused it and the action performed)
is inserted in the output port of the target"s event channel.
αstate = exec ∧ α : SetUp ∧ αnew = i
a = playing, , , [α|qACT ], ,   −→   playing, , , qACT , ,
αtarget = open, , , , , sI , c   −→   open, , , , , sI ∪ i′, c′      (3)
where: (i′)ini = a
       (c′)out,ev = insert( a, α, sub(αtarget, i′) , cout,ev )
Let"s consider now the case of a close action. This action
represents an attempt by the performer to force some
interaction to finish, thus bypassing its current protocol rules
(those concerning the finish function). The way to achieve
this effect is to cause an update on the protocol so that the
finish function returns true afterwards10
. Accordingly, we
may specify this type of action as follows:
Close ACT · upd : (→ Bool) → (→ Bool)
inv. : (15) αtarget,state = open
(16) αtarget,context = nil
(17) αupd(αtarget,prot,finish)()
where the inherited target field represents the interaction
to be closed (which must be open and different from the
top-interaction, according to invariants 15 and 16) and the new
9
The resulting type consists of the fields of the ACT record
extended with an additional new field.
10
This strategy is also followed in the definition of leave and
may also be used in the definition of other types of actions
such as fire, permit, forbid, etc.
update field represents a proper higher-order function to
update the target"s protocol (inv. 17). The transition which
models the execution of this action, specified by rule 4,
defines two effects in the target interaction: its protocol is
updated and the event representing this change is inserted
in its output port. This event will actually trigger the
closing process of the interaction as described in the next
subsection.
αstate = exec ∧ α : Close
a = playing, , , [α|qACT ], ,   −→   playing, , , qACT , ,
αtarget = open, , , , , p, c   −→   open, , , , , p′, c′      (4)
where: (p′)finish = αupd(pfinish)
       (c′)out,ev = insert( a, α, finish(αtarget) , cout,ev )
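The indirect effect of a close action can be pictured as a higher-order update of the protocol, as the following small sketch illustrates; the Protocol class and the names close_update and execute_close are hypothetical, introduced only for this example.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Protocol:
    finish: Callable[[], bool]

def close_update(old_finish: Callable[[], bool]) -> Callable[[], bool]:
    # After the update the finish function always returns True (inv. 17),
    # which is what later triggers the closing procedure of the interaction.
    return lambda: True

def execute_close(protocol: Protocol, upd=close_update) -> None:
    """Rule 4: rewrite the target protocol's finish function."""
    protocol.finish = upd(protocol.finish)

course_protocol = Protocol(finish=lambda: False)  # not yet due to finish
execute_close(course_protocol)
assert course_protocol.finish()                   # now it is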
3.2 Event Processing
The processing of events is encapsulated in the event
channels of interactions. Channels, ranged over by meta-variable
c, are defined by two input and output ports, according to
the following definition:
CH out : OutP, in : InP
inv. : (18) ccontext ∈ cout,disp( , , finish(ccontext) )
(19) ccontext ∈ cout,disp( , , over(a) )
(20) ccontext,sub ⊆ cout,disp(closing(ccontext))
(21) apartsIn ⊆ cout,disp(leaving(a))
(22) ccontext ∈ cout,disp(closed(i))
(23) {ccontext, aplayer,context} ⊆ cout,disp(left(a))
OutP ev : Queue E, disp : E → Set I, int : Set I, ag : Set A
InP ev : Queue E, stage : Enum {int, mem, obl}, ag : Set A
The output port stores and processes the events originated
within the scope of the channel"s interaction. Its first
purpose is to dispatch the local events to the agents
identified by the protocol"s monitoring function. Moreover, since
these events may influence the results of the finishing, over
and obligation functions of certain protocols, they will also
be dispatched to the input ports of the interactions
identified through a dispatching function - whose invariants will
be explained later on. Thus, input ports serve as a
coordination mechanism which activates the re-evaluation of the
above functions whenever some event is received11
.
Accordingly, the processing of some event goes through four major
stages: event dispatching, interaction state update, member
state update and obligations update. The first one takes place
in the output port of the interaction in which the event
originated, whereas the other ones execute in separate control
threads associated to the input ports of the interactions to
which the event was dispatched.
3.2.1 Event dispatching
The processing of some event stored in the output port is
triggered when all its preceding events have been dispatched.
As a first step, the auxiliary int and ag fields are initialised
11
Alternatively, we may have assumed that interactions are
fully aware of any change in the multi-agent community. In
this scenario, interactions would trigger themselves without
requiring any explicit notification. On the contrary, we
adhere to the more realistic assumption of limited awareness.
with the returned values of the dispatching and protocol"s
monitoring functions, respectively (rule 5). Then, additional
rules simply iterate over these collections until all agents and
interactions have been notified (i.e., both sets are empty).
Last, the event is removed from the queue and the auxiliary
fields are re-set to nil.
The dispatching function shall identify the set of
interactions (possibly, empty) that may be affected by the event
(which may include the channel"s interaction itself)12
. For
instance, according to the finishing rule of university courses
mentioned in the last section, the event representing the
end of the admission stage, originated within the scope of
the school interaction, will be dispatched to every course of
the school"s degrees. Concerning the monitoring function,
according to invariant 11 of protocols, if the event is
generated as the result of an action performance, the agents
to be notified will include the performer and addressees of
that action. Thus, according to the monitoring rule of
university courses, if a student of some degree joins a certain
course and specifies a colleague as addressee of that action,
the course"s teachers and itself will also be notified of the
successful execution.
(cs)context,state = open ∧ (cs)context,prot,monitor = mon
cs = [e| ], d, nil, nil ,   −→   [e| ], , d(e), mon(e) ,      (5)
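Operationally, the output port behaves roughly like the loop sketched below, where disp and monitor stand for the dispatching and monitoring functions and the two callbacks stand for whatever notification mechanism an implementation provides. This is an informal illustration, not part of the formal model.

from collections import deque

def dispatch_all(out_events: deque, disp, monitor,
                 forward_to_interaction, notify_agent):
    while out_events:                       # events are dispatched in order
        event = out_events[0]
        for interaction in disp(event):     # input ports that must re-evaluate
            forward_to_interaction(interaction, event)
        for agent in monitor(event):        # e.g. performer and addressees
            notify_agent(agent, event)
        out_events.popleft()                # auxiliary fields reset implicitly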
3.2.2 Interaction state update
Input port activity is triggered when a new event is
received. Irrespective of the kind of incoming event, the first
processing action is to check whether the channel's
interaction must be finished. Thus, the dispatching of the finish
event resulting from a close action (inv. 18) serves as a
trigger of the closing procedure. If the interaction does not have to
be finished, the input port stage field is set to the member
state update stage and the auxiliary ag field is initialised to
the interaction members. Otherwise, we can consider two
possible scenarios. In the first one, the interaction has no
members and no sub-interactions. In this case, the
interaction can be immediately closed down. As rule 6 shows,
the interaction is closed, removed from the context's set of
sub-interactions and a closed event is inserted in its output
channel. According to invariant 22, this event will later be
inserted into its input channel to allow for further treatment.
(c1)in,ev = ∅ ∧ (c1)in,stage = int ∧ pfinish()
, , , , {i} ∪ sI , , c   −→   , , , , sI , , c′
i = , , ∅, , ∅, p, c1   −→   closed, , , , , ,      (6)
where: (c′)out,ev = insert(closed(i), cout,ev)
In the second scenario, the interaction has some member
or sub-interaction. In this case, clean-up is required prior to
the disposal of the interaction (e.g. if the admission period
ends and no student has matriculated for the course,
teachers have to be finished before finishing the course itself). As
rule 7 shows, the interaction is moved to the transient
closing state and a corresponding event is inserted in the output
port. According to invariant 20, the closing event will be
dispatched to every sub-interaction in order to activate its
closing procedure (guaranteed by invariant 8). Moreover,
12
This is essentially determined by the protocol rules of these
interactions. The way in which the dispatching function is
initialised and updated is out of the scope of this paper.
the stage and ag fields are properly initialised so that the
process goes on in the next member state update stage. This
stage will further initiate the leaving process of the members
(according to invariant 9).
cin,ev = ∅ ∧ cin,stage = int ∧ pfinish() ∧ (sA ≠ ∅ ∨ sI ≠ ∅)
i = open, , sA, , sI , p, c   −→   closing, , sA, , sI , p, c′      (7)
where: (c′)out,ev = insert(closing(i), cout,ev)
       (c′)in,stage = mem
       (c′)in,ag = sA
Eventually, every member will leave the interaction and
every sub-interaction will be closed. Corresponding events
will be received by the interaction (according to invariants
23 and 22) so that the conditions of the first scenario will
hold.
3.2.3 Member state update
This stage simply iterates over the members of the
interaction to check whether they must be finished according to
the protocol"s over function. When all members have been
checked, the stage field will be set to the next obligation
update stage and the auxiliary ag field will be initialised
with the agents identified by the protocol"s obligation
update function.
If some member has to end its participation in the
interaction and it is not playing any role, it will be immediately
abandoned (successfully or unsuccessfully, according to the
satisfaction of its purpose). The corresponding event will
be forwarded to its interaction and to the interaction of its
player agent to account for further changes (inv. 23).
Otherwise, the member enters the transient leaving state, thus
preventing any action performance. Then, it waits for the
completion of the leaving procedures of its played roles,
triggered by proper dispatching of the leaving event (inv. 21).
3.2.4 Obligations update
In this stage, the obligations of agents (not necessarily
members of the interaction) towards the interaction are
updated accordingly. When all the identified agents have been
updated, the event is removed from the input queue and
the stage field is set back to the interaction state update.
For instance, when a course interaction receives an event
representing the assignment of some department member to
its subject, an obligation to join the course as a teacher is
created for that member. Moreover, the event representing
this change is added to the output channel of the department
interaction.
4. DISCUSSION
This paper has attempted to expose a possible
semantic core underlying the wide spectrum of interaction types
between autonomous, social and situated software
components. In the realm of software architectures, this core has
been formalised as an operational model of social
connectors, intended to describe both the basic structure and
dynamics of multi-agent interactions, from the largest (the
agent society itself) down to the smallest ones
(communicative actions). Thus, top-level interactions may represent the
kind of agent-web pursued by large-scale initiatives such as
the Agentcities/openNet one [25]. Large-scale interactions,
modelling complex aggregates of agent interactions such as
those represented by e-institutions or virtual organizations
[2, 26], are also amenable to be conceptualised as
particular kinds of first-level social interactions. The last
levels of the interaction tree may represent small-scale
multiagent interactions such as those represented by interaction
protocols [11], dialogue games [16], or scenes [2]. Finally,
bottom-level interactions may represent communicative
actions. From this perspective, the member types of a CA
include the speaker and possibly many listeners. The
purpose of the speaker coincides with the illocutionary purpose
of the CA [22], whereas the purpose of any listener is to
declare that it (actually, the software component) successfully
processed the meaning of the CA.
The analysis of social interactions put forward in this
paper draws upon current proposals of the literature in
several general respects, such as the institutional and
organizational character of multi-agent systems [2, 26, 10, 7] and the
normative perspective on multi-agent protocols [12, 23, 20].
These proposals, as well as others focusing on relevant
abstractions such as power relationships, contracts, trust and
reputation mechanisms in organizational settings, etc., could
be further exploited in order to characterize more accurately
the organizational character of some multi-agent
interactions. Similarly, the conceptualization of communicative
actions as atomic interactions may benefit from
public semantics of communicative actions such as the one
introduced in [3]. Last, the abstract model of protocols may
be refined taking into account existing operational models
of norms [12, 6]. These analyses shall result in new
organizational and communicative abstractions obtained through
a refinement and/or extension of the general model of
social interactions. Thus, the proposed model is not intended
to capture every organizational or communicative feature of
multi-agent interactions, but to reveal their roots in basic
interaction mechanisms. In turn, this would allow for the
exploitation of common formalisms, particularly concerning
protocols.
Unlike the development of individual agents, which has
greatly benefited from the design of several agent
programming languages [4], societal features of multi-agent systems
are mostly implemented in terms of visual modelling [8, 18]
and a fixed set of interaction abstractions. We argue that
the current field of multi-agent system programming may
greatly benefit from multi-agent programming languages that
allow programmers to accommodate an open set of
interaction mechanisms. The model of social interactions put
forward in this paper is intended as the abstract machine
of a language of this type. This abstract machine would
be independent of particular agent architectures and
languages (i.e. software components may be programmed in a
BDI language such as Jason [5] or in a non-agent oriented
language).
On top of the presented execution semantics, current and
future work aims at the specification of the type system [19]
which allows one to program the abstract machine, the
specification of the corresponding surface syntaxes (both textual
and visual) and the design and implementation of a virtual
machine over existing middleware technologies such as FIPA
platforms or Web services. We also plan to study particular
refinements and limitations to the proposed model,
particularly with respect to the dispatching of events, semantics
of obligations, dynamic updates of protocols and rule
formalisms. In this latter aspect, we plan to investigate the
use of Answer Set Programming to specify the rules of
protocols, attending to the role that incompleteness (rules may
only specify either necessary or sufficient conditions, for
instance), explicit negation (e.g. prohibitions) and defaults
play in this domain.
5. ACKNOWLEDGMENTS
The authors thank anonymous reviewers for their
comments and suggestions. Research sponsored by the Spanish
Ministry of Science and Education (MEC), project
TIN200615455-C03-03.
6. REFERENCES
[1] R. Allen and D. Garlan. A Formal Basis for
Architectural Connection. ACM Transactions on
Software Engineering and Methodology, 6(3):213-249,
June 1997.
[2] J. L. Arcos, M. Esteva, P. Noriega, J. A. Rodríguez,
and C. Sierra. Engineering open environments with
electronic institutions. Journal on Engineering
Applications of Artificial Intelligence, 18(2):191-204,
2005.
[3] G. Boella, R. Damiano, J. Hulstijn, and L. W. N.
van der Torre. Role-based semantics for agent
communication: embedding of the 'mental attitudes'
and 'social commitments' semantics. In AAMAS,
pages 688-690, 2006.
[4] R. H. Bordini, L. Braubach, M. Dastani, A. E. F.
Seghrouchni, J. J. G. Sanz, J. Leite, G. O'Hare,
A. Pokahr, and A. Ricci. A survey of programming
languages and platforms for multi-agent systems.
Informatica, 30:33-44, 2006.
[5] R. H. Bordini, J. F. Hübner, and R. Vieira. Jason and
the golden fleece of agent-oriented programming. In
R. H. Bordini, D. M., J. Dix, and
A. El Fallah Seghrouchni, editors, Multi-Agent
Programming: Languages, Platforms and Applications,
chapter 1. Springer-Verlag, 2005.
[6] O. Cliffe, M. D. Vos, and J. A. Padget. Specifying and
analysing agent-based social institutions using answer
set programming. In EUMAS, pages 476-477, 2005.
[7] V. Dignum, J. Vázquez-Salceda, and F. Dignum.
Omni: Introducing social structure, norms and
ontologies into agent organizations. In R. Bordini,
M. Dastani, J. Dix, and A. Seghrouchni, editors,
Programming Multi-Agent Systems Second
International Workshop ProMAS 2004, volume 3346
of LNAI, pages 181-198. Springer, 2005.
[8] M. Esteva, D. de la Cruz, and C. Sierra. ISLANDER:
an electronic institutions editor. In M. Gini, T. Ishida,
C. Castelfranchi, and W. L. Johnson, editors,
Proceedings of the First International Joint
Conference on Autonomous Agents and Multiagent
Systems (AAMAS"02), pages 1045-1052. ACM Press,
July 2002.
[9] M. Esteva, B. Rosell, J. A. Rodríguez-Aguilar, and
J. L. Arcos. AMELI: An agent-based middleware for
electronic institutions. In Proceedings of the Third
International Joint Conference on Autonomous Agents
and Multiagent Systems, volume 1, pages 236-243,
2004.
[10] J. Ferber, O. Gutknecht, and F. Michel. From agents
to organizations: An organizational view of
multi-agent systems. In AOSE, pages 214-230, 2003.
[11] Foundation for Intelligent Physical Agents. FIPA
Interaction Protocol Library Specification.
http://www.fipa.org/repository/ips.html, 2003.
[12] A. García-Camino, J. A. Rodríguez-Aguilar, C. Sierra,
and W. Vasconcelos. Norm-oriented programming of
electronic institutions. In AAMAS, pages 670-672,
2006.
[13] O. Gutknecht and J. Ferber. The MadKit agent
platform architecture. Lecture Notes in Computer
Science, 1887:48-55, 2001.
[14] JADE. The JADE project home page.
http://jade.cselt.it, 2005.
[15] M. Luck, P. McBurney, O. Shehory, and S. Willmott.
Agent Technology: Computing as Interaction - A
Roadmap for Agent-Based Computing. AgentLink III,
2005.
[16] P. McBurney and S. Parsons. A formal framework for
inter-agent dialogues. In J. P. Müller, E. Andre,
S. Sen, and C. Frasson, editors, Proceedings of the
Fifth International Conference on Autonomous
Agents, pages 178-179, Montreal, Canada, May 2001.
ACM Press.
[17] N. R. Mehta, N. Medvidovic, and S. Phadke. Towards
a taxonomy of software connectors. In Proceedings of
the 22nd International Conference on Software
Engineering, pages 178-187. ACM Press, June 2000.
[18] J. Pavón and J. Gómez-Sanz. Agent oriented software
engineering with INGENIAS. In V. Marik, J. Muller, and
M. Pechoucek, editors, Proceedings of the 3rd
International Central and Eastern European
Conference on Multi-Agent Systems. Springer Verlag,
2003.
[19] B. C. Pierce. Types and Programming Languages. The
MIT Press, Cambridge, MA, 2002.
[20] J. Pitt, L. Kamara, M. Sergot, and A. Artikis. Voting
in multi-agent systems. Feb. 27 2006.
[21] G. Plotkin. A structural approach to operational
semantics. Technical Report DAIMI FN-19, Aarhus
University, Sept. 1981.
[22] J. Searle. Speech Acts. Cambridge University Press,
1969.
[23] M. Sergot. A computational theory of normative
positions. ACM Transactions on Computational Logic,
2(4):581-622, Oct. 2001.
[24] M. P. Singh. Agent-based abstractions for software
development. In F. Bergenti, M.-P. Gleizes, and
F. Zambonelli, editors, Methodologies and Software
Engineering for Agent Systems, chapter 1, pages 5-18.
Kluwer, 2004.
[25] S. Willmott et al. Agentcities / openNet testbed.
http://x-opennet.net, 2004.
[26] F. Zambonelli, N. R. Jennings, and M. Wooldridge.
Developing multiagent systems: The Gaia
methodology. ACM Transactions on Software
Engineering and Methodology, 12(3):317-370, July
2003.
896 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) | social interaction;operational semantics;organizational and communicative abstraction;organizational programming language;structural operational semantics;multi-agent interaction connector-based model;connector-based model of multi-agent interaction;institutional framework;pre-defined abstraction;formal execution semantics;multiagent interaction;software architecture;software connector |
train_I-48 | Normative System Games | We develop a model of normative systems in which agents are assumed to have multiple goals of increasing priority, and investigate the computational complexity and game theoretic properties of this model. In the underlying model of normative systems, we use Kripke structures to represent the possible transitions of a multiagent system. A normative system is then simply a subset of the Kripke structure, which contains the arcs that are forbidden by the normative system. We specify an agent"s goals as a hierarchy of formulae of Computation Tree Logic (CTL), a widely used logic for representing the properties of Kripke structures: the intuition is that goals further up the hierarchy are preferred by the agent over those that appear further down the hierarchy. Using this scheme, we define a model of ordinal utility, which in turn allows us to interpret our Kripke-based normative systems as games, in which agents must determine whether to comply with the normative system or not. We then characterise the computational complexity of a number of decision problems associated with these Kripke-based normative system games; for example, we show that the complexity of checking whether there exists a normative system which has the property of being a Nash implementation is NP-complete. | 1. INTRODUCTION
Normative systems, or social laws, have proved to be an attractive
approach to coordination in multi-agent systems [13, 14, 10, 15, 1].
Although the various approaches to normative systems proposed in
the literature differ on technical details, they all share the same
basic intuition that a normative system is a set of constraints on the
behaviour of agents in the system; by imposing these constraints,
it is hoped that some desirable objective will emerge. The idea of
using social laws to coordinate multi-agent systems was proposed
by Shoham and Tennenholtz [13, 14]; their approach was extended
by van der Hoek et al. to include the idea of specifying a desirable
global objective for a social law as a logical formula, with the idea
being that the normative system would be regarded as successful
if, after implementing it (i.e., after eliminating all forbidden
actions), the objective formula was guaranteed to be satisfied in the
system [15]. However, this model did not take into account the
preferences of individual agents, and hence neglected to account
for possible strategic behaviour by agents when deciding whether
to comply with the normative system or not. This model of
normative systems was further extended by attributing to each agent
a single goal in [16]. However, this model was still too
impoverished to capture the kinds of decision making that take place when
an agent decides whether or not to comply with a social law. In
reality, strategic considerations come into play: an agent takes into
account not just whether the normative system would be beneficial
for itself, but also whether other agents will rationally choose to
participate.
In this paper, we develop a model of normative systems in which
agents are assumed to have multiple goals, of increasing priority.
We specify an agent"s goals as a hierarchy of formulae of
Computation Tree Logic (CTL), a widely used logic for representing the
properties of Kripke structures [8]: the intuition is that goals further
up the hierarchy are preferred by the agent over those that appear
further down the hierarchy. Using this scheme, we define a model
of ordinal utility, which in turn allows us to interpret our
Kripke-based normative systems as games, in which agents must determine
whether to comply with the normative system or not. We thus
provide a very natural bridge between logical structures and languages
and the techniques and concepts of game theory, which have proved
to be very powerful for analysing social contract-style scenarios
such as normative systems [3, 4]. We then characterise the
computational complexity of a number of decision problems associated
with these Kripke-based normative system games; for example, we
show that the complexity of checking whether there exists a
normative system which has the property of being a Nash implementation
is NP-complete.
2. KRIPKE STRUCTURES AND CTL
We use Kripke structures as our basic semantic model for
multiagent systems [8]. A Kripke structure is essentially a directed
graph, with the vertex set S corresponding to possible states of the
system being modelled, and the relation R ⊆ S × S capturing the
possible transitions of the system; intuitively, these transitions are
caused by agents in the system performing actions, although we do
not include such actions in our semantic model (see, e.g., [13, 2,
15] for related models which include actions as first class citizens).
We let S0
denote the set of possible initial states of the system.
Our model is intended to correspond to the well-known interleaved
concurrency model from the reactive systems literature: thus an
arc corresponds to the execution of an atomic action by one of the
processes in the system, which we call agents.
It is important to note that, in contrast to such models as [2, 15],
we are therefore here not modelling synchronous action. This
assumption is not in fact essential for our analysis, but it greatly
simplifies the presentation. However, we find it convenient to include
within our model the agents that cause transitions. We therefore
assume a set A of agents, and we label each transition in R with
the agent that causes the transition via a function α : R → A.
Finally, we use a vocabulary Φ = {p, q, . . .} of Boolean variables
to express the properties of individual states S: we use a function
V : S → 2Φ
to label each state with the Boolean variables true (or
satisfied) in that state.
Collecting these components together, an agent-labelled Kripke
structure (over Φ) is a 6-tuple:
K = S, S0
, R, A, α, V , where:
• S is a finite, non-empty set of states,
• S0
⊆ S (S0
= ∅) is the set of initial states;
• R ⊆ S × S is a total binary relation on S, which we refer to
as the transition relation1
;
• A = {1, . . . , n} is a set of agents;
• α : R → A labels each transition in R with an agent; and
• V : S → 2Φ
labels each state with the set of propositional
variables true in that state.
In the interests of brevity, we shall hereafter refer to an
agent-labelled Kripke structure simply as a Kripke structure. A path
over a transition relation R is an infinite sequence of states π =
s0, s1, . . . which must satisfy the property that ∀u ∈ N: (su , su+1) ∈
R. If u ∈ N, then we denote by π[u] the component indexed by
u in π (thus π[0] denotes the first element, π[1] the second, and so
on). A path π such that π[0] = s is an s-path. Let ΠR(s) denote
the set of s-paths over R; since it will usually be clear from
context, we often omit reference to R, and simply write Π(s). We will
sometimes refer to and think of an s-path as a possible
computation, or system evolution, from s.
EXAMPLE 1. Our running example is of a system with a single
non-sharable resource, which is desired by two agents. Consider
the Kripke structure depicted in Figure 1. We have two states, s and
t, and two corresponding Boolean variables p1 and p2, which are
1
In the branching time temporal logic literature, a relation R ⊆
S × S is said to be total iff ∀s ∃s : (s, s ) ∈ R. Note that
the term total relation is sometimes used to refer to relations
R ⊆ S × S such that for every pair of elements s, s ∈ S we
have either (s, s ) ∈ R or (s , s) ∈ R; we are not using the term
in this way here. It is also worth noting that for some domains,
other constraints may be more appropriate than simple totality. For
example, one might consider the agent totality requirement, that in
every state, every agent has at least one possible transition
available: ∀s∀i ∈ A∃s : (s, s ) ∈ R and α(s, s ) = i.
Figure 1: The resource control running example.
mutually exclusive. Think of pi as meaning that agent i currently has
control over the resource. Each agent has two possible actions
when in possession of the resource: either give it away, or keep it.
Obviously there are infinitely many different s-paths and t-paths.
Let us say that our set of initial states S0
equals {s, t}, i.e., we
don't make any assumptions about who initially has control over
the resource.
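As a purely illustrative aside, this running example can be written down directly as data; the Kripke dataclass and its field names below are our own encoding, not notation from the paper, and later sketches in this text reuse it.

from dataclasses import dataclass

@dataclass(frozen=True)
class Kripke:
    S: frozenset      # states
    S0: frozenset     # initial states
    R: frozenset      # transition relation: pairs (s, s')
    A: frozenset      # agents
    alpha: dict       # labels each transition with the agent that causes it
    V: dict           # labels each state with the propositions true there

# The resource-control example of Figure 1: agent 1 acts in s, agent 2 in t.
K = Kripke(
    S=frozenset({"s", "t"}),
    S0=frozenset({"s", "t"}),   # no assumption about who holds the resource first
    R=frozenset({("s", "s"), ("s", "t"), ("t", "t"), ("t", "s")}),
    A=frozenset({1, 2}),
    alpha={("s", "s"): 1, ("s", "t"): 1,    # agent 1 keeps or gives the resource
           ("t", "t"): 2, ("t", "s"): 2},   # agent 2 keeps or gives the resource
    V={"s": {"p1"}, "t": {"p2"}},
)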
2.1 CTL
We now define Computation Tree Logic (CTL), a branching time
temporal logic intended for representing the properties of Kripke
structures [8]. Note that since CTL is well known and widely
documented in the literature, our presentation, though complete, will be
somewhat terse. We will use CTL to express agents" goals.
The syntax of CTL is defined by the following grammar:
ϕ ::= ⊤ | p | ¬ϕ | ϕ ∨ ϕ | E fϕ | E(ϕ U ϕ) | A fϕ | A(ϕ U ϕ)
where p ∈ Φ. We denote the set of CTL formula over Φ by LΦ;
since Φ is understood, we usually omit reference to it.
The semantics of CTL are given with respect to the satisfaction
relation |=, which holds between pairs of the form K, s, (where
K is a Kripke structure and s is a state in K), and formulae of the
language. The satisfaction relation is defined as follows:
K, s |= ⊤;
K, s |= p iff p ∈ V (s) (where p ∈ Φ);
K, s |= ¬ϕ iff not K, s |= ϕ;
K, s |= ϕ ∨ ψ iff K, s |= ϕ or K, s |= ψ;
K, s |= A fϕ iff ∀π ∈ Π(s) : K, π[1] |= ϕ;
K, s |= E fϕ iff ∃π ∈ Π(s) : K, π[1] |= ϕ;
K, s |= A(ϕ U ψ) iff ∀π ∈ Π(s), ∃u ∈ N, s.t. K, π[u] |= ψ
and ∀v, (0 ≤ v < u) : K, π[v] |= ϕ
K, s |= E(ϕ U ψ) iff ∃π ∈ Π(s), ∃u ∈ N, s.t. K, π[u] |= ψ
and ∀v, (0 ≤ v < u) : K, π[v] |= ϕ
The remaining classical logic connectives (∧, →, ↔) are
assumed to be defined as abbreviations in terms of ¬, ∨, in the
conventional manner. The remaining CTL temporal operators are
defined:
A♦ϕ ≡ A(⊤ U ϕ)      E♦ϕ ≡ E(⊤ U ϕ)
A□ϕ ≡ ¬E♦¬ϕ      E□ϕ ≡ ¬A♦¬ϕ
We say ϕ is satisfiable if K, s |= ϕ for some Kripke structure K
and state s in K; ϕ is valid if K, s |= ϕ for all Kripke structures
K and states s in K. The problem of checking whether K, s |= ϕ
for given K, s, ϕ (model checking) can be done in deterministic
polynomial time, while checking whether a given ϕ is satisfiable or
whether ϕ is valid is EXPTIME-complete [8]. We write K |= ϕ if
K, s0 |= ϕ for all s0 ∈ S0
, and |= ϕ if K |= ϕ for all K.
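The standard fixpoint labelling algorithm behind these bounds is short enough to sketch over the Kripke encoding given above. The tuple-based formula syntax is an assumption of this sketch, and only the existential operators are implemented; the universal ones used below are obtained through the usual dualities (A-until would need the standard additional identity).

def sat(K, phi):
    """Return the set of states of K satisfying the CTL formula phi."""
    op = phi[0]
    if op == "true":
        return set(K.S)
    if op == "p":                                   # atomic proposition
        return {s for s in K.S if phi[1] in K.V[s]}
    if op == "not":
        return set(K.S) - sat(K, phi[1])
    if op == "or":
        return sat(K, phi[1]) | sat(K, phi[2])
    if op == "EX":                                  # E next
        target = sat(K, phi[1])
        return {s for s in K.S
                if any((s, t) in K.R and t in target for t in K.S)}
    if op == "EU":                                  # least fixpoint
        s1, s2 = sat(K, phi[1]), sat(K, phi[2])
        result = set(s2)
        while True:
            new = {s for s in s1 - result
                   if any((s, t) in K.R and t in result for t in K.S)}
            if not new:
                return result
            result |= new
    if op == "EG":                                  # greatest fixpoint
        result = sat(K, phi[1])
        while True:
            keep = {s for s in result
                    if any((s, t) in K.R and t in result for t in K.S)}
            if keep == result:
                return result
            result = keep
    raise ValueError("unsupported operator: " + str(op))

# Derived operators, following the abbreviations given in the text.
def EF(phi): return ("EU", ("true",), phi)          # E eventually
def AG(phi): return ("not", EF(("not", phi)))       # A always
def AF(phi): return ("not", ("EG", ("not", phi)))   # A eventually
def AX(phi): return ("not", ("EX", ("not", phi)))   # A next

def models(K, phi):
    """K |= phi: phi holds in every initial state of K."""
    return set(K.S0) <= sat(K, phi)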
3. NORMATIVE SYSTEMS
For our purposes, a normative system is simply a set of constraints
on the behaviour of agents in a system [1]. More precisely, a
normative system defines, for every possible system transition, whether
or not that transition is considered to be legal or not. Different
normative systems may differ on whether or not a transition is
legal. Formally, a normative system η (w.r.t. a Kripke structure
K = S, S0
, R, A, α, V ) is simply a subset of R, such that R \ η
is a total relation. The requirement that R\η is total is a
reasonableness constraint: it prevents normative systems which lead to states
with no successor. Let N (R) = {η : (η ⊆ R) & (R \ η is total)}
be the set of normative systems over R. The intended
interpretation of a normative system η is that (s, s ) ∈ η means transition
(s, s ) is forbidden in the context of η; hence R \ η denotes the
legal transitions of η. Since it is assumed η is reasonable, we are
guaranteed that a legal outward transition exists for every state. We
denote the empty normative system by η∅, so η∅ = ∅. Note that
the empty normative system η∅ is reasonable with respect to any
transition relation R.
The effect of implementing a normative system on a Kripke
structure is to eliminate from it all transitions that are forbidden
according to this normative system (see [15, 1]). If K is a Kripke
structure, and η is a normative system over K, then K † η denotes the
Kripke structure obtained from K by deleting transitions forbidden
in η. Formally, if K = S, S0
, R, A, α, V , and η ∈ N (R), then
let K†η = K be the Kripke structure K = S , S0
, R , A , α , V
where:
• S = S , S0
= S0
, A = A , and V = V ;
• R = R \ η; and
• α is the restriction of α to R :
α (s, s ) =
j
α(s, s ) if (s, s ) ∈ R
undefined otherwise.
Notice that for all K, we have K † η∅ = K.
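Continuing the earlier sketch, K † η is literally set subtraction on the transition relation together with the corresponding restriction of α; the function name implement is ours.

from dataclasses import replace

def implement(K, eta):
    """Return K † eta, assuming eta is reasonable (R \\ eta stays total)."""
    R_new = frozenset(K.R) - frozenset(eta)
    alpha_new = {arc: ag for arc, ag in K.alpha.items() if arc in R_new}
    return replace(K, R=R_new, alpha=alpha_new)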
EXAMPLE 1. (continued) When thinking in terms of fairness, it
seems natural to consider normative systems η that contain (s, s)
or (t, t). A normative system with (s, t) would not be fair, in the
sense that A♦A ¬p1 ∨ A♦A ¬p2 holds: in all paths, from
some moment on, one agent will have control forever. Let us, for
later reference, fix η1 = {(s, s)}, η2 = {(t, t)}, and η3 = {(s, s),
(t, t)}.
Later, we will address the issue of whether or not agents should
rationally choose to comply with a particular normative system. In
this context, it is useful to define operators on normative systems
which correspond to groups of agents defecting from the
normative system. Formally, let K = S, S0
,R, A,α, V be a Kripke
structure, let C ⊆ A be a set of agents over K, and let η be a
normative system over K. Then:
• η C denotes the normative system that is the same as η
except that it only contains the arcs of η that correspond to
the actions of agents in C. We call η C the restriction of η
to C, and it is defined as:
η C = {(s, s ) : (s, s ) ∈ η & α(s, s ) ∈ C}.
Thus K † (η C) is the Kripke structure that results if only
the agents in C choose to comply with the normative system.
• η C denotes the normative system that is the same as η
except that it only contains the arcs of η that do not correspond
to actions of agents in C. We call η C the exclusion of C
from η, and it is defined as:
η C = {(s, s ) : (s, s ) ∈ η & α(s, s ) ∈ C}.
Thus K † (η C) is the Kripke structure that results if only
the agents in C choose not to comply with the normative
system (i.e., the only ones who comply are those in A \ C).
Note that we have η C = η (A\C) and η C = η (A\C).
EXAMPLE 1. (Continued) We have η1 {1} = η1 = {(s, s)},
while η1 {1} = η∅ = η1 {2}. Similarly, we have η3 {1} =
{(s, s)} and η3 {1} = {(t, t)}.
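The two operators can be transcribed directly (restrict and exclude are our names for them); on the running example they reproduce the equalities just listed, e.g. restricting η1 to {1} leaves η1 unchanged, while excluding {1} from η1 yields η∅.

def restrict(K, eta, C):
    """eta restricted to C: only the arcs of eta caused by agents in C."""
    return {arc for arc in eta if K.alpha[arc] in C}

def exclude(K, eta, C):
    """eta with C's arcs removed: the arcs of eta caused by agents outside C."""
    return {arc for arc in eta if K.alpha[arc] not in C}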
4. GOALS AND UTILITIES
Next, we want to be able to capture the goals that agents have, as
these will drive an agent"s strategic considerations - particularly, as
we will see, considerations about whether or not to comply with a
normative system. We will model an agent"s goals as a prioritised
list of CTL formulae, representing increasingly desired properties
that the agent wishes to hold. The intended interpretation of such a
goal hierarchy γi for agent i ∈ A is that the further up the
hierarchy a goal is, the more it is desired by i. Note that we assume
that if an agent can achieve a goal at a particular level in its goal
hierarchy, then it is unconcerned about goals lower down the
hierarchy. Formally, a goal hierarchy, γ, (over a Kripke structure K)
is a finite, non-empty sequence of CTL formulae
γ = (ϕ0, ϕ1, . . . , ϕk )
in which, by convention, ϕ0 = ⊤. We use a natural number
indexing notation to extract the elements of a goal hierarchy, so if
γ = (ϕ0, ϕ1, . . . , ϕk ) then γ[0] = ϕ0, γ[1] = ϕ1, and so on. We
denote the largest index of any element in γ by |γ|.
A particular Kripke structure K is said to satisfy a goal at
index x in goal hierarchy γ if K |= γ[x], i.e., if γ[x] is satisfied in all
initial states S0
of K. An obvious potential property of goal
hierarchies is monotonicity: where goals at higher levels in the hierarchy
logically imply those at lower levels in the hierarchy. Formally, a
goal hierarchy γ is monotonic if for all x ∈ {1, . . . , |γ|} ⊆ N, we
have |= γ[x] → γ[x − 1]. The simplest type of monotonic goal
hierarchy is where γ[x + 1] = γ[x] ∧ ψx+1 for some ψx+1, so at
each successive level of the hierarchy, we add new constraints to
the goal of the previous level. Although this is a natural property
of many goal hierarchies, it is not a property we demand of all goal
hierarchies.
EXAMPLE 1. (continued) Suppose the agents have similar, but
opposing goals: each agent i wants to keep the resource as often and
as long as possible for itself. Define each agent's goal hierarchy as:
γi = ( ϕi0 = ⊤,            ϕi1 = E♦pi ,
       ϕi2 = E□E♦pi ,      ϕi3 = E♦E□pi ,
       ϕi4 = A□E♦pi ,      ϕi5 = E♦A□pi ,
       ϕi6 = A□A♦pi ,      ϕi7 = A□(A♦pi ∧ E□pi ),
       ϕi8 = A□pi )
The most desired goal of agent i is to, in every computation,
always have the resource, pi (this is expressed in ϕi8). Thanks to our
reasonableness constraint, this goal implies ϕi7 which says that, no
matter how the computation paths evolve, it will always be that all
continuations will hit a point in which pi , and, moreover, there is a
continuation in which pi always holds. Goal ϕi6 is a fairness
constraint implied by it. Note that A♦pi says that every computation
eventually reaches a pi state. This may mean that after pi has
happened, it will never happen again. ϕi6 circumvents this: it says that,
no matter where you are, there should be a future pi state. The goal
ϕi5 is like the strong goal ϕi8 but it accepts that this is only achieved
in some computation, eventually. ϕi4 requires that in every path,
there is always a continuation that eventually gives pi . Goal ϕi3
says that pi should be true on some branch, from some moment on.
It implies ϕi2 which expresses that there is a computation such that
everywhere during it, it is possible to choose a continuation that
eventually satisfies pi . This implies ϕi1, which says that pi should
at least not be impossible. If we even drop that demand, we have
the trivial goal ϕi0.
We remark that it may seem more natural to express a fairness
constraint ϕi6 as A□♦pi . However, this is not a proper CTL
formula. It is in fact a formula in CTL∗ [9], and in this logic, the two
expressions would be equivalent. However, our basic complexity
results in the next sections would not hold for the richer language
CTL∗, and the price to pay for this is that we have to formulate
our desired goals in a somewhat more cumbersome manner than
we might ideally like. Of course, our basic framework does not
demand that goals are expressed in CTL; they could equally well
be expressed in CTL∗ or indeed ATL [2] (as in [15]). We
comment on the implications of alternative goal representations at the
conclusion of the next section.
A multi-agent system collects together a Kripke structure
(representing the basic properties of a system under consideration: its
state space, and the possible state transitions that may occur in it),
together with a goal hierarchy, one for each agent, representing the
aspirations of the agents in the system. Formally, a multi-agent
system, M , is an (n + 1)-tuple:
M = K, γ1, . . . , γn
where K is a Kripke structure, and for each agent i in K, γi is a
goal hierarchy over K.
4.1 The Utility of Normative Systems
We can now define the utility of a Kripke structure for an agent.
The idea is that the utility of a Kripke structure is the highest index
of any goal that is guaranteed for that agent in the Kripke structure.
We make this precise in the function ui (·):
ui (K) = max{j : 0 ≤ j ≤ |γi | & K |= γi [j ]}
Note that using these definitions of goals and utility, it never
makes sense to have a goal ϕ at index n if there is a logically
weaker goal ψ at index n + k in the hierarchy: by definition of
utility, it could never be n for any structure K.
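Given a model checker such as the one sketched earlier (passed in here as the models argument), the ordinal utility is a one-line maximisation; this is merely an illustration of the definition, not a prescribed implementation.

def utility(K, goal_hierarchy, models):
    """u_i(K): the largest index j such that K |= goal_hierarchy[j].
    Well defined because the goal at index 0 is, by convention, satisfied."""
    return max(j for j, goal in enumerate(goal_hierarchy) if models(K, goal))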
EXAMPLE 1. (continued) Let M = K, γ1, γ2 be the
multiagent system of Figure 1, with γ1 and γ2 as defined earlier in this
example. Recall that we have defined S0
as {s, t}. Then, u1(K) =
u2(K) = 4: goal ϕ4 is true in S0
, but ϕ5 is not. To see that
ϕ24 = A□E♦p2 is true in s for instance: note that on every path it
is always the case that there is a transition to t, in which p2 is true.
Notice that since for any goal hierarchy γi we have γ[0] = ⊤,
then for all Kripke structures, ui (K) is well defined, with ui (K) ≥
2
CTL∗ model checking is PSPACE-complete, and hence much
worse (under standard complexity theoretic assumptions) than
model checking CTL [8].
η     δ1(K, η)   δ2(K, η)
η∅        0          0
η1        0          3
η2        3          0
η3        2          2

        C         D
C    (2, 2)    (0, 3)
D    (3, 0)    (0, 0)

Figure 2: Benefits of implementing a normative system η (left)
and pay-offs for the game GΣ (right).
0. Note that this is an ordinal utility measure: it tells us, for any
given agent, the relative utility of different Kripke structures, but
utility values are not on some standard system-wide scale. The fact
that ui (K1) > ui (K2) certainly means that i strictly prefers K1
over K2, but the fact that ui (K) > uj (K) does not mean that i
values K more highly than j . Thus, it does not make sense to
compare utility values between agents, and so for example, some system
wide measures of utility, (notably those measures that aggregate
individual utilities, such as social welfare), do not make sense when
applied in this setting. However, as we shall see shortly, other
measures - such as Pareto efficiency - can be usefully applied.
There are other representations for goals, which would allow us
to define cardinal utilities. The simplest would be to specify goals γ
for an agent as a finite, non-empty, one-to-one relation: γ ⊆ L×R.
We assume that the x values in pairs (ϕ, x) ∈ γ are specified so
that x for agent i means the same as x for agent j , and so we have
cardinal utility. We then define the utility for i of a Kripke structure
K as ui (K) = max{x : (ϕ, x) ∈ γi & K |= ϕ}. The results of
this paper in fact hold irrespective of which of these representations
we actually choose; we fix upon the goal hierarchy approach in the
interests of simplicity.
Our next step is to show how, in much the same way, we can lift
the utility function from Kripke structures to normative systems.
Suppose we are given a multi-agent system M = K, γ1, . . . , γn
and an associated normative system η over K. Let for agent i,
δi (K, K′) be the difference in its utility when moving from K to
K′: δi (K, K′) = ui (K′) − ui (K). Then the utility of η to agent i
wrt K is δi (K, K † η). We will sometimes abuse notation and just
write δi (K, η) for this, and refer to it as the benefit for agent i of
implementing η in K. Note that this benefit can be negative.
Summarising, the utility of a normative system to an agent is the
difference between the utility of the Kripke structure in which the
normative system was implemented and the original Kripke
structure. If this value is greater than 0, then the agent would be better
off if the normative system were imposed, while if it is less than
0 then the agent would be worse off if η were imposed than in the
original system. We say η is individually rational for i wrt K if
δi (K, η) > 0, and individually rational simpliciter if η is
individually rational for every agent.
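Composed with the earlier sketches (utility and implement), the benefit of a normative system and the individual-rationality test look as follows; goal_hierarchies is assumed to list one hierarchy per agent.

def benefit(K, eta, goals_i, models):
    """delta_i(K, eta) = u_i(K † eta) - u_i(K)."""
    return (utility(implement(K, eta), goals_i, models)
            - utility(K, goals_i, models))

def individually_rational(K, eta, goal_hierarchies, models):
    """eta is individually rational simpliciter iff every agent strictly gains."""
    return all(benefit(K, eta, g, models) > 0 for g in goal_hierarchies)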
A social system now is a pair
Σ = M , η
where M is a multi-agent system, and η is a normative system over
M .
EXAMPLE 1. The table on the left-hand side of Figure 2 displays the
utilities δi (K, η) of implementing η in the Kripke structure of our
running example, for the normative systems η = η∅, η1, η2 and η3,
introduced before. Recall that u1(K) = u2(K) = 4.
4.2 Universal and Existential Goals
Keeping in mind that a norm η restricts the possible transitions
of the model under consideration, we make the following
observation, borrowing from [15]. Some classes of goals are monotonic
or anti-monotonic with respect to adding additional constraints to
a system. Let us therefore define two fragments of the language
of CTL: the universal language Lu
with typical element μ, and the
existential fragment Le
with typical element ε.
μ ::= ⊤ | p | ¬p | μ ∨ μ | A fμ | A□μ | A(μ U μ)
ε ::= ⊤ | p | ¬p | ε ∨ ε | E fε | E♦ε | E(ε U ε)
Let us say, for two Kripke structures K1 = S, S0
, R1, A, α, V
and K2 = S, S0
, R2, A, α, V that K1 is a subsystem of K2 and
K2 is a supersystem of K1, written K1 K2 iff R1 ⊆ R2. Note
that typically K † η K. Then we have (cf. [15]).
THEOREM 1. Suppose K1 K2, and s ∈ S. Then
∀ε ∈ Le
: K1, s |= ε ⇒ K2, s |= ε
∀μ ∈ Lu
: K2, s |= μ ⇒ K1, s |= μ
This has the following effect on imposing a new norm:
COROLLARY 1. Let K be a structure, and η a normative
system. Let γi denote a goal hierarchy for agent i.
1. Suppose agent i"s utility ui (K) is n, and γi [n] ∈ Lu
, (i.e.,
γi [n] is a universal formula). Then, for any normative system
η, δi (K, η) ≥ 0.
2. Suppose agent i"s utility ui (K † η) is n, and γi [n] is an
existential formula ε. Then, δi (K † η, K) ≥ 0.
Corollary 1"s first item says that an agent whose current
maximal goal in a system is a universal formula, need never fear the
imposition of a new norm η. The reason is that his current goal will
at least remain true (in fact a goal higher up in the hierarchy may
become true). It follows from this that an agent with only universal
goals can only gain from the imposition of normative systems η.
The opposite is true for existential goals, according to the second
item of the corollary: it can never be bad for an agent to undo a
norm η. Hence, an agent with only existential goals might well fear
any norm η.
However, these observations implicitly assume that all agents in
the system will comply with the norm. Whether they will in fact do
so, of course, is a strategic decision: it partly depends on what the
agent thinks that other agents will do. This motivates us to consider
normative system games.
5. NORMATIVE SYSTEM GAMES
We now have a principled way of talking about the utility of
normative systems for agents, and so we can start to apply the technical
apparatus of game theory to analyse them.
Suppose we have a multi-agent system M = K, γ1, . . . , γn
and a normative system η over K. It is proposed to the agents
in M that η should be imposed on K, (typically to achieve some
coordination objective). Our agent - let"s say agent i - is then faced
with a choice: should it comply with the strictures of the normative
system, or not? Note that this reasoning takes place before the agent
is in the system - it is a design time consideration.
We can understand the reasoning here as a game, as follows. A
game in strategic normal form (cf. [11, p.11]) is a structure:
G = AG, S1, . . . , Sn , U1, . . . , Un where:
• AG = {1, . . . , n} is a set of agents - the players of the game;
• Si is the set of strategies for each agent i ∈ AG (a strategy
for an agent i is nothing else than a choice between
alternative actions); and
• Ui : (S1 × · · · × Sn ) → R is the utility function for agent
i ∈ AG, which assigns a utility to every combination of
strategy choices for the agents.
Now, suppose we are given a social system Σ = M , η where
M = K, γ1, . . . , γn . Then we can associate a game - the
normative system game - GΣ with Σ, as follows. The agents AG in GΣ
are as in Σ. Each agent i has just two strategies available to it:
• C - comply (cooperate) with the normative system; and
• D - do not comply with (defect from) the normative system.
If S is a tuple of strategies, one for each agent, and x ∈ {C, D},
then we denote by AGx
S the subset of agents that play strategy x in
S. Hence, for a social system Σ = M , η , the normative system
η AGC
S only implements the restrictions for those agents that
choose to cooperate in GΣ. Note that this is the same as η AGD
S :
the normative system that excludes all the restrictions of agents that
play D in GΣ. We then define the utility functions Ui for each
i ∈ AG as:
Ui (S) = δi (K, η AGC
S ).
So, for example, if SD is a collection of strategies in which every
agent defects (i.e., does not comply with the norm), then
Ui (SD ) = δi (K, (η AGD
SD
)) = ui (K † η∅) − ui (K) = 0.
In the same way, if SC is a collection of strategies in which every
agent cooperates (i.e., complies with the norm), then
Ui (SC ) = δi (K, (η AGD
SC
)) = ui (K † (η ∅)) = ui (K † η).
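Putting the pieces together, the strategic-form game GΣ can be tabulated by iterating over strategy profiles, reusing restrict and benefit from the earlier sketches; for the running example with η3 this would rebuild the right-hand table of Figure 2. The ordering convention (goal hierarchies listed in the order of the sorted agents) is an assumption of this sketch.

from itertools import product

def game_matrix(K, eta, goal_hierarchies, models):
    """Pay-off table of the normative system game: for every strategy profile,
    implement only the compliers' part of eta and record each agent's benefit."""
    agents = sorted(K.A)                            # goal_hierarchies follows this order
    table = {}
    for profile in product("CD", repeat=len(agents)):
        compliers = {ag for ag, choice in zip(agents, profile) if choice == "C"}
        eta_profile = restrict(K, eta, compliers)   # eta restricted to AG^C_S
        table[profile] = tuple(benefit(K, eta_profile, g, models)
                               for g in goal_hierarchies)
    return table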
We can now start to investigate some properties of normative
system games.
EXAMPLE 1. (continued) For our example system, we have
displayed the different U values for our multi agent system with the
norm η3, i.e., {(s, s), (t, t)} as the second table of Figure 2. For
instance, the pair (0, 3) in the matrix under the entry S = C, D
is obtained as follows. U1( C, D ) = δ1(K, η3 AGC
C,D ) =
u1(K † η3 AGC
C,D ) − u1(K). The first term of this is the
utility of 1 in the system K where we implement η3 for the
cooperating agent, i.e., 1, only. This means that the transitions are
R \ {(s, s)}. In this system, still ϕ14 = A□E♦p1 is the highest
goal for agent 1. This is the same utility for 1 as in K, and hence,
δ1(K, η3 AGC
C,D ) = 0. Agent 2 of course benefits if agent 1
complies with η3 while 2 does not. His utility would be 3, since
η3 AGC
C,D is in fact η1.
5.1 Individually Rational Normative Systems
A normative system is individually rational if every agent would
fare better if the normative system were imposed than otherwise.
This is a necessary, although not sufficient condition on a norm to
expect that everybody respects it. Note that η3 of our example is
individually rational for both 1 and 2, although this is not a stable
situation: given that the other plays C, i is better off by playing
D. We can easily characterise individually rationality with respect
to the corresponding game in strategic form, as follows. Let Σ =
M , η be a social system. Then the following are equivalent:
1. η is individually rational in M ;
2. ∀i ∈ AG, Ui (SC ) > Ui (SD ) in the game GΣ.
Figure 3: The Kripke structure produced in the reduction of
Theorem 2; all transitions are associated with agent 1, the only
initial state is s0.
The decision problem associated with individually rational
normative systems is as follows:
INDIVIDUALLY RATIONAL NORMATIVE SYSTEM (IRNS):
Given: Multi-agent system M .
Question: Does there exist an individually rational
normative system for M ?
THEOREM 2. IRNS is NP-complete, even in one-agent systems.
PROOF. For membership of NP, guess a normative system η,
and verify that it is individually rational. Since η ⊆ R, we will be
able to guess it in nondeterministic polynomial time. To verify that
it is individually rational, we check that for all i, we have ui (K †
η) > ui (K); computing K † η is just set subtraction, so can be
done in polynomial time, while determining the value of ui (K) for
any K can be done with a polynomial number of model checking
calls, each of which requires only time polynomial in the size of K and γ.
Hence verifying that ui (K † η) > ui (K) requires only polynomial
time.
For NP-hardness, we reduce SAT [12, p.77]. Given a SAT instance
ϕ over Boolean variables x1, . . . , xk , we produce an instance of
IRNS as follows. First, we define a single agent A = {1}. For each
Boolean variable xi in the SAT instance, we create two Boolean
variables t(xi ) and f (xi ) in the IRNS instance. We then create a
Kripke structure Kϕ with 2k + 1 states, as shown in Figure 3: arcs
in this graph correspond to transitions in Kϕ. Let ϕ∗
be the result
of systematically substituting for every Boolean variable xi in ϕ
the CTL expression (E ft(xi )). Next, consider the following
formulae:
⋀i=1..k E f(t(xi ) ∨ f (xi ))      (1)
⋀i=1..k ¬((E f t(xi )) ∧ (E f f (xi )))      (2)
We then define the goal hierarchy for agent 1 as follows:
γ1[0] = ⊤
γ1[1] = (1) ∧ (2) ∧ ϕ∗
We claim there is an individually rational normative system for the
instance so constructed iff ϕ is satisfiable. First, notice that any
individually rational normative system must force γ1[1] to be true,
since in the original system, we do not have γ1[1].
For the ⇒ direction, if there is an individually rational normative
system η, then we construct a satisfying assignment for ϕ by
considering the arcs that are forbidden by η: formula (1) ensures that
we must forbid an arc to either a t(xi ) or a f (xi ) state for all
variables xi , but (2) ensures that we cannot forbid arcs to both. So, if
we forbid an arc to a t(xi ) state then in the corresponding valuation
for ϕ we make xi false, while if we forbid an arc to a f (xi ) state
then we make xi true. The fact that ϕ∗
is part of the goal ensures
that the normative system is indeed a valuation for ϕ.
For ⇐, note that for any satisfying valuation for ϕ we can
construct an individually rational normative system η, as follows: if
the valuation makes xi true, we forbid the arc to the f (xi ) state,
while if the valuation makes xi false, we forbid the arc to the t(xi )
state. The resulting normative system ensures γ1[1], and is thus
individually rational.
Notice that the Kripke structure constructed in the reduction
contains just a single agent, and so the Theorem is proven.
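A brute-force decision procedure for IRNS follows directly from the definitions: enumerate the subsets of R that leave R \ η total and test each for individual rationality. Its running time is exponential in |R|, which is consistent with the NP-completeness result just shown; the helper names and the reuse of individually_rational from the earlier sketch are our own choices.

from itertools import chain, combinations

def reasonable(K, eta):
    """eta qualifies as a normative system only if R \\ eta stays total."""
    remaining = set(K.R) - set(eta)
    return all(any(u == s for (u, _) in remaining) for s in K.S)

def exists_individually_rational(K, goal_hierarchies, models):
    """Return some individually rational normative system, or None."""
    arcs = sorted(K.R)
    subsets = chain.from_iterable(combinations(arcs, r)
                                  for r in range(len(arcs) + 1))
    for candidate in subsets:
        eta = frozenset(candidate)
        if reasonable(K, eta) and individually_rational(
                K, eta, goal_hierarchies, models):
            return eta
    return None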
5.2 Pareto Efficient Normative Systems
Pareto efficiency is a basic measure of how good a particular
outcome is for a group of agents [11, p.7]. Intuitively, an outcome
is Pareto efficient if there is no other outcome that makes every
agent better off. In our framework, suppose we are given a social
system Σ = M , η , and asked whether η is Pareto efficient. This
amounts to asking whether or not there is some other normative
system η such that every agent would be better off under η than
with η. If η makes every agent better off than η, then we say η
Pareto dominates η. The decision problem is as follows:
PARETO EFFICIENT NORMATIVE SYSTEM (PENS):
Given: Multi-agent system M and normative system η
over M .
Question: Is η Pareto efficient for M ?
THEOREM 3. PENS is co-NP-complete, even for one-agent
systems.
PROOF. Let M and η be as in the Theorem. We show that the
complement problem to PENS, which we refer to as PARETO
DOMINATED, is NP-complete. In this problem, we are given M and η,
and we are asked whether η is Pareto dominated, i.e., whether or not
there exists some η over M such that η makes every agent better
off than η. For membership of NP, simply guess a normative system
η , and verify that for all i ∈ A, we have ui (K † η ) > ui (K † η)
- verifying requires a polynomial number of model checking
problems, each of which takes polynomial time. Since η ⊆ R, the
normative system can be guessed in non-deterministic polynomial
time. For NP-hardness, we reduce IRNS, which we know to be
NP-complete from Theorem 2. Given an instance M of IRNS, we let M
in the instance of PARETO DOMINATED be as in the IRNS instance,
and define the normative system for PARETO DOMINATED to be η∅,
the empty normative system. Now, it is straightforward that there
exists a normative system η which Pareto dominates η∅ in M iff
there exists an individually rational normative system in M . Since
the complement problem is NP-complete, it follows that PENS is
co-NP-complete.
              η0   η1   η2   η3   η4   η5   η6   η7   η8
u1(K † η)      4    4    7    6    5    0    0    8    0
u2(K † η)      4    7    4    6    0    5    8    0    0
Table 1: Utilities for all possible norms in our example
How about Pareto efficient norms for our toy example? Settling
this question amounts to finding the undominated normative systems
among η0 = η∅, η1, η2, η3 defined before, and η4 = {(s, t)}, η5 =
{(t, s)}, η6 = {(s, s), (t, s)}, η7 = {(t, t), (s, t)} and η8 =
{(s, t), (t, s)}. The utilities for each system are given in Table 1.
From this, we infer that the Pareto efficient norms are η1, η2, η3, η6
and η7. Note that η8 prohibits the resource to be passed from one
agent to another, and this is not good for any agent (since we have
chosen S0
= {s, t}, no agent can be sure to ever get the resource,
i.e., goal ϕi1 is not true in K † η8).
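The complement problem can likewise be decided by exhaustive search, reusing utility, implement, reasonable and combinations from the previous sketches; called on each of the nine systems above, it should return None exactly for the efficient ones listed.

from itertools import combinations

def pareto_dominated(K, eta, goal_hierarchies, models):
    """Return some eta' under which every agent is strictly better off than
    under eta, or None if eta is Pareto efficient."""
    base = [utility(implement(K, eta), g, models) for g in goal_hierarchies]
    arcs = sorted(K.R)
    for r in range(len(arcs) + 1):
        for candidate in combinations(arcs, r):
            other = frozenset(candidate)
            if other == frozenset(eta) or not reasonable(K, other):
                continue
            new = [utility(implement(K, other), g, models)
                   for g in goal_hierarchies]
            if all(n > b for n, b in zip(new, base)):
                return other
    return None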
5.3 Nash Implementation Normative Systems
The most famous solution concept in game theory is of course
Nash equilibrium [11, p.14]. A collection of strategies, one for each
agent, is said to form a Nash equilibrium if no agent can benefit by
doing anything other than playing its strategy, under the
assumption that the other agents play theirs. Nash equilibria are important
because they provide stable solutions to the problem of what
strategy an agent should play. Note that in our toy example, although
η3 is individually rational for each agent, it is not a Nash
equilibrium, since given this norm, it would be beneficial for agent 1 to
deviate (and likewise for 2). In our framework, we say a social
system Σ = ⟨M, η⟩ (where η ≠ η∅) is a Nash implementation if
SC (i.e., everyone complying with the normative system) forms a
Nash equilibrium in the game GΣ. The intuition is that if Σ is a
Nash implementation, then complying with the normative system
is a reasonable solution for all concerned: there can be no
benefit to deviating from it, indeed, there is a positive incentive for all
to comply. If Σ is not a Nash implementation, then the normative
system is unlikely to succeed, since compliance is not rational for
some agents. (Our choice of terminology is deliberately chosen to
reflect the way the term Nash implementation is used in
implementation theory, or mechanism design [11, p.185], where a game
designer seeks to achieve some outcomes by designing the rules of
the game such that these outcomes are equilibria.)
NASH IMPLEMENTATION (NI) :
Given: Multi-agent system M .
Question: Does there exist a non-empty normative
system η over M such that M , η forms a Nash
implementation?
Verifying that a particular social system forms a Nash
implementation can be done in polynomial time: it amounts to checking that,
for every agent i ∈ A, ui(K † η) is at least the utility i obtains in the
system where all agents except i comply with η.
This clearly requires only a polynomial number of model checking
calls, each of which requires only polynomial time.
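The verification step itself is elementary once the relevant utilities are available; the following sketch is ours, with the two utility tables assumed to have been computed beforehand by model checking.

def is_nash_implementation(agents, u_comply, u_defect):
    # u_comply[i]: u_i when every agent complies with eta
    # u_defect[i]: u_i when agent i alone ignores eta while the others comply
    # (both assumed to be computed beforehand by model checking)
    return all(u_comply[i] >= u_defect[i] for i in agents)

print(is_nash_implementation([1, 2], {1: 6, 2: 6}, {1: 4, 2: 4}))   # toy values -> True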
THEOREM 4. The NI problem is NP-complete, even for
two-agent systems.
PROOF. For membership of NP, simply guess a normative
system η and check that it forms a Nash implementation; since η ⊆ R,
guessing can be done in non-deterministic polynomial time, and as
we argued above, verifying that it forms a Nash implementation
can be done in polynomial time.
Figure 4: Reduction for Theorem 4 (a Kripke structure with initial state s0, a state s(2k+1), states labelled t(xi) and f(xi) for each variable xi, and arcs labelled with agents 1 and 2).
For NP-hardness, we reduce SAT. Suppose we are given a SAT
instance ϕ over Boolean variables x1, . . . , xk . Then we construct an
instance of NI as follows. We create two agents, A = {1, 2}. For
each Boolean variable xi we create two Boolean variables, t(xi )
and f (xi ), and we then define a Kripke structure as shown in
Figure 4, with s0 being the only initial state; the arc labelling in
Figure 4 gives the α function, and each state is labelled with the
propositions that are true in that state. For each Boolean variable xi , we
define the formulae x⊤i and x⊥i as follows:
x⊤i = E f(t(xi) ∧ E f((E f(t(xi))) ∧ A f(¬f(xi))))
x⊥i = E f(f(xi) ∧ E f((E f(f(xi))) ∧ A f(¬t(xi))))
Let ϕ∗ be the formula obtained from ϕ by systematically
substituting x⊤i for xi. Each agent has three goals: γi[0] = ⊤ for both
i ∈ {1, 2}, while
γ1[1] = ⋀_{i=1}^{k} ((E f(t(xi))) ∧ (E f(f(xi))))
γ2[1] = E f E f ⋀_{i=1}^{k} ((E f(t(xi))) ∧ (E f(f(xi))))
and finally, for both agents, γi [2] being the conjunction of the
following formulae:
⋀_{i=1}^{k} (x⊤i ∨ x⊥i)   (3)
⋀_{i=1}^{k} ¬(x⊤i ∧ x⊥i)   (4)
⋀_{i=1}^{k} ¬(E f(t(xi)) ∧ E f(f(xi)))   (5)
ϕ∗   (6)
We denote the multi-agent system so constructed by Mϕ. Now,
we prove that the SAT instance ϕ is satisfiable iff Mϕ has a Nash
implementation normative system:
For the ⇒ direction, suppose ϕ is satisfiable, and let X be a
satisfying valuation, i.e., a set of Boolean variables making ϕ true.
We can extract from X a Nash implementation normative system η
as follows: if xi ∈ X , then η includes the arc from s0 to the state
in which f (xi ) is true, and also includes the arc from s(2k + 1)
to the state in which f(xi) is true; if xi ∉ X, then η includes the
arc from s0 to the state in which t(xi) is true, and also includes
the arc from s(2k + 1) to the state in which t(xi) is true. No
other arcs, apart from those so defined, are included in η. Notice
that η is individually rational for both agents: if they both comply
with the normative system, then they will have their γi [2] goals
achieved, which they do not in the basic system. To see that η
forms a Nash implementation, observe that if either agent defects
from η, then neither will have their γi [2] goals achieved: agent 1
strictly prefers (C, C) over (D, C), and agent 2 strictly prefers
(C, C) over (C, D).
For the ⇐ direction, suppose there exists a Nash implementation
normative system η, in which case η ≠ ∅. Then ϕ is satisfiable;
for suppose not. Then the goals γi [2] are not achievable by any
normative system, (by construction). Now, since η must forbid at
least one transition, then at least one agent would fail to have its
γi [1] goal achieved if it complied, so at least one would do better
by defecting, i.e., not complying with η. But this contradicts the
assumption that η is a Nash implementation, i.e., that (C, C) forms
a Nash equilibrium.
This result is perhaps of some technical interest beyond the specific
concerns of the present paper, since it is related to two problems
that are of wider interest: the complexity of mechanism design [5],
and the complexity of computing Nash equilibria [6, 7].
5.4 Richer Goal Languages
It is interesting to consider what happens to the complexity of
the problems we consider above if we allow richer languages for
goals: in particular, CTL∗ [9]. The main difference is that
determining ui(K) in a given multi-agent system M when such a goal
language is used involves solving a PSPACE-complete problem (since
model checking for CTL∗ is PSPACE-complete [8]). In fact, it seems
that for each of the three problems we consider above, the
corresponding problem under the assumption of a CTL∗ representation
for goals is also PSPACE-complete. It cannot be any easier, since
determining the utility of a particular Kripke structure involves
solving a PSPACE-complete problem. To see membership in PSPACE
we can exploit the fact that PSPACE = NPSPACE [12, p.150], and so
we can guess the desired normative system, applying a PSPACE
verification procedure to check that it has the desired properties.
6. CONCLUSIONS
Social norms are supposed to restrict our behaviour. Of course,
such a restriction does not have to be bad: the fact that an agent's
behaviour is restricted may seem a limitation, but there may be
benefits if he can assume that others will also constrain their behaviour.
The question then, for an agent is, how to be sure that others will
comply with a norm. And, for a system designer, how to be sure
that the system will behave socially, that is, according to its norm.
Game theory is a very natural tool to analyse and answer these
questions, which involve strategic considerations, and we have
proposed a way to translate key questions concerning logic-based
normative systems to game theoretical questions. We have proposed
a logical framework to reason about such scenarios, and we have
given some computational costs for settling some of the main
questions about them. Of course, our approach is in many senses open
for extension or enrichment. An obvious issue to consider is the
complexity of the questions we pose for more practical
representations of models (cf. [1]), and to consider other classes of allowable
goals.
7. REFERENCES
[1] T. Agotnes, W. van der Hoek, J. A. Rodriguez-Aguilar,
C. Sierra, and M. Wooldridge. On the logic of normative
systems. In Proc. IJCAI-07, Hyderabad, India, 2007.
[2] R. Alur, T. A. Henzinger, and O. Kupferman.
Alternating-time temporal logic. Jnl. of the ACM,
49(5):672-713, 2002.
[3] K. Binmore. Game Theory and the Social Contract Volume
1: Playing Fair. The MIT Press: Cambridge, MA, 1994.
[4] K. Binmore. Game Theory and the Social Contract Volume
2: Just Playing. The MIT Press: Cambridge, MA, 1998.
[5] V. Conitzer and T. Sandholm. Complexity of mechanism
design. In Proc. UAI, Edmonton, Canada, 2002.
[6] V. Conitzer and T. Sandholm. Complexity results about nash
equilibria. In Proc. IJCAI-03, pp. 765-771, Acapulco,
Mexico, 2003.
[7] C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou. The
complexity of computing a Nash equilibrium. In Proc.
STOC, Seattle, WA, 2006.
[8] E. A. Emerson. Temporal and modal logic. In Handbook of
Theor. Comp. Sci. Vol. B, pages 996-1072. Elsevier, 1990.
[9] E. A. Emerson and J. Y. Halpern. 'Sometimes' and 'not
never' revisited: on branching time versus linear time
temporal logic. Jnl. of the ACM, 33(1):151-178, 1986.
[10] D. Fitoussi and M. Tennenholtz. Choosing social laws for
multi-agent systems: Minimality and simplicity. Artificial
Intelligence, 119(1-2):61-101, 2000.
[11] M. J. Osborne and A. Rubinstein. A Course in Game Theory.
The MIT Press: Cambridge, MA, 1994.
[12] C. H. Papadimitriou. Computational Complexity.
Addison-Wesley: Reading, MA, 1994.
[13] Y. Shoham and M. Tennenholtz. On the synthesis of useful
social laws for artificial agent societies. In Proc. AAAI, San
Diego, CA, 1992.
[14] Y. Shoham and M. Tennenholtz. On social laws for artificial
agent societies: Off-line design. In Computational Theories
of Interaction and Agency, pages 597-618. The MIT Press:
Cambridge, MA, 1996.
[15] W. van der Hoek, M. Roberts, and M. Wooldridge. Social
laws in alternating time: Effectiveness, feasibility, and
synthesis. Synthese, 2007.
[16] M. Wooldridge and W. van der Hoek. On obligations and
normative ability. Jnl. of Appl. Logic, 3:396-420, 2005.
888 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) | social law;constraint;goal;decision making;normative system game;normative system;desirable objective;complexity;kripke structure;logic;computational complexity;game;game theoretic property;ordinal utility;multi-agent system;computation tree logic;nash implementation;multiple goal of increasing priority |
train_I-49 | A Multilateral Multi-issue Negotiation Protocol | In this paper, we present a new protocol to address multilateral multi-issue negotiation in a cooperative context. We consider complex dependencies between multiple issues by modelling the preferences of the agents with a multi-criteria decision aid tool, also enabling us to extract relevant information on a proposal assessment. This information is used in the protocol to help in accelerating the search for a consensus between the cooperative agents. In addition, the negotiation procedure is defined in a crisis management context where the common objective of our agents is also considered in the preferences of a mediator agent. | 1. INTRODUCTION
Multi-issue negotiation protocols represent an important
field of study since negotiation problems in the real world
are often complex ones involving multiple issues. To date,
most of previous work in this area ([2, 3, 19, 13]) dealt
almost exclusively with simple negotiations involving
independent issues. However, real-world negotiation problems
involve complex dependencies between multiple issues. When
one wants to buy a car, for example, the value of a given
car is highly dependent on its price, consumption, comfort
and so on. The addition of such interdependencies greatly
complicates the agents' utility functions, and classical
utility functions, such as the weighted sum, are not sufficient
to model this kind of preference. In [10, 9, 17, 14, 20], the
authors consider inter-dependencies between issues, most
often defined with boolean values, except for [9], while we can
deal with continuous and discrete dependent issues thanks
to the modelling power of the Choquet integral. In [17],
the authors deal with bilateral negotiation while we are
interested in a multilateral negotiation setting. Klein et al.
[10] present an approach similar to ours, using a mediator
too and information about the strength of the approval or
rejection that an agent makes during the negotiation. In
our protocol, we use more precise information to improve
the proposals thanks to the multi-criteria methodology and
tools used to model the preferences of our agents. Lin, in [14,
20], also presents a mediation service but using an
evolutionary algorithm to reach optimal solutions and as explained
in [4], players in the evolutionary models need to repeatedly
interact with each other until the stable state is reached.
As the population size increases, the time it takes for the
population to stabilize also increases, resulting in excessive
computation, communication, and time overheads that can
become prohibitive, and for one-to-many and many-to-many
negotiations, the overheads become higher as the number of
players increases. In [9], the authors consider a non-linear
utility function by using constraints on the domain of the
issues and a mediation service to find a combination of bids
maximizing the social welfare. Our preference model, a
nonlinear utility function too, is more complex than the one in [9] since
the Choquet integral takes into account the interactions and
the importance of each decision criteria/issue, not only the
dependencies between the values of the issues, to determine
the utility. We also use an iterative protocol enabling us to
find a solution even when no bid combination is possible.
In this paper, we propose a negotiation protocol suited for
multiple agents with complex preferences and taking into
account, at the same time, multiple interdependent issues and
recommendations made by the agents to improve a proposal.
Moreover, the preferences of our agents are modelled using
a multi-criteria methodology and tools enabling us to take
into account information about the improvements that can
be made to a proposal, in order to help in accelerating the
search for a consensus between the agents. Therefore, we
propose a negotiation protocol consisting of solving our
decision problem using a MAS with a multi-criteria decision
aiding modelling at the agent level and a cooperation-based
multilateral multi-issue negotiation protocol. This protocol
is studied under a non-cooperative approach and it is shown
that it has subgame perfect equilibria, provided that agents
behave rationally in the sense of von Neumann and
Morgenstern. The approach proposed in this paper has been first
introduced and presented in [8]. In this paper, we present
our first experiments, with some noteworthy results, and a
more complex multi-agent system with representatives to
enable us to have a more robust system.
In Section 2, we present our application, a crisis
management problem. Section 3 deals with the general aspect of the
proposed approach. The preference modelling is described
in sect. 4, whereas the motivations of our protocol are
considered in sect. 5 and the agent/multiagent modelling in
sect. 6. Section 7 presents the formal modelling and
properties of our protocol before presenting our first experiments
in sect. 8. Finally, in Section 9, we conclude and present
the future work.
2. CASE STUDY
This protocol is applied to a crisis management problem.
Crisis management is a relatively new field of management
and is composed of three types of activities: crisis
prevention, operational preparedness and management of declared
crisis. The crisis prevention aims to bring the risk of crisis
to an acceptable level and, when possible, avoid that the
crisis actually happens. The operational preparedness includes
strategic advanced planning, training and simulation to
ensure availability, rapid mobilisation and deployment of
resources to deal with possible emergencies. The management
of declared crisis is the response to - including the
evacuation, search and rescue - and the recovery from the crisis by
minimising the effects of the crises, limiting the impact on
the community and environment and, on a longer term, by
bringing the community's systems back to normal. In this
paper, we focus on the response part of the management of
declared crisis activity, and particularly on the evacuation
of the injured people in disaster situations. When a crisis
is declared, the plans defined during the operational
preparedness activity are executed. For disasters, master plans
are executed. These plans are elaborated by the authorities
with the collaboration of civil protection agencies, police,
health services, non-governmental organizations, etc.
When a victim is found, several actions follow. First, a
rescue party is assigned to the victim who is examined and is
given first aid on the spot. Then, the victims can be placed
in an emergency centre on the ground called the medical
advanced post. For all victims, a sorter physician -
generally a hospital physician - examines the seriousness of their
injuries and classifies the victims by pathology. The
evacuation by emergency health transport if necessary can take
place after these clinical examinations and classifications.
Nowadays, to evacuate the injured people, the physicians
contact the emergency call centre to pass on the medical
assessments of the most urgent cases. The emergency call
centre then searches for available and appropriate spaces in
the hospitals to care for these victims. The physicians are
informed of the allocations, so they can proceed to the
evacuations choosing the emergency health transports according
to the pathologies and the transport modes provided. In
this context, we can observe that the evacuation is based
on three important elements: the examination and
classification of the victims, the search for an allocation and the
transport. In the case of the 11 March 2004 Madrid attacks,
for instance, some injured people did not receive the
appropriate health care because, during the search for space, the
emergency call centre did not consider the transport
constraints and, in particular, the traffic. Therefore, for a large
scale crisis management problem, there is a need to support
the emergency call centre and the physicians in the
dispatching to take into account the hospitals and the transport
constraints and availabilities.
3. PROPOSED APPROACH
To accept a proposal, an agent has to consider several
issues such as, in the case of the crisis management problem,
the availabilities in terms of number of beds by unit,
medical and surgical staffs, theatres and so on. Therefore, each
agent has its own preferences in correlation with its resource
constraints and other decision criteria such as, for the case
study, the level of congestion of a hospital. All the agents
also make decisions by taking into account the dependencies
between these decision criteria.
The first hypothesis of our approach is that there are
several parties involved in and impacted by the decision, and
so they have to decide together according to their own
constraints and decision criteria. Negotiation is the process by
which a group facing a conflict communicates with one
another to try and come to a mutually acceptable agreement
or decision and so, the agents have to negotiate. The
conflict we have to resolve is finding an acceptable solution for
all the parties by using a particular protocol. In our
context, multilateral negotiation is a negotiation protocol type
that is the best suited for this type of problem : this type
of protocol enables the hospitals and the physicians to
negotiate together. The negotiation also deals with multiple
issues. Moreover, another hypothesis is that we are in a
cooperative context where all the parties have a common
objective which is to provide the best possible solution for
everyone. This implies the use of a negotiation protocol
encouraging the parties involved to cooperate while satisfying their
preferences.
Taking into account these aspects, a Multi-Agent System
(MAS) seems to be a reliable method in the case of a
distributed decision making process. Indeed, a MAS is a suitable
answer when the solution has to combine, at least,
distribution features and reasoning capabilities. Another motivation
for using MAS lies in the fact that MAS is well known for
facilitating automated negotiation at the operative decision
making level in various applications.
Therefore, our approach consists of solving a multiparty
decision problem using a MAS with
• The preferences of the agents are modelled using a
multi-criteria decision aid tool, MYRIAD, also
enabling us to consider multi-issue problems by evaluating
proposals on several criteria.
• A cooperation-based multilateral and multi-issue
negotiation protocol.
4. THE PREFERENCE MODEL
We consider a problem where an agent has several decision
criteria, a set Nk = {1, . . . , nk} of criteria for each agent k
involved in the negotiation protocol. These decision criteria
enable the agents to evaluate the set of issues that are
negotiated. The issues correspond directly or not to the decision
criteria. However, for the example of the crisis management
944 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
problem, the issues are the set of victims to dispatch
between the hospitals. These issues are translated to decision
criteria enabling the hospital to evaluate its congestion and
so to an updated number of available beds, medical teams
and so on. In order to take into account the complexity that
exists between the criteria/issues, we use a multi-criteria
decision aiding (MCDA) tool named MYRIAD [12] developed
at Thales for MCDA applications based on a two-additive
Choquet integral which is a good compromise between
versatility and ease to understand and model the interactions
between decision criteria [6].
The set of attributes of Nk is denoted by Xk_1, . . . , Xk_nk.
All the attributes are made commensurate thanks to the
introduction of partial utility functions uk_i : Xk_i → [0, 1]. The
[0, 1] scale depicts the satisfaction of agent k regarding
the values of the attributes. An option x is identified with an
element of Xk = Xk_1 × · · · × Xk_nk, with x = (x1, . . . , xnk).
Then the overall assessment of x is given by
Uk(x) = Hk(uk_1(x1), . . . , uk_nk(xnk))   (1)
where Hk : [0, 1]^nk → [0, 1] is the aggregation function. The
overall preference relation over Xk is then
x ≽ y ⇐⇒ Uk(x) ≥ Uk(y).
The two-additive Choquet integral is defined for
(z1, . . . , znk) ∈ [0, 1]^nk by [7]
Hk(z1, . . . , znk) = Σ_{i∈Nk} ( vk_i − ½ Σ_{j≠i} |Ik_{i,j}| ) zi
+ Σ_{Ik_{i,j}>0} Ik_{i,j} (zi ∧ zj) + Σ_{Ik_{i,j}<0} |Ik_{i,j}| (zi ∨ zj)   (2)
where vk_i is the relative importance of criterion i for agent
k and Ik_{i,j} is the interaction between criteria i and j; ∧ and
∨ denote the min and max functions respectively. Assume
that zi < zj. A positive interaction between criteria i and
j depicts complementarity between these criteria (positive
synergy) [7]. Hence, the lower score of z on criterion i
conceals the positive effect of the better score on criterion j to
a larger extent on the overall evaluation than the impact of
the relative importance of the criteria taken independently
of the other ones. In other words, the score of z on criterion
j is penalized by the lower score on criterion i. Conversely, a
negative interaction between criteria i and j depicts
substitutability between these criteria (negative synergy) [7]. The
score of z on criterion i is then saved by a better score on
criterion j.
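For illustration, Equation (2) can be transcribed directly into code. The snippet below is our own sketch, not part of MYRIAD; the importance weights v and interaction indices I are assumed to be given (in MYRIAD they would come from the elicited preference model).

def choquet_2additive(z, v, I):
    # z: dict criterion -> partial utility in [0, 1]
    # v: dict criterion -> relative importance v_i
    # I: dict (i, j) -> interaction index, stored once per unordered pair
    def inter(i, j):
        return I.get((i, j), I.get((j, i), 0.0))
    total = 0.0
    for i in z:
        total += (v[i] - 0.5 * sum(abs(inter(i, j)) for j in z if j != i)) * z[i]
    for (i, j), val in I.items():
        if val > 0:
            total += val * min(z[i], z[j])      # positive synergy: conjunctive term
        elif val < 0:
            total += abs(val) * max(z[i], z[j]) # negative synergy: disjunctive term
    return total

# toy check: two equally important criteria with a positive interaction of 0.3
print(choquet_2additive({"a": 0.2, "b": 0.9},
                        {"a": 0.5, "b": 0.5},
                        {("a", "b"): 0.3}))

In the toy call the result falls below the plain weighted mean of the two scores, which is exactly the complementarity effect described above: the low score on one criterion penalises the good score on the other.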
In MYRIAD, we can also obtain some recommendations,
corresponding to an indicator ωC(Hk, x) that measures how worthwhile
it is to improve option x w.r.t. Hk on some criteria C ⊆ Nk, defined as
ωC(Hk, x) = ∫_0^1 [ Hk((1 − τ)xC + τ, xNk\C) − Hk(x) ] / EC(τ, x) dτ
where ((1 − τ)xC + τ, xNk\C) is the compound act that equals
(1 − τ)xi + τ if i ∈ C and equals xi if i ∈ Nk \ C. Moreover,
EC(τ, x) is the effort to go from the profile x to the profile
((1 − τ)xC + τ, xNk\C). The function ωC(Hk, x) depicts the
average improvement of Hk when the criteria of coalition C
range from xC to 1C, divided by the average effort needed
for this improvement. We generally assume that EC is of
order 1, that is EC(τ, x) = τ Σ_{i∈C}(1 − xi). The expression
of ωC (Hk, x) when Hk is a Choquet integral, is given in [11].
The agent is then recommended to improve the coalition C
for which ωC (Hk, x) is maximum. This recommendation is
very useful in a negotiation protocol since it helps the agents
to know what to do if they want an offer to be accepted while
not revealing their own preference model.
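The indicator ωC can also be approximated numerically when no closed form is at hand. The sketch below is our illustration only: it assumes EC of order 1 as above, and takes the aggregation function H as a parameter (for instance the Choquet sketch given earlier).

def worth_to_improve(H, x, C, steps=1000):
    # H: aggregation function over a dict of criterion scores
    # x: dict criterion -> current score; C: coalition of criteria to improve
    # approximates omega_C(H, x) with E_C(tau, x) = tau * sum_{i in C}(1 - x_i)
    effort_slope = sum(1.0 - x[i] for i in C)
    if effort_slope == 0:
        return 0.0                     # nothing left to improve on C
    base = H(x)
    total = 0.0
    for k in range(1, steps + 1):      # midpoint rule on (0, 1]
        tau = (k - 0.5) / steps
        y = {i: ((1 - tau) * x[i] + tau if i in C else x[i]) for i in x}
        total += (H(y) - base) / (tau * effort_slope)
    return total / steps

Instantiating H with the Choquet aggregation above and maximising over candidate coalitions C reproduces the recommendation rule described in the text.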
5. PROTOCOL MOTIVATIONS
For multi-issue problems, there are two approaches: a
complete package approach where the issues are
negotiated simultaneously in opposition to the sequential approach
where the issues are negotiated one by one. When the issues
are dependent, then it is the best choice to bargain
simultaneously over all issues [5]. Thus, the complete package is
the adopted approach so that an offer will be on the
overall set of injured people while taking into account the other
decision criteria.
We have to consider that all the parties of the
negotiation process have to agree on the decision since they are all
involved in and impacted by this decision and so an
unanimous agreement is required in the protocol. In addition, no
party can leave the process until an agreement is reached,
i.e. a consensus achieved. This makes sense since a proposal
concerns all the parties. Moreover, we have to guarantee the
availability of the resources needed by the parties to ensure
that a proposal is realistic. To this end, the information
about these availabilities are used to determine admissible
proposals such that an offer cannot be made if one of the
parties has not enough resources to execute/achieve it. At the
beginning of the negotiation, each party provides its
maximum availabilities, this defining the constraints that have
to be satisfied for each offer submitted.
The negotiation has also to converge quickly on an
unanimous agreement. We decided to introduce in the negotiation
protocol an incentive to cooperate taking into account the
passed negotiation time. This incentive is defined on the
basis of a time dependent penalty, the discounting factor as
in [18] or a time-dependent threshold. This penalty has to
be used in the accept/reject stage of our consensus
procedure. In fact, in the case of a discounting factor, each party
will accept or reject an offer by evaluating the proposal
using its utility function deducted from the discounting factor.
In the case of a time-dependent threshold, if the evaluation
is greater or equal to this threshold, the offer is accepted,
otherwise, in the next period, its threshold is reduced.
The use of a penalty is not enough alone since it does
not help in finding a solution. Some information about the
assessments of the parties involved in the negotiation is
needed. In particular, it would be helpful to know why an
offer has been rejected and/or what can be done to make
a proposal that would be accepted. MYRIAD provides an
analysis that determines the flaws of an option, here a
proposal. In particular, it gives this type of information: which
criteria of a proposal should be improved so as to reach the
highest possible overall evaluation [11]. As we use this tool
to model the parties involved in the negotiation, the
information about the criteria to improve can be used by the
mediator to elaborate the proposals. We also consider that
the dual function can be used to take into account another
type of information: on which criteria of a proposal no
improvement is necessary, i.e., which criteria must not decrease for the
overall evaluation of the proposal to remain acceptable. Thus, all this
information constitutes constraints to be satisfied as much as possible by
the parties to make a new proposal.
Figure 1: An illustration of some system.
We are in a cooperative context and revealing one's
opinion on what can be improved is not prohibited, on the
contrary, it is useful and recommended here seeing that it helps
in converging on an agreement. Therefore, when one of the
parties refuses an offer, some information will be
communicated. In order to facilitate and speed up the negotiation,
we introduce a mediator. This specific entity is in charge
of making the proposals to the other parties in the system
by taking into account their public constraints (e.g. their
availabilities) and the recommendations they make. This
mediator can also be considered as the representative of
the general interest we can have, in some applications, such
as in the crisis management problem, the physician will be
the mediator and will also have some more information to
consider when making an offer (e.g. traffic state, transport
mode and time). Each party in a negotiation N, a
negotiator, can also be a mediator of another negotiation N , this
party becoming the representative of N in the negotiation
N, as illustrated by fig. 1 what can also help in reducing
the communication time.
6. AGENTIFICATION
How the problem is transposed into a MAS problem is a
very important aspect when designing such a system. The
agentification has an influence upon the system's efficiency in
solving the problem. Therefore, in this section, we describe
the elements and constraints taken into account during the
modelling phase and for the model itself. However, for this
negotiation application, the modelling is quite natural when
one observes the negotiation protocol motivations and main
properties.
First of all, it seems obvious that there should be one agent
for each player of our multilateral multi-issue negotiation
protocol. The agents have the involved parties' information
and preferences. These agents are:
• Autonomous: they decide for themselves what, when
and under what conditions actions should be
performed;
• Rational: they have a means-ends competence to fit
its decisions, according to its knowledge, preferences
and goal;
• Self-interested: they have their own interests which
may conflict with the interests of other agents.
Moreover, their preferences are modelled and a proposal
evaluated and analysed using MYRIAD. Each agent has
private information and can access public information as
knowledge.
In fact, there are two types of agents: the mediator type
for the agents corresponding to the mediator of our
negotiation protocol, the delegated physician in our application,
and the negotiator type for the agents corresponding to
the other parties, the hospitals. The main behaviours that
an agent of type mediator needs to negotiate in our protocol
are the following:
• convert_improvements: converts the information
given by the other agents involved in the negotiation
about the improvements to be done, into constraints
on the next proposal to be made;
• convert_no_decrease: converts the information given
by the other agents involved in the negotiation about
the points that should not be changed into constraints
on the next proposal to be made;
• construct_proposal: constructs a new proposal
according to the constraints obtained with
convert_improvements, convert_no_decrease and the agent
preferences;
The main behaviours that an agent of type negotiator
needs to negotiate in our protocol are the following:
• convert_proposal: converts a proposal to a MYRIAD
option of the agent according to its preferences model
and its private data;
• convert_improvements_wc: converts the agent
recommendations for the improvements of a MYRIAD
option into general information on the proposal;
• convert_no_decrease_wc: converts the agent
recommendations about the criteria that should not be
changed in the MYRIAD option into general information
on the proposal;
In addition to these behaviours, both types of agents have
access behaviours to MYRIAD functionalities, such as the
evaluation and improvement functions (a sketch of how a
negotiator combines them is given after the list):
• evaluate_option: evaluates the MYRIAD option
obtained using the agent behaviour convert_proposal;
• improvements: gets the agent recommendations to
improve a proposal from the MYRIAD option;
• no_decrease: gets the agent recommendations to not
change some criteria from the MYRIAD option;
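As announced above, the following sketch shows one way a negotiator can combine these behaviours into a single response step. It is an illustration only: the toy agent class and its simple aggregation are ours (the real agents wrap a MYRIAD model), the threshold schedule rho(t) is borrowed from the protocol definition in Section 7, and, following the usage in the experiments of Section 8, a rejection is the message that carries improvement recommendations.

class ToyNegotiator:
    # minimal stand-in exposing the behaviours named in the text
    def __init__(self, weights, threshold):
        self.weights, self.threshold = weights, threshold
    def rho(self, t):                          # acceptance threshold, decreasing in t
        return max(0.0, self.threshold - 0.05 * t)
    def convert_proposal(self, proposal):      # proposal -> criterion scores (identity here)
        return proposal
    def evaluate_option(self, option):         # toy aggregation (weighted mean, not Choquet)
        return sum(self.weights[c] * option[c] for c in option)
    def improvements(self, option):            # criteria most worth improving
        return [min(option, key=option.get)]
    def no_decrease(self, option):             # criteria that should not get worse
        return [max(option, key=option.get)]
    def convert_improvements_wc(self, cs):     # criterion-level advice -> proposal-level advice
        return cs
    def convert_no_decrease_wc(self, cs):
        return cs

def respond(agent, proposal, t):
    option = agent.convert_proposal(proposal)
    score = agent.evaluate_option(option)
    if score >= agent.rho(t):                  # acceptance is forced above the threshold
        return ("accept", agent.convert_no_decrease_wc(agent.no_decrease(option)))
    return ("reject", agent.convert_improvements_wc(agent.improvements(option)))

print(respond(ToyNegotiator({"burn": 0.5, "surgery": 0.5}, 0.7),
              {"burn": 0.3, "surgery": 0.8}, 0))   # -> ('reject', ['burn'])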
Of course, before running the system with such agents, we
must have defined each party's preference model in
MYRIAD. This model has to be part of the agent so that it could
be used to make the assessments and to retrieve the
improvements. In addition to these behaviours, the communication
acts between the agents are as follows (see Figure 2).
Figure 2: The protocol diagram in AUML, where m is the number of negotiator agents and l is the number of agents refusing the current proposal.
1. mediator agent communication acts:
(a) propose: sends a message containing a proposal
to all negotiator agents;
(b) inform: sends a message to all negotiator agents
to inform them that an agreement has been
reached and containing the consensus outcome.
2. negotiator agent communication acts:
(a) accept-proposal: sends a message to the
mediator agent containing the agent recommendations
about the criteria that should not be changed,
obtained with convert_no_decrease_wc;
(b) reject-proposal: sends a message to the
mediator agent containing the agent recommendations
to improve the proposal, obtained with
convert_improvements_wc.
Such agents are interchangeable in case of failure, since
they all have the same properties and represent a user through
his preference model, which depends not on the agent but on the
model defined in MYRIAD. When the issues and the
decision criteria differ from each other, the information
about the criteria improvements has to be pre-processed to
give some instructions on the directions to take and about
the negotiated issues. It is the same for the evaluation of a
proposal: each agent has to convert the information about
the issues to update its private information and to obtain
the values of each attribute of the decision criteria.
7. OUR PROTOCOL
Formally, we consider negotiations where a set of
players A = {1, 2, . . . , m} and a player a are negotiating over
a set Q of size q. The player a is the protocol
mediator, the mediator agent of the agentification. The
utility/preference function of a player k ∈ A ∪ {a} is Uk,
defined using MYRIAD, as presented in Section 4, with a set
Nk of criteria, Xk its set of options, and so on. An offer is a
vector P = (P1, P2, · · · , Pm), a partition of Q, in which Pk is
player k's share of Q. We have P ∈ P where P is the set of
admissible proposals, a finite set. Note that P is determined
using all players' general constraints on the proposals and Q.
Moreover, let ˜P denote a particular proposal defined as a's
preferred proposal.
We also have the following notation: δk is the
threshold decrease factor of player k, Φk : Pk → Xk is player
k's function to convert a proposal to an option, and Ψk is
the function indicating on which points P has to be improved,
with Ψ̄k its dual function, indicating on which points no
improvement is necessary. Ψ̄k is obtained using the dual function of
ωC(Hk, x):
ω̃C(Hk, x) = ∫_0^1 [ Hk(x) − Hk(τ xC, xNk\C) ] / ẼC(τ, x) dτ
where ẼC(τ, x) is the cost/effort to go from (τ xC, xNk\C)
to x.
In period t of our consensus procedure, player a proposes
an agreement P. All players k ∈ A respond to a by
accepting or rejecting P. The responses are made simultaneously.
If all players k ∈ A accept the offer, the game ends. If any
player k rejects P, then the next period t+1 begins: player a
makes another proposal P by taking into account
information provided by the players and the ones that have rejected
P apply a penalty. Therefore, our negotiation protocol can
be as follows:
Protocol P1.
• At the beginning, we set period t = 0
• a makes a proposal P ∈ P that has not been
proposed before.
• Wait that all players of A give their opinion
Yes or No to the player a. If all players
agree on P, the latter is chosen. Otherwise
t is incremented and we go back to previous
point.
• If there is no more offer left from P, the
default offer ˜P will be chosen.
• The utility of players regarding a given
offer decreases over time. More precisely, the
utility of player k ∈ A at period t regarding
offer P is Uk(Φk(Pk), t) = ft(Uk(Φk(Pk))),
where one can take for instance ft(x) = x · (δk)^t
or ft(x) = x − δk · t as the penalty
function.
Lemma 1. Protocol P1 has at least one subgame perfect
equilibrium.1
Proof : Protocol P1 is first transformed into a game in
extensive form. To this end, one shall specify the order in which
the responders A react to the offer P of a. However the
order in which the players answer has no influence on the
course of the game and in particular on their personal
utility. Hence protocol P1 is strictly equivalent to a game in
1 A subgame perfect equilibrium is an equilibrium such that
players' strategies constitute a Nash equilibrium in every
subgame of the original game [18, 16]. A Nash equilibrium
is a set of strategies, one for each player, such that no player
has incentive to unilaterally change his/her action [15].
extensive form, considering any order of the players A. This
game is clearly finite since P is finite and each offer can
only be proposed once. Finally P1 corresponds to a game
with perfect information. We end the proof by using a
classical result stating that any finite game in extensive form
with perfect information has at least one subgame perfect
equilibrium (see e.g. [16]).
Rational players (in the sense of von Neumann and
Morgenstern) involved in protocol P1 will necessarily come up
with a subgame perfect equilibrium.
Example 1. Consider an example with A = {1, 2} and
P = {P1, P2, P3} where the default offer is P1. Assume
that ft(x) = x − 0.1 t. Consider the following table giving
the utilities at t = 0.
     P1    P2    P3
a    1     0.8   0.7
1    0.1   0.7   0.5
2    0.1   0.3   0.8
It is easy to see that there is one single subgame perfect
equilibrium for protocol P1 corresponding to these values. This
equilibrium consists of the following choices: first a proposes
P3; player 1 rejects this offer; a then proposes P2 and both
players 1 and 2 accept, otherwise they are threatened to
receive the offer P1, which is worse for them. Finally, offer P2 is
chosen. Option P1 is the best one for a, but the two other players
vetoed it. It is interesting to point out that, even though a
prefers P2 to P3, offer P3 is proposed first and this makes
P2 accepted. If a proposes P2 first, then the subgame
perfect equilibrium in this situation is P3. To sum up, the
less preferred options have to be proposed first in order to
finally get the best one. But this entails a waste of time.
Analysing the previous example, one sees that the game
outcome at the equilibrium is P2, which is not very attractive
for player 2. Option P3 seems more balanced since no player
judges it badly. It could be seen as a better solution, a
consensus among the agents.
In order to introduce this notion of balancedness in the
protocol, we introduce a condition under which a player will
be obliged to accept the proposal, reducing the autonomy
of the agents but for increasing rationality and cooperation.
More precisely if the utility of a player is larger than a given
threshold then acceptance is required. The threshold
decreases over time so that players have to make more and
more concession. Therefore, the protocol becomes as
follows.
Protocol P2.
• At the beginning we set period t = 0
• a makes a proposal P ∈ P that has not been
proposed before.
• Wait that all players of A give their opinion
Yes or No to the player a. A player k must
accept the offer if Uk(Φk(Pk)) ≥ ρk(t) where
ρk(t) tends to zero when t grows.
Moreover there exists T such that for all t ≥ T,
ρk(t) = 0. If all players agree on P, the
latter is chosen. Otherwise t is incremented
and we go back to previous point.
• If there is no more offer left from P, the
default offer ˜P will be chosen.
One can show exactly as in Lemma 1 that protocol P2
has at least one subgame perfect equilibrium. We expect
that protocol P2 provides a solution not too far from P∗,
so it favours fairness among the players. Therefore, our
cooperation-based multilateral multi-issue protocol is the
following:
Protocol P.
• At the beginning we set period t = 0
• a makes a proposal P ∈ P that has not
been proposed before, considering Ψk(Pt)
and Ψ̄k(Pt) for all players k ∈ A.
• Wait that all players of A give their opinion
(Yes, Ψ̄k(Pt)) or (No, Ψk(Pt)) to the
player a. A player k must accept the offer
if Uk(Φk(Pk)) ≥ ρk(t) where ρk(t) tends to
zero when t grows. Moreover there exists
T such that for all t ≥ T, ρk(t) = 0. If
all players agree on P, the latter is chosen.
Otherwise t is incremented and we go back
to previous point.
• If there is no more offer left from P, the
default offer ˜P will be chosen.
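One round-by-round reading of Protocol P can be sketched as follows. This is our own illustration under the assumptions of this section: the mediator's choice of the next admissible offer is left abstract (in the paper it is driven by the mediator's preferences, the constraints and the collected recommendations), and the responder functions stand for the negotiators' threshold-based decisions.

def negotiate(admissible, default, pick, responders):
    # admissible: finite set of admissible proposals P; default: the offer ~P
    # pick(remaining, improve, keep): the mediator's next offer (left abstract)
    # responders: dict agent -> function(P, t) returning ("accept"|"reject", info)
    proposed, improve, keep, t = set(), {}, {}, 0
    while True:
        remaining = admissible - proposed
        if not remaining:
            return default                       # no admissible offer left
        P = pick(remaining, improve, keep)
        proposed.add(P)
        answers = {k: r(P, t) for k, r in responders.items()}   # simultaneous replies
        if all(v == "accept" for v, _ in answers.values()):
            return P                             # unanimous agreement
        for k, (v, info) in answers.items():
            (keep if v == "accept" else improve)[k] = info
        t += 1                                   # thresholds rho_k(t) keep decreasing

# toy run: two agents accept any offer scoring at least 0.9 - 0.2*t
score = {"P1": 0.4, "P2": 0.7, "P3": 0.95}
resp = {a: (lambda P, t: ("accept", None) if score[P] >= 0.9 - 0.2 * t else ("reject", None))
        for a in ("H1", "H2")}
print(negotiate(set(score), "P1", lambda rem, *_: max(rem, key=score.get), resp))   # -> P3

In the toy run the highest-scoring offer is accepted immediately because it already exceeds the agents' initial thresholds; with lower scores, the decreasing thresholds and, eventually, the default offer take over.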
8. EXPERIMENTS
We developed a MAS using the widely used JADE agent
platform [1]. This MAS is designed to be as general as
possible (e.g. a general framework to specialise according to the
application) and enable us to make some preliminary
experiments. The experiments aim at verifying that our approach
gives solutions as close as possible to the Maximin solution
and in a small number of rounds and hopefully in a short
time since our context is highly cooperative. We defined
the two types of agents and their behaviours as introduced
in section 6. The agents and their behaviours correspond
to the main classes of our prototype, NegotiatorAgent
and NegotiatorBehaviour for the negotiator agents, and
MediatorAgent and MediatorBehaviour for the mediator
agent. These classes extend JADE classes and integrate
MYRIAD into the agents, reducing the amount of
communications in the system. Some functionalities depending on
the application have to be implemented according to the
application by extending these classes. In particular, all
conversion parts of the agents have to be specified according to
the application since to convert a proposal into decision
criteria, we need to know, first, this model and the correlations
between the proposals and this model.
First, to illustrate our protocol, we present a simple
example of our dispatch problem. In this example, we have
three hospitals, H1, H2 and H3. Each hospital can receive
victims having a particular pathology in such a way that
H1 can receive patients with the pathology burn, surgery
or orthopedic, H2 can receive patients with the pathology
surgery, orthopedic or cardiology and H3 can receive
patients with the pathology burn or cardiology. All the
hospitals have similar decision criteria reflecting their preferences
on the level of congestion they can face for the overall
hospital and the different services available, as briefly explained
for hospital H1 hereafter.
For hospital H1, the preference model, fig. 3, is composed
of five criteria. These criteria correspond to the preferences
on the pathologies the hospital can treat. In the case of
Figure 3: The H1 preference model in MYRIAD.
the pathology burn, the corresponding criterion, also named
burn as shown in fig. 3, represents the preferences of H1
according to the value of Cburn which is the current capacity
of burn. Therefore, the utility function of this criterion
represents a preference such that the more there are patients of
this pathology in the hospital, the less the hospital may
satisfy them, and this with an initial capacity. In addition to
reflecting this kind of viewpoint, the aggregation function as
defined in MYRIAD introduces a veto on the criteria burn,
surgery, orthopedic and EReceipt, where EReceipt is the
criterion for the preferences about the capacity to receive a
number of patients at the same time.
In this simplified example, the physician has no
particular preferences on the dispatch and the mediator agent
chooses a proposal randomly in a subset of the set of
admissibility. This subset has to satisfy as much as possible the
recommendations made by the hospitals. To solve this
problem, for this example, we decided to solve a linear problem
with the availability constraints and the recommendations
as linear constraints on the dispatch values. The set of
admissibility is then obtained by solving this linear problem
by the use of Prolog. Moreover, only the recommendations
on how to improve a proposal are taken into account. The
problem to solve is then to dispatch to hospital H1, H2 and
H3, the set of victims composed of 5 victims with the
pathology burn, 10 with surgery, 3 with orthopedic and 7 with
cardiology. The availabilities of the hospitals are as
presented in the following table.
Available   Overall   burn   surg.   orthop.   cardio.
H1          11        4      8       10        -
H2          25        -      3       4         10
H3          7         10     -       -         3
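The admissible set for this instance can also be reproduced with a small brute-force search. The paper obtains it by solving a linear problem in Prolog, so the sketch below is only an illustration of the constraints involved, with the capacities read from the (reconstructed) table above.

from itertools import product

victims = {"burn": 5, "surgery": 10, "orthopedic": 3, "cardiology": 7}
capacity = {                      # per-pathology capacities, as read from the table
    "H1": {"burn": 4, "surgery": 8, "orthopedic": 10},
    "H2": {"surgery": 3, "orthopedic": 4, "cardiology": 10},
    "H3": {"burn": 10, "cardiology": 3},
}
overall = {"H1": 11, "H2": 25, "H3": 7}

def splits(n, caps):
    # all ways to split n victims over the hospitals that can take the pathology
    hs = list(caps)
    for combo in product(*(range(min(caps[h], n) + 1) for h in hs)):
        if sum(combo) == n:
            yield dict(zip(hs, combo))

def admissible():
    per_path = {p: list(splits(n, {h: c[p] for h, c in capacity.items() if p in c}))
                for p, n in victims.items()}
    for combo in product(*per_path.values()):
        load = {h: 0 for h in overall}
        for split in combo:
            for h, k in split.items():
                load[h] += k
        if all(load[h] <= overall[h] for h in overall):
            yield dict(zip(per_path.keys(), combo))

print(sum(1 for _ in admissible()))   # size of the admissible set for this instance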
We obtain a multiagent system with the mediator agent
and three agents of type negotiator for the three hospitals
in the problem. The hospitals' thresholds are fixed
approximately at the level where an evaluation is considered as
good. To start, the negotiator agents send their
availabilities. The mediator agent makes a proposal chosen randomly
in admissible set obtained with these availabilities as
linear constraints. This proposal is the vector P0 = [[H1,burn,
3], [H1, surgery, 8], [H1, orthopaedic, 0], [H2, surgery, 2],
[H2, orthopaedic, 3], [H2, cardiology, 6], [H3, burn, 2], [H3,
cardiology, 1]] and the mediator sends propose(P0) to H1,
H2 and H3 for approval. Each negotiator agent evaluates
this proposal and answers back by accepting or rejecting P0:
• Agent H1 rejects this offer since its evaluation is very
far from the threshold (0.29, a bad score) and gives
a recommendation to improve burn and surgery by
sending the message
reject_proposal([burn,surgery]);
• Agent H2 accepts this offer by sending the message
accept_proposal(), the proposal evaluation being
good;
• Agent H3 accepts P0 by sending the message accept_
proposal(), the proposal evaluation being good.
Just with the recommendations provided by agent H1, the
mediator is able to make a new proposal by restricting the
value of burn and surgery. The new proposal obtained is
then P1 = [[H1,burn, 0], [H1, surgery, 8], [H1, orthopaedic,
1], [H2, surgery, 2], [H2, orthopaedic, 2], [H2, cardiology,
6], [H3, burn, 5], [H3, cardiology, 1]]. The mediator sends
propose(P1) to the negotiator agents. H1, H2 and H3 answer
back by sending the message accept_proposal(), P1 being
evaluated with a high enough score to be acceptable, and
also considered as a good proposal when using the
explanation function of MYRIAD. An agreement is reached with
P1. Note that the evaluation of P1 by H3 has decreased in
comparison with P0, but not enough to be rejected, and that
this solution is the Pareto one, P∗.
Other examples have been tested with the same settings:
issues taking values in ℕ, three negotiator agents and the same mediator
agent, with no preference model but selecting randomly the
proposal. We obtained solutions either equal or close to the
Maximin solution, the distance in terms of standard deviation
being less than 0.0829, the evaluations not far from the ones
obtained with P∗, and with fewer than seven proposals made.
This shows us that we are able to solve this multi-issue
multilateral negotiation problem in a simple and efficient way,
with solutions close to the Pareto solution.
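The Maximin benchmark and the dispersion of the agents' evaluations used in these comparisons can be computed directly once every candidate dispatch has been scored by every agent. The sketch below is ours, with evaluate(agent, proposal) assumed to return the agent's MYRIAD score in [0, 1].

def maximin(proposals, agents, evaluate):
    # the egalitarian benchmark: the proposal whose worst-off agent is best off
    return max(proposals, key=lambda P: min(evaluate(a, P) for a in agents))

def evaluation_spread(P, agents, evaluate):
    # dispersion of the agents' evaluations of P (one reading of the distance
    # measure reported above)
    scores = [evaluate(a, P) for a in agents]
    mean = sum(scores) / len(scores)
    return (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5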
9. CONCLUSION AND FUTURE WORK
This paper presents a new protocol to address
multilateral multi-issue negotiation in a cooperative context. The
first main contribution is that we take into account complex
inter-dependencies between multiple issues with the use of
a complex preference modelling. This contribution is
reinforced by the use of multi-issue negotiation in a multilateral
context. Our second contribution is the use of sharp
recommendations in the protocol to help in accelerating the search
of a consensus between the cooperative agents and in finding
an optimal solution. We have also shown that the protocol
has subgame perfect equilibria and these equilibria converge
to the usual maximum solution. Moreover, we tested this
protocol in a crisis management context where the
negotiation aim is to decide where to evacuate a whole set of injured people
to predefined hospitals.
We have already developed a first MAS, in particular
integrating MYRIAD, to test this protocol in order to know
more about its efficiency in terms of solution quality and
quickness in finding a consensus. This prototype enabled
us to solve some examples with our approach and the
results we obtained are encouraging since we obtained quickly
good agreements, close to the Pareto solution, in the light
of the initial constraints of the problem: the availabilities.
We still have to improve our MAS by taking into account
the two types of recommendations and by adding a
preference model to the mediator of our system. Moreover, a
comparative study has to be done in order to evaluate the
performance of our framework against the existing ones and
against some variations on the protocol.
10. ACKNOWLEDGEMENT
This work is partly funded by the ICIS research project
under the Dutch BSIK Program (BSIK 03024).
11. REFERENCES
[1] JADE. http://jade.tilab.com/.
[2] P. Faratin, C. Sierra, and N. R. Jennings. Using
similarity criteria to make issue trade-offs in
automated negotiations. Artificial Intelligence,
142(2):205-237, 2003.
[3] S. S. Fatima, M. Wooldridge, and N. R. Jennings.
Optimal negotiation of multiple issues in incomplete
information settings. In 3rd International Joint
Conference on Autonomous Agents and Multiagent
Systems (AAMAS"04), pages 1080-1087, New York,
USA, 2004.
[4] S. S. Fatima, M. Wooldridge, and N. R. Jennings. A
comparative study of game theoretic and evolutionary
models of bargaining for software agents. Artificial
Intelligence Review, 23:185-203, 2005.
[5] S. S. Fatima, M. Wooldridge, and N. R. Jennings. On
efficient procedures for multi-issue negotiation. In 8th
International Workshop on Agent-Mediated Electronic
Commerce(AMEC"06), pages 71-84, Hakodate, Japan,
2006.
[6] M. Grabisch. The application of fuzzy integrals in
multicriteria decision making. European J. of
Operational Research, 89:445-456, 1996.
[7] M. Grabisch, T. Murofushi, and M. Sugeno. Fuzzy
Measures and Integrals. Theory and Applications
(edited volume). Studies in Fuzziness. Physica Verlag,
2000.
[8] M. Hemaissia, A. El Fallah-Seghrouchni,
C. Labreuche, and J. Mattioli. Cooperation-based
multilateral multi-issue negotiation for crisis
management. In 2th International Workshop on
Rational, Robust and Secure Negotiation (RRS"06),
pages 77-95, Hakodate, Japan, May 2006.
[9] T. Ito, M. Klein, and H. Hattori. A negotiation
protocol for agents with nonlinear utility functions. In
AAAI, 2006.
[10] M. Klein, P. Faratin, H. Sayama, and Y. Bar-Yam.
Negotiating complex contracts. Group Decision and
Negotiation, 12:111-125, March 2003.
[11] C. Labreuche. Determination of the criteria to be
improved first in order to improve as much as possible
the overall evaluation. In IPMU 2004, pages 609-616,
Perugia, Italy, 2004.
[12] C. Labreuche and F. Le Huédé. MYRIAD: a tool suite
for MCDA. In EUSFLAT"05, pages 204-209,
Barcelona, Spain, 2005.
[13] R. Y. K. Lau. Towards genetically optimised
multi-agent multi-issue negotiations. In Proceedings of
the 38th Annual Hawaii International Conference on
System Sciences (HICSS"05), Big Island, Hawaii, 2005.
[14] R. J. Lin. Bilateral multi-issue contract negotiation for
task redistribution using a mediation service. In Agent
Mediated Electronic Commerce VI (AMEC"04), New
York, USA, 2004.
[15] J. F. Nash. Non cooperative games. Annals of
Mathematics, 54:286-295, 1951.
[16] G. Owen. Game Theory. Academic Press, New York,
1995.
[17] V. Robu, D. J. A. Somefun, and J. A. L. Poutré.
Modeling complex multi-issue negotiations using
utility graphs. In 4th International Joint Conference
on Autonomous agents and multiagent systems
(AAMAS"05), pages 280-287, 2005.
[18] A. Rubinstein. Perfect equilibrium in a bargaining
model. Econometrica, 50:97-109, jan 1982.
[19] L.-K. Soh and X. Li. Adaptive, confidence-based
multiagent negotiation strategy. In 3rd International
Joint Conference on Autonomous agents and
multiagent systems (AAMAS"04), pages 1048-1055,
Los Alamitos, CA, USA, 2004.
[20] H.-W. Tung and R. J. Lin. Automated contract
negotiation using a mediation service. In 7th IEEE
International Conference on E-Commerce Technology
(CEC"05), pages 374-377, Munich, Germany, 2005.
950 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) | multi-criterion decision make;decision making;automate negotiation;multi-agent system;cooperative agent;modelling;myriad;multilateral negotiation;negotiation protocol;automated negotiation;multi-issue negotiation;crisis management;negotiation strategy |
train_I-50 | Agents, Beliefs, and Plausible Behavior in a Temporal Setting | Logics of knowledge and belief are often too static and inflexible to be used on real-world problems. In particular, they usually offer no concept for expressing that some course of events is more likely to happen than another. We address this problem and extend CTLK (computation tree logic with knowledge) with a notion of plausibility, which allows for practical and counterfactual reasoning. The new logic CTLKP (CTLK with plausibility) includes also a particular notion of belief. A plausibility update operator is added to this logic in order to change plausibility assumptions dynamically. Furthermore, we examine some important properties of these concepts. In particular, we show that, for a natural class of models, belief is a KD45 modality. We also show that model checking CTLKP is PTIME-complete and can be done in time linear with respect to the size of models and formulae. | 1. INTRODUCTION
Notions like time, knowledge, and beliefs are very
important for analyzing the behavior of agents and multi-agent
systems. In this paper, we extend modal logics of time and
knowledge with a concept of plausible behavior: this notion
is added to the language of CTLK [19], which is a
straightforward combination of the branching-time temporal logic
CTL [4, 3] and standard epistemic logic [9, 5].
In our approach, plausibility can be seen as a temporal
property of behaviors. That is, some behaviors of the
system can be assumed plausible and others implausible, with
the underlying idea that the latter should perhaps be
ignored in practical reasoning about possible future courses
of action. Moreover, behaviors can be formally understood
as temporal paths in the Kripke structure modeling a
multiagent system. As a consequence, we obtain a language to
reason about what can (or must) plausibly happen. We
propose a particular notion of beliefs (inspired by [20, 7]),
defined in terms of epistemic relations and plausibility. The
main intuition is that beliefs are facts that an agent would
know if he assumed that only plausible things could happen.
We believe that humans use such a concept of plausibility
and practical beliefs quite often in their everyday
reasoning. Restricting one"s reasoning to plausible possibilities is
essential to make the reasoning feasible, as the space of all
possibilities is exceedingly large in real life. We investigate
some important properties of plausibility, knowledge, and
belief in this new framework. In particular, we show that
knowledge is an S5 modality, and that beliefs satisfy
axioms K45 in general, and KD45 for the class of plausibly
serial models. Finally, we show that the relationship
between knowledge and belief for plausibly serial models is
natural and reflects the initial intuition well. We also show
how plausibility assumptions can be specified in the object
language via a plausibility update operator, and we study
properties of such updates. Finally, we show that model
checking of the new logic is no more complex than model
checking CTL and CTLK.
Our ultimate goal is to come up with a logic that
allows the study of strategies, time, knowledge, and
plausible/rational behavior under both perfect and imperfect
information. As combining all these dimensions is highly
nontrivial (cf. [12, 14]) it seems reasonable to split this task.
While this paper deals with knowledge, plausibility, and
belief, the companion paper [11] proposes a general framework
for multi-agent systems that regard game-theoretical
rationality criteria like Nash equilibrium, Pareto optimality, etc.
The latter approach is based on the more powerful logic
ATL [1].
The paper is structured as follows. Firstly, we briefly
present branching-time logic with knowledge, CTLK. In
Section 3 we present our approach to plausibility and
formally define CTLK with plausibility. We also show how
temporal formulae can be used to describe plausible paths,
and we compare our logic with existing related work. In
Section 4, properties of knowledge, belief, and plausibility are
explored. Finally, we present verification complexity results
for CTLKP in Section 5.
2. BRANCHING TIME AND KNOWLEDGE
In this paper we develop a framework for agents' beliefs
about how the world can (or must) evolve. Thus, we need a
notion of time and change, plus a notion of what the agents
are supposed to know in particular situations. CTLK [19]
is a straightforward combination of the computation tree
logic CTL [4, 3] and standard epistemic logic [9, 5].
CTL includes operators for temporal properties of systems: the path quantifier E (there is a path), together with the temporal operators f (in the next state), 2 (always from now on) and U (until); additional operators A (for every path) and ♦ (sometime in the future) are defined in the usual way. Every occurrence of a temporal operator is preceded by exactly one path quantifier in CTL (this variant of the language is sometimes called vanilla CTL). Epistemic logic uses operators for representing agents' knowledge: Kaϕ is read as agent a knows that ϕ.
Let Π be a set of atomic propositions with a typical
element p, and Agt = {1, ..., k} be a set of agents with a typical
element a. The language of CTLK consists of formulae ϕ,
given as follows:
ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | Eγ | Kaϕ
γ ::= fϕ | 2 ϕ | ϕU ϕ.
We will sometimes refer to formulae ϕ as (vanilla) state
formulae and to formulae γ as (vanilla) path formulae.
The semantics of CTLK is based on Kripke models M =
⟨Q, R, ∼1, ..., ∼k, π⟩, which include a nonempty set of states
Q, a state transition relation R ⊆ Q × Q, epistemic
indistinguishability relations ∼a⊆ Q × Q (one per agent), and a
valuation of propositions π : Π → P(Q). We assume that
relation R is serial and that all ∼a are equivalence relations.
A path λ in M refers to a possible behavior (or
computation) of system M, and can be represented as an infinite
sequence of states that follow relation R, that is, a sequence
q0q1q2... such that qiRqi+1 for every i = 0, 1, 2, ... We
denote the ith state in λ by λ[i]. The set of all paths in M
is denoted by ΛM (if the model is clear from context, M
will be omitted). A q-path is a path that starts from q,
i.e., λ[0] = q. A q-subpath is a sequence of states, starting
from q, which is a subpath of some path in the model, i.e.
a sequence q0q1... such that q = q0 and there are states q'0, ..., q'i such that q'0 ... q'i q0q1... ∈ ΛM. (For CTLK models, λ is a q-subpath iff it is a q-path; this will not always be so when plausible paths are introduced.)
The semantics of CTLK is
defined as follows:
M, q |= p iff q ∈ π(p);
M, q |= ¬ϕ iff M, q ⊭ ϕ;
M, q |= ϕ ∧ ψ iff M, q |= ϕ and M, q |= ψ;
M, q |= E fϕ iff there is a q-path λ such that M, λ[1] |= ϕ;
M, q |= E2 ϕ iff there is a q-path λ such that M, λ[i] |= ϕ
for every i ≥ 0;
M, q |= EϕU ψ iff there is a q-path λ and i ≥ 0 such that
M, λ[i] |= ψ, and M, λ[j] |= ϕ for every 0 ≤ j < i;
M, q |= Kaϕ iff M, q' |= ϕ for every q' such that q ∼a q'.
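To make the semantics above concrete, here is a minimal Python sketch (not part of the paper) of global model checking for this vanilla CTL/epistemic fragment over a finite model. The dictionary-based encoding of the model and all function and variable names are our own illustrative choices.

```python
# Minimal sketch (not from the paper): labeling-style model checking of the
# vanilla CTL fragment above on a finite Kripke model, plus the knowledge operator.

def sat_EX(states, R, phi_states):
    """States with some R-successor satisfying phi (E 'next' phi)."""
    return {q for q in states if R[q] & phi_states}

def sat_EG(states, R, phi_states):
    """Greatest fixpoint: states from which some path stays in phi forever (E 'always' phi)."""
    Z = set(phi_states)
    while True:
        Z2 = {q for q in Z if R[q] & Z}
        if Z2 == Z:
            return Z
        Z = Z2

def sat_EU(states, R, phi1_states, phi2_states):
    """Least fixpoint: states with a path satisfying phi1 U phi2."""
    Z = set(phi2_states)
    while True:
        Z2 = Z | {q for q in phi1_states if R[q] & Z}
        if Z2 == Z:
            return Z
        Z = Z2

def sat_K(states, indist, phi_states):
    """K_a phi holds at q iff phi holds in every state indistinguishable from q."""
    return {q for q in states if indist[q] <= phi_states}

# Tiny example: two states, q0 -> q1 -> q1, proposition p holds in q1 only.
states = {"q0", "q1"}
R = {"q0": {"q1"}, "q1": {"q1"}}
p = {"q1"}
indist = {"q0": {"q0", "q1"}, "q1": {"q0", "q1"}}   # agent a cannot tell q0 from q1
print(sat_EU(states, R, states, p))                 # E(true U p): both states
print(sat_K(states, indist, sat_EG(states, R, p)))  # K_a E'always' p: empty set
```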
3. EXTENDING TIME AND KNOWLEDGE
WITH PLAUSIBILITY AND BELIEFS
In this section we discuss the central concept of this
paper, i.e. the concept of plausibility. First, we outline the
idea informally. Then, we extend CTLK with the notion
of plausibility by adding plausible path operators Pl a and
physical path operator Ph to the logic. Formula Pl aϕ has
the intended meaning: according to agent a, it is plausible
that ϕ holds; formula Ph ϕ reads as: ϕ holds in all
physically possible scenarios (i.e., even in implausible ones). The
plausible path operator restricts statements only to those
paths which are defined to be sensible, whereas the
physical path operator generates statements about all paths that
may theoretically occur. Furthermore, we define beliefs on
top of plausibility and knowledge, as the facts that an agent
would know if he assumed that only plausible things could
happen. Finally, we discuss related work [7, 8, 20, 18, 16],
and compare it with our approach.
3.1 The Concept of Plausibility
It is well known how knowledge (or beliefs) can be
modeled with Kripke structures. However, it is not so obvious
how we can capture knowledge and beliefs in a sensible way
in one framework. Clearly, there should be a connection
between these two notions. Our approach is to use the
notion of plausibility for this purpose. Plausibility can serve
as a primitive concept that helps to define the semantics
of beliefs, in a similar way as indistinguishability of states
(represented by relation ∼a) is the semantic concept that
underlies knowledge. In this sense, our work follows [7, 20]:
essentially, beliefs are what an agent would know if he took
only plausible options into account. In our approach,
however, plausibility is explicitly seen as a temporal property.
That is, we do not consider states (or possible worlds) to be
more plausible than others but rather define some behaviors
to be plausible, and others implausible. Moreover,
behaviors can be formally understood as temporal paths in the
Kripke structure modeling a multi-agent system.
An actual notion of plausibility (that is, a particular set of
plausible paths) can emerge in many different ways. It may
result from observations and learning; an agent can learn
from its observations and see specific patterns of events as
plausible (a lot of people wear black shoes if they wear a
suit). Knowledge exchange is another possibility (e.g., an
agent a can tell agent b that player c always bluffs when he
is smiling). Game theory, with its rationality criteria
(undominated strategies, maxmin, Nash equilibrium etc.) is
another viable source of plausibility assumptions. Last but not
least, folk knowledge can be used to establish
plausibilityrelated classifications of behavior (players normally want
to win a game, people want to live).
In any case, restricting the reasoning to plausible
possibilities can be essential if we want to make the reasoning
feasible, as the space of all possibilities (we call them physical
possibilities in the rest of the paper) is exceedingly large in
real life. Of course, this does not exclude a more extensive
analysis in special cases, e.g. when our plausibility
assumptions do not seem accurate any more, or when the cost of
inaccurate assumptions can be too high (as in the case of high-budget business decisions). But even in these cases, we usually do not get rid of plausibility assumptions completely - we only revise them to make them more cautious. For instance, when planning to open an industrial plant in the UK, we will probably consider the possibility of our main contractor taking her life, but we will still not take into account the possibilities of an invasion of UFOs, England being destroyed by a meteorite, Fidel Castro becoming the British Prime Minister, etc. Note that this is fundamentally different from using a probabilistic model in which all these unlikely scenarios are assigned very low probabilities: in that case, they also have a very small influence on our final decision, but we must process the whole space of physical possibilities to evaluate the options.
To formalize this idea, we extend models of CTLK with
sets of plausible paths and add plausibility operators Pl a,
physical paths operator Ph , and belief operators Ba to the
language of CTLK. Now, it is possible to make statements
that refer to plausible paths only, as well as statements that
regard all paths that may occur in the system.
3.2 CTLK with Plausibility
In this section, we extend the logic of CTLK with
plausibility; we call the resulting logic CTLKP. Formally, the
language of CTLKP is defined as:
ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | Eγ | Pl aϕ | Ph ϕ | Kaϕ | Baϕ
γ ::= fϕ | 2 ϕ | ϕU ϕ.
For instance, we may claim it is plausible to assume that
a shop is closed after the opening hours, though the manager
may be physically able to open it at any time: Pl aA2 (late →
¬open) ∧ Ph E♦ (late ∧ open).
The semantics of CTLKP extends that of CTLK as
follows. Firstly, we augment the models with sets of plausible
paths. A model with plausibility is given as
M = ⟨Q, R, ∼1, ..., ∼k, Υ1, ..., Υk, π⟩, where ⟨Q, R, ∼1, ..., ∼k, π⟩ is a CTLK model, and Υa ⊆ ΛM is the set of paths in M that are plausible according to agent a. If we want to make it clear that Υa is taken from model M, we will write Υ^M_a. It seems worth emphasizing that this notion of plausibility is subjective and holistic. It is subjective because Υa represents agent a's subjective view on what is plausible - and indeed, different agents may have different ideas on plausibility (i.e., Υa may differ from Υb). It is holistic because Υa represents agent a's idea of the plausible behavior of the whole system (including the behavior of other agents).
Remark 1. In our models, plausibility is also global, i.e.,
plausibility sets do not depend on the state of the system.
Investigating systems, in which plausibility is relativized with
respect to states (like in [7]), might be an interesting avenue
of future work. However, such an approach - while obviously
more flexible - allows for potentially counterintuitive system
descriptions. For example, it might be the case that path λ
is plausible in q = λ[0], but the set of plausible paths in
q' = λ[1] is empty. That is, by following plausible path λ we
are bound to get to an implausible situation. But then, does
it make sense to consider λ as plausible?
Secondly, we use a non-standard satisfaction relation |=P ,
which we call plausible satisfaction. Let M be a CTLKP
model and P ⊆ ΛM be an arbitrary subset of paths in M (not necessarily any Υ^M_a). |=P restricts the evaluation of
temporal formulae to the paths given in P only. The
absolute satisfaction relation |= is defined as |=ΛM .
Let on(P) be the set of all states that lie on at least one
path in P, i.e. on(P) = {q ∈ Q | ∃λ ∈ P∃i (λ[i] = q)}. Now,
the semantics of CTLKP can be given through the
following clauses:
M, q |=P p iff q ∈ π(p);
M, q |=P ¬ϕ iff M, q ⊭P ϕ;
M, q |=P ϕ ∧ ψ iff M, q |=P ϕ and M, q |=P ψ;
M, q |=P E fϕ iff there is a q-subpath λ ∈ P such that
M, λ[1] |=P ϕ;
M, q |=P E2 ϕ iff there is a q-subpath λ ∈ P such that
M, λ[i] |=P ϕ for every i ≥ 0;
M, q |=P EϕU ψ iff there is a q-subpath λ ∈ P and i ≥ 0
such that M, λ[i] |=P ψ, and M, λ[j] |=P ϕ for every
0 ≤ j < i;
M, q |=P Pl aϕ iff M, q |=Υa ϕ;
M, q |=P Ph ϕ iff M, q |= ϕ;
M, q |=P Kaϕ iff M, q' |= ϕ for every q' such that q ∼a q';
M, q |=P Baϕ iff for all q' ∈ on(Υa) with q ∼a q', we have that M, q' |=Υa ϕ.
One of the main reasons for using the concept of
plausibility is that we want to define agents' beliefs out of more
primitive concepts - in our case, these are plausibility and
indistinguishability - in a way analogous to [20, 7]. If an
agent knows that ϕ, he must be sure about it. However,
beliefs of an agent are not necessarily about reliable facts.
Still, they should make sense to the agent; if he believes that
ϕ, then the formula should at least hold in all futures that
he envisages as plausible. Thus, beliefs of an agent may be
seen as things known to him if he disregards all non-plausible
possibilities.
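The following toy Python sketch (our own encoding, not from the paper) illustrates this difference for a purely propositional fact ϕ: knowledge quantifies over all epistemically indistinguishable states, while belief quantifies only over the indistinguishable states that lie on some plausible path, i.e. over on(Υa). Representing the (in general infinite) plausible paths as finite state lists is a simplification for illustration only.

```python
# Illustrative sketch: knowledge vs. belief for a propositional fact phi.

def on(plausible_paths):
    """States lying on at least one plausible path (paths given here as finite state lists)."""
    return {q for path in plausible_paths for q in path}

def knows(q, indist, phi_states):
    """K_a phi at q: phi holds in every state indistinguishable from q."""
    return indist[q] <= phi_states

def believes(q, indist, plausible_paths, phi_states):
    """B_a phi at q (propositional phi): phi holds in every indistinguishable state
    that occurs on some plausible path."""
    return (indist[q] & on(plausible_paths)) <= phi_states

# Example: q0 ~ q1 for agent a; phi holds only in q0; only paths through q0 are plausible.
indist = {"q0": {"q0", "q1"}, "q1": {"q0", "q1"}}
phi_states = {"q0"}
plausible = [["q0", "q0"]]                             # a single plausible path staying in q0
print(knows("q0", indist, phi_states))                 # False: q1 is an epistemic alternative
print(believes("q0", indist, plausible, phi_states))   # True: q1 is not on any plausible path
```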
We say that ϕ is M-true (M |= ϕ) if M, q |= ϕ for all
q ∈ QM. ϕ is valid (|= ϕ) if M |= ϕ for all models M. ϕ
is M-strongly true (M |≡ ϕ) if M, q |=P ϕ for all q ∈ QM
and all P ⊆ ΛM. ϕ is strongly valid ( |≡ ϕ) if M |≡ ϕ for all
models M.
Proposition 2. Strong truth and strong validity imply
truth and validity, respectively. The reverse does not hold.
Ultimately, we are going to be interested in normal (not
strong) validity, as parameterizing the satisfaction relation
with a set P is just a technical device for propagating sets
of plausible paths Υa into the semantics of nested formulae.
The importance of strong validity, however, lies in the fact
that |≡ ϕ ↔ ψ makes ϕ and ψ completely interchangeable,
while the same is not true for normal validity.
Proposition 3. Let Φ[ϕ/ψ] denote formula Φ in which
every occurrence of ψ was replaced by ϕ. Also, let |≡ ϕ ↔ ψ.
Then for all M, q, P: M, q |=P Φ iff M, q |=P Φ[ϕ/ψ] (in
particular, M, q |= Φ iff M, q |= Φ[ϕ/ψ]).
Note that |= ϕ ↔ ψ does not even imply that M, q |= Φ iff
M, q |= Φ[ϕ/ψ].
Figure 1: Guessing Robots game
Example 1 (Guessing Robots). Consider a simple
game with two agents a and b, shown in Figure 1. First,
a chooses a real number r ∈ [0, 1] (without revealing the number to b); then, b chooses a real number r' ∈ [0, 1].
The agents win the game (and collect EUR 1, 000, 000) if
both chose 1, otherwise they lose. Formally, we model the
game with a CTLKP model M, in which the set of states
Q includes qs for the initial situation, states qr, r ∈ [0, 1],
for the situations after a has chosen number r, and final
states qw, ql for the winning and the losing situation,
respectively. The transition relation is as follows: qsRqr and
qrRql for all r ∈ [0, 1]; q1Rqw, qwRqw, and qlRql. Moreover,
π(one) = {q1} and π(win) = {qw}. Player a has perfect information in the game (i.e., q ∼a q' iff q = q'), but player b does not distinguish between states qr (i.e., qr ∼b qr' for all r, r' ∈ [0, 1]). Obviously, the only sensible thing to do
for both agents is to choose 1 (using game-theoretical
vocabulary, these strategies are strongly dominant for the
respective players). Thus, there is only one plausible course of
events if we assume that our players are rational, and hence
Υa = Υb = {qsq1qwqw . . .}.
Note that, in principle, the outcome of the game is
uncertain: M, qs |= ¬A♦ win∧¬A2 ¬win. However, assuming
rationality of the players makes it only plausible that the game
must end up with a win: M, qs |= Pla A♦ win ∧ Plb A♦ win,
and the agents believe that this will be the case: M, qs |=
BaA♦ win ∧ BbA♦ win. Note also that, in any of the states
qr, agent b believes that a (being rational) has played 1:
M, qr |= Bb one for all r ∈ [0, 1].
3.3 Defining Plausible Paths with Formulae
So far, we have assumed that sets of plausible paths are
somehow given in models. In this section we present a
dynamic approach where an actual notion of plausibility can
be specified in the object language. Note that we want to
specify (usually infinite) sets of infinite paths, and we need a
finite representation of these structures. One logical solution
is given by using path formulae γ. These formulae describe
properties of paths; therefore, a specific formula can be used
to characterize a set of paths. For instance, think about a
country in Africa where it has never snowed. Then,
plausible paths might be defined as ones in which it never snows,
i.e., all paths that satisfy 2 ¬snows. Formally, let γ be a
CTLK path formula. We define |γ|M to be the set of paths
that satisfy γ in model M:
| fϕ|M = {λ | M, λ[1] |= ϕ}
|2 ϕ|M = {λ | ∀i (M, λ[i] |= ϕ)}
|ϕ1U ϕ2|M = {λ | ∃i (M, λ[i] |= ϕ2 ∧ ∀j (0 ≤ j < i ⇒ M, λ[j] |= ϕ1))}.
Moreover, we define the plausible paths model update as
follows. Let M = ⟨Q, R, ∼1, ..., ∼k, Υ1, ..., Υk, π⟩ be a CTLKP model, and let P ⊆ ΛM be a set of paths. Then M^{a,P} = ⟨Q, R, ∼1, ..., ∼k, Υ1, ..., Υa−1, P, Υa+1, ..., Υk, π⟩ denotes model M with a's set of plausible paths reset to P.
Now we can extend the language of CTLKP with
formulae (set-pla γ)ϕ with the intuitive reading: suppose that
γ exactly characterizes the set of plausible paths, then ϕ
holds, and formal semantics given below:
M, q |=P (set-pla γ)ϕ iff M^{a,|γ|M}, q |=P ϕ.
We observe that this update scheme is similar to the one
proposed in [13].
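As an illustration of how path formulae can finitely characterize sets of paths, the sketch below (our own toy encoding, not the representation used in the paper) evaluates vanilla path formulae on ultimately periodic "lasso" paths and uses the result to perform a (set-pla γ)-style update on a finite collection of such paths. State formulae are abstracted as predicates on states.

```python
# Illustrative sketch: path formulae on lasso paths prefix·loop^omega, and the update.

def unfold(prefix, loop, n):
    """First n states of the path prefix·loop^omega."""
    seq = list(prefix)
    while len(seq) < n:
        seq.extend(loop)
    return seq[:n]

def sat_next(holds, prefix, loop):
    """'next phi': phi holds at position 1."""
    return holds(unfold(prefix, loop, 2)[1])

def sat_always(holds, prefix, loop):
    """'always phi': phi holds on the prefix and on every loop state."""
    return all(holds(q) for q in list(prefix) + list(loop))

def sat_until(holds1, holds2, prefix, loop):
    """'phi1 U phi2': phi2 holds somewhere, phi1 at all earlier positions.
    For state predicates it suffices to inspect the prefix plus one loop round."""
    for q in unfold(prefix, loop, len(prefix) + len(loop)):
        if holds2(q):
            return True
        if not holds1(q):
            return False
    return False

def update(paths, path_formula):
    """(set-pl_a gamma): reset agent a's plausible paths to those satisfying gamma."""
    return [p for p in paths if path_formula(*p)]

# Example: plausible paths are those on which it never snows.
paths = [(["sun"], ["sun"]), (["sun"], ["snow", "sun"])]
never_snows = lambda prefix, loop: sat_always(lambda q: q != "snow", prefix, loop)
print(update(paths, never_snows))   # keeps only the first path
```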
3.4 Comparison to Related Work
Several modal notions of plausibility were already
discussed in the existing literature [7, 8, 20, 18, 16]. In these
papers, like in ours, plausibility is used as a primitive
semantic concept that helps to define beliefs on top of agents'
knowledge. A similar idea was introduced by Moses and
Shoham in [18]. Their work preceded both [7, 8] and
[20] - and although Moses and Shoham do not explicitly mention
the term plausibility, it seems appropriate to summarize
their idea first.
Moses and Shoham: Beliefs as Conditional Knowledge
In [18], beliefs are relativized with respect to a formula α
(which can be seen as a plausibility assumption expressed
in the object language). More precisely, worlds that satisfy α
can be considered as plausible. This concept is expressed via
symbols B^α_i ϕ; the index i ∈ {1, 2, 3} is used to distinguish between three different implementations of beliefs. The first version is given by B^α_1 ϕ ≡ K(α → ϕ) (here, unlike in most approaches, K is interpreted over all worlds and not only over the indistinguishable worlds). A drawback of this version is that if α is false, then everything will be believed with respect to α. The second version overcomes this problem: B^α_2 ϕ ≡ K(α → ϕ) ∧ (K¬α → Kϕ); now ϕ is only believed if it is known that ϕ follows from assumption α, and ϕ must be known if assumption α is known to be false. Finally, B^α_3 ϕ ≡ K(α → ϕ) ∧ ¬K¬α: if the assumption α is known to be false, nothing should be believed with respect to α. The strength of these different notions is given as follows: B^α_3 ϕ implies B^α_2 ϕ, and B^α_2 ϕ implies B^α_1 ϕ. In this approach,
belief is strongly connected to knowledge in the sense that
belief is knowledge with respect to a given assumption.
Friedman and Halpern: Plausibility Spaces
The work of Friedman and Halpern [7] extends the concepts
of knowledge and belief with an explicit notion of
plausibility; i.e., some worlds are more plausible for an agent
than others. To implement this idea, Kripke models are
extended with a function P which assigns a plausibility space P(q, a) = (Ω(q,a), ≼(q,a)) to every state, or more generally
every possible world q, and agent a. The plausibility space
is just a partially ordered subset of states/worlds; that is, Ω(q,a) ⊆ Q, and ≼(q,a) ⊆ Q × Q is a reflexive and transitive relation. Let S, T ⊆ Ω(q,a) be finite subsets of states; now, T is defined to be plausible given S with respect to P(q, a), denoted by S →P(q,a) T, iff all minimal points/states in S (with respect to ≼(q,a)) are also in T. (When there are infinite chains . . . ≼(q,a) q3 ≼(q,a) q2 ≼(q,a) q1, the definition is much more sophisticated; the interested reader is referred to [7] for more details.)
Friedman and
Halpern"s view to modal plausibility is closely related to
probability and, more generally, plausibility measures.
Logics of plausibility can be seen as a qualitative description of
agents' preferences/knowledge; logics of probability [6, 15],
on the other hand, offer a quantitative description.
The logic from [7] is defined by the following grammar:
ϕ ::= p | ϕ∧ϕ | ¬ϕ | Kaϕ | ϕ →a ϕ, where the semantics of
all operators except →a is given as usual, and formulae ϕ →a
ψ have the meaning that ψ is true in the most plausible
worlds in which ϕ holds. Formally, the semantics for →a
is given as: M, q |= ϕ →a ψ iff S^ϕ_{P(q,a)} →P(q,a) S^ψ_{P(q,a)}, where S^ϕ_{(q,a)} = {q' ∈ Ω(q,a) | M, q' |= ϕ} are the states in Ω(q,a) that satisfy ϕ. The idea of defining beliefs is given
by the assumption that an agent believes in something if he
knows that it is true in the most plausible worlds of Ω(q,a);
formally, this can be stated as Baϕ ≡ Ka(⊤ →a ϕ).
Friedman and Halpern have shown that the KD45 axioms are valid for operator Ba if plausibility spaces satisfy consistency (for all states q ∈ Q it holds that Ω(q,a) ⊆ {q' ∈ Q | q ∼a q'}) and normality (for all states q ∈ Q it holds that Ω(q,a) ≠ ∅); note that this normality is essentially seriality of states wrt plausibility spaces.
A temporal extension of the language
(mentioned briefly in [7], and discussed in more detail in [8])
uses the interpreted systems approach [10, 5]. A system R
is given by runs, where a run r : N → Q is a function from
time moments (modeled by N) to global states, and a time
point (r, i) is given by a time point i ∈ N and a run r. A
global state is a combination of local states, one per agent.
An interpreted system M = (R, π) is given by a system R
and a valuation of propositions π. Epistemic relations are
defined over time points, i.e., (r', m') ∼a (r, m) iff agent a's local states ra(m') and ra(m) of (r', m') and (r, m) are equal. Formulae are interpreted in a straightforward way with respect to interpreted systems, e.g. M, r, m |= Kaϕ iff M, r', m' |= ϕ for all (r', m') ∼a (r, m). Now, these are time
points that play the role of possible worlds; consequently,
plausibility spaces P(r,m,a) are assigned to each point (r, m)
and agent a.
Su et al.: KBC Logic
Su et al. [20] have developed a multi-modal,
computationally grounded logic with modalities K, B, and C (knowledge,
belief, and certainty). The computational model consists of
(global) states q = (q^vis, q^inv, q^per, Q^pls), where the environment is divided into a visible part (q^vis) and an invisible part (q^inv), and q^per captures the agent's perception of the visible part of the environment. External sources may provide the agent with information about the invisible part of a state, which results in a set of states Q^pls that are plausible for the agent.
Given a global state q, we additionally define Vis(q) = q^vis, Inv(q) = q^inv, Per(q) = q^per, and Pls(q) = Q^pls. The
semantics is given by an extension of interpreted systems [10,
5], here, it is called interpreted KBC systems. KBC
formulae are defined as ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | Kϕ | Bϕ | Cϕ.
The epistemic relation ∼vis is captured in the following way:
(r, i) ∼vis (r', i') iff Vis(r(i)) = Vis(r'(i')). The semantic clauses for belief and certainty are given below.
M, r, i |= Bϕ iff M, r', i' |= ϕ for all (r', i') with Vis(r'(i')) = Per(r(i)) and Inv(r'(i')) ∈ Pls(r(i))
M, r, i |= Cϕ iff M, r', i' |= ϕ for all (r', i') with Vis(r'(i')) = Per(r(i))
Thus, an agent believes ϕ if, and only if, ϕ is true in all
states which look like what he sees now and seem plausible
in the current state. Certainty is stronger: if an agent is
certain about ϕ, the formula must hold in all states with
a visible part equal to the current perception, regardless of
whether the invisible part is plausible or not.
The logic does not include temporal formulae, although
it might be extended with temporal operators, as time is
already present in KBC models.
What Are the Differences to Our Logic?
In our approach, plausibility is explicitly seen as a temporal
property, i.e., it is a property of temporal paths rather than
states. In the object language, this is reflected by the fact
that plausibility assumptions are specified through path
formulae. In contrast, the approach of [18] and [20] is static: not only do these logics lack operators for talking about time and/or change, but it is also states (rather than behaviors) that are assumed plausible or not in their semantics.
The differences to [7, 8] are more subtle. Firstly, the
framework of Friedman and Halpern is static in the sense
that plausibility is taken as a property of (abstract)
possible worlds. This formulation is flexible enough to allow for
incorporating time; still, in our approach, time is inherent
to plausibility rather than incidental.
Secondly, our framework is more computationally oriented.
The implementation of temporal plausibility in [7, 8] is based
on the interpreted systems approach with time points (r, m)
being subject to plausibility. As runs are included in time
points, they can also be defined plausible or implausible (indeed, Friedman and Halpern briefly mention how plausibility of runs can be embedded in their framework).
However, it also means that time points serve the role of
possible worlds in the basic formulation, which yields Kripke
structures with uncountable possible world spaces in all but
the most trivial cases.
Thirdly, [7, 8] build on linear time: a run (more precisely,
a time moment (r, m)) is fixed when a formula is interpreted.
In contrast, we use branching time with explicit
quantification over temporal paths. (To be more precise, time in [7] does implicitly branch at epistemic states: since (r, m) ∼a (r', m') iff a's local state corresponding to both time points is the same, i.e. ra(m) = ra(m'), the semantics of Kaϕ can be read as: for every run, and every moment on this run that yields the same local state as now, ϕ holds.)
We believe that branching time
is more suitable for non-deterministic domains (cf. e.g. [4]),
of which multi-agent systems are a prime example. Note
that branching time makes our notion of belief different from
Friedman and Halpern"s. Most notably, property Kϕ → Bϕ
is valid in their approach, but not in ours: an agent may
know that some course of events is in principle possible,
without believing that it can really become the case (see
Section 4.2). As Proposition 13 suggests, such a subtle
distinction between knowledge and beliefs is possible in our
approach because branching time logics allow for existential
quantification over runs.
Fourthly, while Friedman and Halpern"s models are very
flexible, they also enable system descriptions that may seem
counterintuitive. Suppose that (r, m) is plausible in itself
(formally: (r, m) is minimal wrt ≼(r,m,a)), but (r, m + 1) is
not plausible in (r, m + 1). This means that following the
plausible path makes it implausible (cf. Remark 1), which
is even stranger in the case of linear time. Combining the
argument with computational aspects, we suggest that our
approach can be more natural and straightforward for many
applications.
Last but not least, our logic provides a mechanism for
specifying (and updating) sets of plausible paths in the
object language. Thus, plausibility sets can be specified in a
succinct way, which is another feature that makes our
framework computation-friendly. The model checking results from
Section 5 are especially encouraging in this light.
4. PLAUSIBILITY, KNOWLEDGE, AND
BELIEFS IN CTLKP
In this section we study some relevant properties of
plausibility, knowledge, and beliefs; in particular, axioms KDT45
are examined. But first, we identify two important
subclasses of models with plausibility.
A CTLKP model is plausibly serial (or p-serial) for agent
a if every state of the system is part of a plausible path
according to a, i.e. on(Υa) = Q. As we will see further, a
weaker requirement is sometimes sufficient. We call a model
weakly p-serial if every state has at least one
indistinguishable counterpart which lies on a plausible path, i.e. for each
q ∈ Q there is a q' ∈ Q such that q ∼a q' and q' ∈ on(Υa).
Obviously, p-seriality implies weak p-seriality. We get the
following characterization of both model classes.
Proposition 4. M is plausibly serial for agent a iff
formula Pl aE f⊤ is valid in M. M is weakly p-serial for agent
a iff ¬KaPl aA f⊥ is valid in M.
4.1 Axiomatic Properties
Theorem 5. Axioms K, D, 4, and 5 for knowledge are
strongly valid, and axiom T is valid. That is, modalities Ka
form system S5 in the sense of normal validity, and KD45
in the sense of strong validity.
We do not include proofs here due to lack of space. The
interested reader is referred to [2], where detailed proofs are
given.
Proposition 6. Axioms K, 4, and 5 for beliefs are
strongly valid. That is, we have:
|≡ (Baϕ ∧ Ba(ϕ → ψ)) → Baψ, |≡ (Baϕ → BaBaϕ), and
|≡ (¬Baϕ → Ba¬Baϕ).
The next proposition concerns the consistency axiom
D: Baϕ → ¬Ba¬ϕ. It is easy to see that the axiom is not
valid in general: as we have no restrictions on plausibility
sets Υa, it may well be that Υa = ∅. In that case we have
Baϕ ∧ Ba¬ϕ for all formulae ϕ, because the set of states to
be considered becomes empty. However, it turns out that D
is valid for a very natural class of models.
Proposition 7. Axiom D for beliefs is not valid in the
class of all CTLKP models. However, it is strongly valid in
the class of weak p-serial models (and therefore also in the
class of p-serial models).
Moreover, as one may expect, beliefs do not have to be
always true.
Proposition 8. Axiom T for beliefs is not valid; i.e., ⊭ (Baϕ → ϕ). The axiom is not even valid in the class of
p-serial models.
Theorem 9. Belief modalities Ba form system K45 in
the class of all models, and KD45 in the class of weakly
plausibly serial models (in the sense of both normal and
strong validity). Axiom T is not even valid for p-serial
models.
4.2 Plausibility, Knowledge, and Beliefs
First, we investigate the relationship between knowledge
and plausibility/physicality operators. Then, we look at the
interaction between knowledge and beliefs.
Proposition 10. Let ϕ be a CTLKP formula, and M
be a CTLKP model. We have the following strong validities:
(i) |≡ Pl aKaϕ ↔ Kaϕ
(ii) |≡ Ph Kaϕ ↔ KaPh ϕ and |≡ KaPh ϕ ↔ Kaϕ
We now want to examine the relationship between
knowledge and belief. For instance, if agent a believes in
something, he knows that he believes it. Or, if he knows a fact,
he also believes that he knows it. On the other hand, for
instance, an agent does not necessarily believe in all the
things he knows. For example, we may know that an
invasion from another galaxy is in principle possible (KaE♦
invasion), but if we do not take this possibility as plausible
(¬Pl aE♦ invasion), then we reject the corresponding belief
in consequence (¬BaE♦ invasion). Note that this property
reflects the strong connection between belief and plausibility
in our framework.
Proposition 11. The following formulae are strongly
valid:
(i) Baϕ → KaBaϕ, (ii) KaBaϕ → Baϕ,
(iii) Kaϕ → BaKaϕ.
The following formulae are not valid:
(iv) Baϕ → BaKaϕ, (v) Kaϕ → Baϕ
The last invalidity is especially important: it is not the
case that knowing something implies believing in it. This
emphasizes that we study a specific concept of beliefs here.
Note that its specificity is not due to the plausibility-based
definition of beliefs. The reason lies rather in the fact that we
investigate knowledge, beliefs and plausibility in a temporal
framework, as Proposition 12 shows.
Proposition 12. Let ϕ be a CTLKP formula that does
not include any temporal operators. Then Kaϕ → Baϕ is
strongly valid, and in the class of p-serial models we have
even that |≡ Kaϕ ↔ Baϕ.
Moreover, it is important that we use branching time with
explicit quantification over paths; this observation is
formalized in Proposition 13.
Definition 1. We define the universal sublanguage of
CTLK in a way similar to [21]:
ϕu ::= p | ¬p | ϕu ∧ ϕu | ϕu ∨ ϕu | Aγu | Kaϕu,
γu ::= fϕu | 2 ϕu | ϕuU ϕu.
We call such ϕu universal formulae, and γu universal path
formulae.
Proposition 13. Let ϕu be a universal CTLK formula.
Then |≡ Kaϕu → Baϕu.
The following two theorems characterize the relationship
between knowledge and beliefs: first for the class of p-serial
models, and then, finally, for all models.
Theorem 14. The following formulae are strongly valid
in the class of plausibly serial CTLKP models:
(i) Baϕ ↔ KaPl aϕ, (ii) Kaϕ ↔ BaPh ϕ.
Theorem 15. Formula Baϕ ↔ KaPl a(E f⊤ → ϕ) is
strongly valid.
Note that this characterization has a strong commonsense
reading: believing in ϕ is knowing that ϕ plausibly holds in
all plausibly imaginable situations.
4.3 Properties of the Update
The first notable property of plausibility update is that it
influences only formulae in which plausibility plays a role,
i.e. ones in which belief or plausibility modalities occur.
Proposition 16. Let ϕ be a CTLKP formula that does
not include operators Pl a and Ba, and γ be a CTLKP path
formula. Then, we have |≡ ϕ ↔ (set-pla γ)ϕ.
What can be said about the result of an update? At first
sight, formula (set-pla γ)Pl aAγ seems a natural
characterization; however, it is not valid. This is because, by leaving
the other (implausible) paths out of scope, we may leave out
of |γ| some paths that were needed to satisfy γ (see the
example in Section 4.2). We propose two alternative ways out:
the first one restricts the language of the update similarly
to [21]; the other refers to physical possibilities, in a way
analogous to [13].
Proposition 17. The CTLKP formula (set-pla γ)Pl aAγ
is not valid. However, we have the following validities:
(i) |≡ (set-pla γu)Pl aAγu, where γu is a universal CTLK
path formula from Definition 1.
(ii) If ϕ, ϕ1, ϕ2 are arbitrary CTLK formulae, then:
|≡ (set-pla
fϕ)Pl aA f(Ph ϕ),
|≡ (set-pla 2 ϕ)Pl aA2 (Ph ϕ), and
|≡ (set-pla ϕ1U ϕ2)Pl aA(Ph ϕ1)U (Ph ϕ2).
5. VERIFICATION OF PLAUSIBILITY,
TIME AND BELIEFS
In this section we report preliminary results on model
checking CTLKP formulae. Clearly, verifying CTLKP
properties directly against models with plausibility does not
make much sense, since these models are inherently infinite;
what we need is a finite representation of plausibility sets.
One such representation has been discussed in Section 3.3:
plausibility sets can be defined by path formulae and the
update operator (set-pla γ).
We follow this idea here, studying the complexity of model
checking CTLKP formulae against CTLK models (which
can be seen as a compact representation of CTLKP
models in which all the paths are assumed plausible), with the
underlying idea that plausibility sets, when needed, must be
defined explicitly in the object language. Below we sketch
an algorithm that model-checks CTLKP formulae in time
linear wrt the size of the model and the length of the
formula. This means that we have extended CTLK to a more
expressive language with no computational price to pay.
First of all, we get rid of the belief operators (due to Theorem 15), replacing every occurrence of Baϕ with KaPl a(E f⊤ → ϕ). Now, let γ⃗ = γ1, ..., γk be a vector of vanilla path formulae (one per agent), with the initial vector γ⃗0 = ⊤, ..., ⊤, and γ⃗[γ'/a] denoting vector γ⃗ in which γ⃗[a] is replaced with γ'. Additionally, we define γ⃗[0] = ⊤. We translate the resulting CTLKP formulae to ones without plausibility via function tr(ϕ) = tr_{γ⃗0,0}(ϕ), defined as follows:
tr_{γ⃗,i}(p) = p,
tr_{γ⃗,i}(ϕ1 ∧ ϕ2) = tr_{γ⃗,i}(ϕ1) ∧ tr_{γ⃗,i}(ϕ2),
tr_{γ⃗,i}(¬ϕ) = ¬tr_{γ⃗,i}(ϕ),
tr_{γ⃗,i}(Kaϕ) = Ka tr_{γ⃗,0}(ϕ),
tr_{γ⃗,i}(Pl aϕ) = tr_{γ⃗,a}(ϕ),
tr_{γ⃗,i}((set-pla γ')ϕ) = tr_{γ⃗[γ'/a],i}(ϕ),
tr_{γ⃗,i}(Ph ϕ) = tr_{γ⃗,0}(ϕ),
tr_{γ⃗,i}( fϕ) = f tr_{γ⃗,i}(ϕ),
tr_{γ⃗,i}(2 ϕ) = 2 tr_{γ⃗,i}(ϕ),
tr_{γ⃗,i}(ϕ1U ϕ2) = tr_{γ⃗,i}(ϕ1) U tr_{γ⃗,i}(ϕ2),
tr_{γ⃗,i}(Eγ') = E(γ⃗[i] ∧ tr_{γ⃗,i}(γ')).
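The translation can be implemented directly by recursion on the formula structure. The sketch below is our own Python AST encoding (it assumes belief operators have already been eliminated via Theorem 15): operators are encoded as tagged tuples, ('X', ·), ('G', ·) and ('U', ·, ·) stand for the f, 2 and U operators, and ('true',) plays the role of the trivial path formula ⊤.

```python
# Illustrative sketch of tr: formulae as nested tuples.
TRUE = ('true',)

def tr(phi, gamma=None, i=0):
    """Translate phi under plausibility vector `gamma` (agent -> path formula)
    and current index i (0 = physical, a = agent a's plausible paths)."""
    gamma = dict(gamma or {})
    current = TRUE if i == 0 else gamma.get(i, TRUE)
    op = phi[0]
    if op == 'p':                                  # atomic proposition
        return phi
    if op == 'not':
        return ('not', tr(phi[1], gamma, i))
    if op == 'and':
        return ('and', tr(phi[1], gamma, i), tr(phi[2], gamma, i))
    if op == 'K':                                  # tr(K_a phi) = K_a tr_{gamma,0}(phi)
        return ('K', phi[1], tr(phi[2], gamma, 0))
    if op == 'Pl':                                 # tr(Pl_a phi) = tr_{gamma,a}(phi)
        return tr(phi[2], gamma, phi[1])
    if op == 'Ph':                                 # tr(Ph phi) = tr_{gamma,0}(phi)
        return tr(phi[1], gamma, 0)
    if op == 'set-pl':                             # tr((set-pl_a g)phi) = tr_{gamma[g/a],i}(phi)
        _, a, g, body = phi
        gamma[a] = g
        return tr(body, gamma, i)
    if op in ('X', 'G'):                           # path operators: translate the argument
        return (op, tr(phi[1], gamma, i))
    if op == 'U':
        return ('U', tr(phi[1], gamma, i), tr(phi[2], gamma, i))
    if op == 'E':                                  # tr(E gamma') = E(gamma[i] and tr(gamma'))
        return ('E', ('and', current, tr(phi[1], gamma, i)))
    raise ValueError(op)

# (set-pl_1 G q) Pl_1 E G p becomes E(G q ∧ G p):
print(tr(('set-pl', 1, ('G', ('p', 'q')), ('Pl', 1, ('E', ('G', ('p', 'p')))))))
```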
Note that the resulting sentences belong to the logic CTLK+, that is, CTL+ (where each path quantifier can be followed by a Boolean combination of vanilla path formulae; for the semantics of CTL+ and a discussion of its model checking complexity, cf. [17]) with epistemic modalities. The following proposition
justifies the translation.
Proposition 18. For any CTLKP formula ϕ without
Ba, we have that M, q |=CTLKP ϕ iff M, q |=CTLK+ tr(ϕ).
In general, model checking CTL+ (and also CTLK+) is Δ^p_2-complete. However, in our case, the Boolean combinations of path subformulae are always conjunctions of at most two non-negated elements, which allows us to propose
the following model checking algorithm. First, subformulae
are evaluated recursively: for every subformula ψ of ϕ, the
set of states in M that satisfy ψ is computed and labeled
with a new proposition pψ. Now, it is enough to define
checking M, q |= ϕ for ϕ in which all (state) subformulae
are propositions, with the following cases:
Case M, q |= E(2 p ∧ γ): If M, q ⊭ p, then return no. Otherwise, remove from M all the states that do not satisfy p (yielding a sparser model M'), and check the CTL formula Eγ in M', q with any CTL model-checker.
Case M, q |= E( fp ∧ γ): Create M' by adding a copy q' of state q in which only the transitions to states satisfying p are kept (i.e., M', q' |= r iff M, q |= r; and q'Rq'' iff qRq'' and M, q'' |= p). Then, check Eγ in M', q'.
Case M, q |= E(p1U p2 ∧ p3U p4): Note that this is
equivalent to checking E(p1 ∧ p3)U (p2 ∧ Ep3U p4) ∨ E(p1 ∧
p3)U (p4 ∧ Ep1U p2), which is a CTL formula.
Other cases: The above cases cover all possible formulas
that begin with a path quantifier. For other cases,
standard CTLK model checking can be used.
Theorem 19. Model checking CTLKP against CTLK
models is PTIME-complete, and can be done in time O(ml),
where m is the number of transitions in the model, and l is
the length of the formula to be checked. That is, the
complexity is no worse than for CTLK itself.
6. CONCLUSIONS
In this paper a notion of plausible behavior is considered,
with the underlying idea that implausible options should be
usually ignored in practical reasoning about possible future
courses of action. We add the new notion of plausibility to
the logic of CTLK [19], and obtain a language which
enables reasoning about what can (or must) plausibly happen.
As a technical device to define the semantics of the resulting
logic, we use a non-standard satisfaction relation |=P that
allows to propagate the current set of plausible paths into
subformulae. Furthermore, we propose a non-standard
notion of beliefs, defined in terms of indistinguishability and
plausibility. We also propose how plausibility assumptions
can be specified in the object language via a plausibility
update operator (in a way similar to [13]).
We use this new framework to investigate some important
properties of plausibility, knowledge, beliefs, and updates.
In particular, we show that knowledge is an S5 modality,
and that beliefs satisfy axioms K45 in general, and KD45
for the class of plausibly serial models. We also prove that
believing in ϕ is knowing that ϕ plausibly holds in all
plausibly possible situations. That is, the relationship between
knowledge and beliefs is very natural and reflects the
initial intuition precisely. Moreover, the model checking
results from Section 5 show that verification for CTLKP is
no more complex than for CTL and CTLK.
We would like to stress that we do not see this contribution
as a mere technical exercise in formal logic. Human agents
use a similar concept of plausibility and practical beliefs
in their everyday reasoning in order to reduce the search
space and make the reasoning feasible. As a consequence, we
suggest that the framework we propose may prove suitable
for modeling, design, and analysis of resource-bounded agents
in general.
We would like to thank Juergen Dix for fruitful
discussions, useful comments and improvements.
7. REFERENCES
[1] R. Alur, T. A. Henzinger, and O. Kupferman.
Alternating-time Temporal Logic. Journal of the
ACM, 49:672-713, 2002.
[2] N. Bulling and W. Jamroga. Agents, beliefs and
plausible behavior in a temporal setting. Technical
Report IfI-06-05, Clausthal Univ. of Technology, 2006.
[3] E. A. Emerson. Temporal and modal logic. In J. van
Leeuwen, editor, Handbook of Theoretical Computer
Science, volume B, pages 995-1072. Elsevier, 1990.
[4] E.A. Emerson and J.Y. Halpern. sometimes and
not never revisited: On branching versus linear time
temporal logic. Journal of the ACM, 33(1):151-178,
1986.
[5] R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi.
Reasoning about Knowledge. MIT Press: Cambridge,
MA, 1995.
[6] R. Fagin and J.Y. Halpern. Reasoning about
knowledge and probability. Journal of ACM,
41(2):340-367, 1994.
[7] N. Friedman and J.Y. Halpern. A knowledge-based
framework for belief change, Part I: Foundations. In
Proceedings of TARK, pages 44-64, 1994.
[8] N. Friedman and J.Y. Halpern. A knowledge-based
framework for belief change, Part II: Revision and
update. In Proceedings of KR'94, 1994.
[9] J.Y. Halpern. Reasoning about knowledge: a survey.
In Handbook of Logic in Artificial Intelligence and
Logic Programming. Vol. 4: Epistemic and Temporal
Reasoning, pages 1-34. Oxford University Press,
Oxford, 1995.
[10] J.Y. Halpern and R. Fagin. Modelling knowledge and
action in distributed systems. Distributed Computing,
3(4):159-177, 1989.
[11] W. Jamroga and N. Bulling. A general framework for
reasoning about rational agents. In Proceedings of
AAMAS"07, 2007. Short paper.
[12] W. Jamroga and W. van der Hoek. Agents that know
how to play. Fundamenta Informaticae,
63(2-3):185-219, 2004.
[13] W. Jamroga, W. van der Hoek, and M. Wooldridge.
Intentions and strategies in game-like scenarios. In
Progress in Artificial Intelligence: Proceedings of
EPIA 2005, volume 3808 of LNAI, pages 512-523.
Springer Verlag, 2005.
[14] W. Jamroga and Thomas ˚Agotnes. Constructive
knowledge: What agents can achieve under incomplete
information. Technical Report IfI-05-10, Clausthal
University of Technology, 2005.
[15] B.P. Kooi. Probabilistic dynamic epistemic logic.
Journal of Logic, Language and Information,
12(4):381-408, 2003.
[16] P. Lamarre and Y. Shoham. Knowledge, certainty,
belief, and conditionalisation (abbreviated version). In
Proceedings of KR'94, pages 415-424, 1994.
[17] F. Laroussinie, N. Markey, and Ph. Schnoebelen.
Model checking CTL+ and FCTL is hard. In
Proceedings of FoSSaCS'01, volume 2030 of LNCS,
pages 318-331. Springer, 2001.
[18] Y. Moses and Y. Shoham. Belief as defeasible
knowledge. Artificial Intelligence, 64(2):299-321, 1993.
[19] W. Penczek and A. Lomuscio. Verifying epistemic
properties of multi-agent systems via bounded model
checking. In Proceedings of AAMAS'03, pages
209-216, New York, NY, USA, 2003. ACM Press.
[20] K. Su, A. Sattar, G. Governatori, and Q. Chen. A
computationally grounded logic of knowledge, belief
and certainty. In Proceedings of AAMAS'05, pages
149-156. ACM Press, 2005.
[21] W. van der Hoek, M. Roberts, and M. Wooldridge.
Social laws in alternating time: Effectiveness,
feasibility and synthesis. Synthese, 2005.
train_I-51 | Learning and Joint Deliberation through Argumentation in Multi-Agent Systems | In this paper we will present an argumentation framework for learning agents (AMAL) designed for two purposes: (1) for joint deliberation, and (2) for learning from communication. The AMAL framework is completely based on learning from examples: the argument preference relation, the argument generation policy, and the counterargument generation policy are case-based techniques. For joint deliberation, learning agents share their experience by forming a committee to decide upon some joint decision. We experimentally show that the argumentation among committees of agents improves both the individual and joint performance. For learning from communication, an agent engages in arguing with other agents in order to contrast its individual hypotheses and receive counterexamples; the argumentation process improves their learning scope and individual performance. | 1. INTRODUCTION
Argumentation frameworks for multi-agent systems can be used
for different purposes like joint deliberation, persuasion,
negotiation, and conflict resolution. In this paper we will present an
argumentation framework for learning agents, and show that it can be
used for two purposes: (1) joint deliberation, and (2) learning from
communication.
Argumentation-based joint deliberation involves discussion over
the outcome of a particular situation or the appropriate course of
action for a particular situation. Learning agents are capable of
learning from experience, in the sense that past examples (situations and
their outcomes) are used to predict the outcome for the situation
at hand. However, since individual agents' experience may be limited,
limited, individual knowledge and prediction accuracy is also limited.
Thus, learning agents that are capable of arguing their individual
predictions with other agents may reach better prediction accuracy
after such an argumentation process.
Most existing argumentation frameworks for multi-agent
systems are based on deductive logic or some other deductive logic
formalism specifically designed to support argumentation, such as
default logic [3]). Usually, an argument is seen as a logical
statement, while a counterargument is an argument offered in opposition
to another argument [4, 13]; agents use a preference relation to
resolve conflicting arguments. However, logic-based argumentation
frameworks assume agents with preloaded knowledge and
preference relation. In this paper, we focus on an Argumentation-based
Multi-Agent Learning (AMAL) framework where both knowledge
and preference relation are learned from experience. Thus, we
consider a scenario with agents that (1) work in the same domain using
a shared ontology, (2) are capable of learning from examples, and
(3) communicate using an argumentative framework.
Having learning capabilities allows agents to effectively use a
specific form of counterargument, namely the use of
counterexamples. Counterexamples offer the possibility of agents learning
during the argumentation process. Moreover, learning agents allow
techniques that use learnt experience to generate adequate
arguments and counterarguments. Specifically, we will need to address
two issues: (1) how to define a technique to generate arguments
and counterarguments from examples, and (2) how to define a
preference relation over two conflicting arguments that have been
induced from examples.
This paper presents a case-based approach to address both
issues. The agents use case-based reasoning (CBR) [1] to learn from
past cases (where a case is a situation and its outcome) in order
to predict the outcome of a new situation. We propose an
argumentation protocol inside the AMAL framework that supports agents
in reaching a joint prediction over a specific situation or problem
- moreover, the reasoning needed to support the argumentation
process will also be based on cases. In particular, we present two
case-based measures, one for generating the arguments and
counterarguments adequate to a particular situation and another for
determining the preference relation among arguments. Finally, we
evaluate (1) if argumentation between learning agents can produce a
joint prediction that improves over individual learning performance
and (2) if learning from the counterexamples conveyed during the
argumentation process increases individual performance, using precisely those cases that were exchanged while arguing.
The paper is structured as follows. Section 2 discusses the
relation among argumentation, collaboration and learning. Then
Section 3 introduces our multi-agent CBR (MAC) framework and the
notion of justified prediction. After that, Section 4 formally
defines our argumentation framework. Sections 5 and 6 present our
case-based preference relation and argument generation policies
respectively. Later, Section 7 presents the argumentation protocol in
our AMAL framework. After that, Section 8 presents an
exemplification of the argumentation framework. Finally, Section 9 presents
an empirical evaluation of our two main hypotheses. The paper
closes with related work and conclusions sections.
2. ARGUMENTATION,COLLABORATION
AND LEARNING
Both learning and collaboration are ways in which an agent can
improve individual performance. In fact, there is a clear parallelism
between learning and collaboration in multi-agent systems, since
both are ways in which agents can deal with their shortcomings.
Let us list the main motivations that an agent can have
to learn or to collaborate.
• Motivations to learn:
- Increase quality of prediction,
- Increase efficiency,
- Increase the range of solvable problems.
• Motivations to collaborate:
- Increase quality of prediction,
- Increase efficiency,
- Increase the range of solvable problems,
- Increase the range of accessible resources.
Looking at the above lists of motivations, we can easily see that learning and collaboration are closely related in multi-agent systems.
In fact, with the exception of the last item in the motivations to
collaborate list, they are two extremes of a continuum of strategies
to improve performance. An agent may choose to increase
performance by learning, by collaborating, or by finding an intermediate
point that combines learning and collaboration in order to improve
performance.
In this paper we will propose AMAL, an argumentation
framework for learning agents, and will also show how AMAL can be
used both for learning from communication and for solving
problems in a collaborative way:
• Agents can solve problems in a collaborative way via
engaging an argumentation process about the prediction for the
situation at hand. Using this collaboration, the prediction
can be done in a more informed way, since the information
known by several agents has been taken into account.
• Agents can also learn from communication with other agents
by engaging an argumentation process. Agents that engage
in such argumentation processes can learn from the
arguments and counterexamples received from other agents, and
use this information for predicting the outcomes of future
situations.
In the rest of this paper we will propose an argumentation
framework and show how it can be used both for learning and for solving
problems in a collaborative way.
3. MULTI-AGENT CBR SYSTEMS
A Multi-Agent Case Based Reasoning System (MAC) M =
{(A1, C1), ..., (An, Cn)} is a multi-agent system composed of A =
{A1, ..., An}, a set of CBR agents, where each agent Ai ∈ A
possesses an individual case base Ci. Each individual agent Ai
in a MAC is completely autonomous and each agent Ai has
access only to its individual and private case base Ci. A case base
Ci = {c1, ..., cm} is a collection of cases. Agents in a MAC
system are able to individually solve problems, but they can also
collaborate with other agents to solve problems.
In this framework, we will restrict ourselves to analytical tasks,
i.e. tasks like classification, where the solution of a problem is
achieved by selecting a solution class from an enumerated set of
solution classes. In the following we will note the set of all the
solution classes by S = {S1, ..., SK }. Therefore, a case c = ⟨P, S⟩ is
a tuple containing a case description P and a solution class S ∈ S.
In the following, we will use the terms problem and case
description indistinctly. Moreover, we will use the dot notation to refer to
elements inside a tuple; e.g., to refer to the solution class of a case
c, we will write c.S.
Therefore, we say that a group of agents performs joint deliberation
when they collaborate to find a joint solution by means of an
argumentation process. However, in order to do so, an agent has to
be able to justify its prediction to the other agents (i.e. generate an
argument for its predicted solution that can be examined and
critiqued by the other agents). The next section addresses this issue.
3.1 Justified Predictions
Both expert systems and CBR systems may have an explanation
component [14] in charge of justifying why the system has
provided a specific answer to the user. The line of reasoning of the
system can then be examined by a human expert, thus increasing
the reliability of the system.
Most of the existing work on explanation generation focuses on
generating explanations to be provided to the user. However, in our
approach we use explanations (or justifications) as a tool for
improving communication and coordination among agents. We are
interested in justifications since they can be used as arguments.
For that purpose, we will benefit from the ability of some machine
learning methods to provide justifications.
A justification built by a CBR method after determining that the
solution of a particular problem P was Sk is a description that
contains the relevant information from the problem P that the CBR
method has considered to predict Sk as the solution of P. In
particular, CBR methods work by retrieving similar cases to the problem
at hand, and then reusing their solutions for the current problem,
expecting that since the problem and the cases are similar, the
solutions will also be similar. Thus, if a CBR method has retrieved a set
of cases C1, ..., Cn to solve a particular problem P, the justification
built will contain the relevant information from the problem P that
made the CBR system retrieve that particular set of cases, i.e. it
will contain the relevant information that P and C1, ..., Cn have in
common.
For example, Figure 1 shows a justification built by a CBR
system for a toy problem (in the following sections we will show
justifications for real problems). In the figure, a problem has two
attributes (Traffic_light, and Cars_passing), the retrieval mechanism
of the CBR system notices that by considering only the attribute
Traffic_light, it can retrieve two cases that predict the same
solution: wait. Thus, since only this attribute has been used, it is the
only one appearing in the justification. The values of the rest of
attributes are irrelevant, since whatever their value the solution class
would have been the same.
[Figure 1 (schematic): the problem (Traffic_light: red, Cars_passing: no) is matched against four cases - Case 1: red/no → wait, Case 2: green/no → cross, Case 3: red/yes → wait, Case 4: green/yes → wait. Cases 1 and 3 are retrieved, yielding the justified prediction Solution: wait with Justification: Traffic_light = red.]
Figure 1: An example of justification generation in a CBR system. Notice that, since the only relevant feature to decide is Traffic_light (the only one used to retrieve cases), it is the only one appearing in the justification.
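The following toy Python sketch illustrates the idea behind Figure 1; it is a deliberate simplification of justification generation and not the actual LID method [2]: the justification is simply the set of attribute-value pairs of the problem that all retrieved cases share.

```python
# Toy sketch: a justification as the attribute-value pairs shared with all retrieved cases.

def build_justification(problem, retrieved_cases):
    """problem: dict of attributes; retrieved_cases: list of (attributes, solution)."""
    shared = {a: v for a, v in problem.items()
              if all(attrs.get(a) == v for attrs, _ in retrieved_cases)}
    solutions = {sol for _, sol in retrieved_cases}
    assert len(solutions) == 1, "retrieved cases should agree on the solution"
    return shared, solutions.pop()

problem = {"Traffic_light": "red", "Cars_passing": "no"}
retrieved = [({"Traffic_light": "red", "Cars_passing": "no"}, "wait"),
             ({"Traffic_light": "red", "Cars_passing": "yes"}, "wait")]
print(build_justification(problem, retrieved))
# ({'Traffic_light': 'red'}, 'wait')  -- only the attribute shared by all retrieved cases
```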
In general, the meaning of a justification is that all (or most of)
the cases in the case base of an agent that satisfy the justification
(i.e. all the cases that are subsumed by the justification) belong to
the predicted solution class. In the rest of the paper, we will use
to denote the subsumption relation. In our work, we use LID [2], a
CBR method capable of building symbolic justifications such as the
one exemplified in Figure 1. When an agent provides a justification
for a prediction, the agent generates a justified prediction:
DEFINITION 3.1. A Justified Prediction is a tuple J = ⟨A, P, S, D⟩ where agent A considers S the correct solution for problem P, and that prediction is justified by a symbolic description D such that J.D ⊑ J.P.
Justifications can have many uses for CBR systems [8, 9]. In this
paper, we are going to use justifications as arguments, in order to
allow learning agents to engage in argumentation processes.
4. ARGUMENTS AND
COUNTERARGUMENTS
For our purposes an argument α generated by an agent A is
composed of a statement S and some evidence D supporting S as
correct. In the remainder of this section we will see how this
general definition of argument can be instantiated into specific kinds of
arguments that the agents can generate. In the context of MAC
systems, agents argue about predictions for new problems and can
provide two kinds of information: a) specific cases ⟨P, S⟩, and b) justified predictions: ⟨A, P, S, D⟩. Using this information, we can
define three types of arguments: justified predictions,
counterarguments, and counterexamples.
A justified prediction α is generated by an agent Ai to argue that
Ai believes that the correct solution for a given problem P is α.S,
and the evidence provided is the justification α.D. In the
example depicted in Figure 1, an agent Ai may generate the argument
α = ⟨Ai, P, Wait, (Traffic_light = red)⟩, meaning that the agent Ai
believes that the correct solution for P is Wait because the attribute
Traffic_light equals red.
A counterargument β is an argument offered in opposition to
another argument α. In our framework, a counterargument
consists of a justified prediction ⟨Aj, P, S', D'⟩ generated by an agent Aj with the intention to rebut an argument α generated by another agent Ai, that endorses a solution class S' different from that of α.S for the problem at hand and justifies this with a justification D'. In the example in Figure 1, if an agent generates the argument α = ⟨Ai, P, Walk, (Cars_passing = no)⟩, an agent that thinks that the correct solution is Wait might answer with the counterargument β = ⟨Aj, P, Wait, (Cars_passing = no ∧ Traffic_light = red)⟩,
meaning that, although there are no cars passing, the traffic light is red,
and the street cannot be crossed.
A counterexample c is a case that contradicts an argument α.
Thus a counterexample is also a counterargument, one that states
that a specific argument α is not always true, and the evidence
provided is the case c. Specifically, for a case c to be a
counterexample of an argument α, the following conditions have to be met:
α.D ⊑ c and α.S ≠ c.S, i.e. the case must satisfy the justification α.D and the solution of c must be different from the one predicted by α.
By exchanging arguments and counterarguments (including
counterexamples), agents can argue about the correct solution of a given
problem, i.e. they can engage in a joint deliberation process.
However, in order to do so, they need a specific interaction protocol, a
preference relation between contradicting arguments, and a
decision policy to generate counterarguments (including
counterexamples). In the following sections we will present these elements.
5. PREFERENCE RELATION
A specific argument provided by an agent might not be consistent with the information known to other agents (or even with some of the information known by the agent that generated the justification, due to noise in the training data). For that reason, we are going to
define a preference relation over contradicting justified predictions
based on cases. Basically, we will define a confidence measure for
each justified prediction (that takes into account the cases owned by
each agent), and the justified prediction with the highest confidence
will be the preferred one.
The idea behind case-based confidence is to count how many of
the cases in an individual case base endorse a justified prediction,
and how many of them are counterexamples of it. The more the
endorsing cases, the higher the confidence; and the more the
counterexamples, the lower the confidence. Specifically, to assess the
confidence of a justified prediction α, an agent obtains the set of
cases in its individual case base that are subsumed by α.D. With
them, an agent Ai obtains the Y (aye) and N (nay) values:
• Y^Ai_α = |{c ∈ Ci | α.D ⊑ c.P ∧ α.S = c.S}| is the number of cases in the agent's case base subsumed by the justification α.D that belong to the solution class α.S,
• N^Ai_α = |{c ∈ Ci | α.D ⊑ c.P ∧ α.S ≠ c.S}| is the number of cases in the agent's case base subsumed by the justification α.D that do not belong to that solution class.
Figure 2: Confidence of arguments is evaluated by contrasting them against the case bases of the agents.
An agent estimates the confidence of an argument as:
C_Ai(α) = Y^Ai_α / (1 + Y^Ai_α + N^Ai_α)
i.e. the confidence on a justified prediction is the number of
endorsing cases divided by the number of endorsing cases plus
counterexamples. Notice that we add 1 to the denominator; this is to avoid giving excessively high confidences to justified predictions whose confidence has been computed using a small number of cases. This correction follows the same idea as the Laplace correction used to estimate probabilities. Figure 2 illustrates the individual evaluation of the confidence of an argument: in particular, three endorsing cases and one counterexample are found in the case base of agent Ai, giving an estimated confidence of 3/(1 + 3 + 1) = 0.6.
Moreover, we can also define the joint confidence of an argument
α as the confidence computed using the cases present in the case
bases of all the agents in the group:
C(α) = Σ_i Y^Ai_α / (1 + Σ_i (Y^Ai_α + N^Ai_α))
Notice that, to collaboratively compute the joint confidence, the
agents only have to make public the aye and nay values locally
computed for a given argument.
In our framework, agents use this joint confidence as the
preference relation: a justified prediction α is preferred over another one
β if C(α) ≥ C(β).
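To make the confidence computation concrete, the following is a minimal Python sketch (not the authors' implementation) of how an agent could compute the aye/nay counts and the individual and joint confidence of a justified prediction. The case and prediction record fields, and the subsumes helper, are hypothetical; any subsumption test over the chosen case representation could be plugged in.

from dataclasses import dataclass

@dataclass
class Case:
    problem: dict       # attribute-value description of the problem
    solution: str       # solution class

@dataclass
class JustifiedPrediction:
    agent: str
    problem: dict
    solution: str       # predicted solution class (alpha.S)
    justification: dict  # symbolic description D (alpha.D)

def subsumes(justification, problem):
    # Hypothetical subsumption test: D subsumes P if every attribute-value
    # pair required by D is present in P.
    return all(problem.get(attr) == val for attr, val in justification.items())

def aye_nay(case_base, alpha):
    # Y and N counts: cases subsumed by alpha.D that agree / disagree with alpha.S.
    covered = [c for c in case_base if subsumes(alpha.justification, c.problem)]
    y = sum(1 for c in covered if c.solution == alpha.solution)
    n = len(covered) - y
    return y, n

def individual_confidence(case_base, alpha):
    y, n = aye_nay(case_base, alpha)
    return y / (1 + y + n)

def joint_confidence(case_bases, alpha):
    # Each agent only has to reveal its local (aye, nay) pair.
    counts = [aye_nay(cb, alpha) for cb in case_bases]
    y = sum(c[0] for c in counts)
    n = sum(c[1] for c in counts)
    return y / (1 + y + n)

def preferred(alpha, beta, case_bases):
    # alpha is preferred over beta iff C(alpha) >= C(beta).
    return joint_confidence(case_bases, alpha) >= joint_confidence(case_bases, beta)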
6. GENERATION OF ARGUMENTS
In our framework, arguments are generated by the agents from
cases, using learning methods. Any learning method able to
provide a justified prediction can be used to generate arguments. For
instance, decision trees and LID [2] are suitable learning methods.
Specifically, in the experiments reported in this paper agents use
LID. Thus, when an agent wants to generate an argument
endorsing that a specific solution class is the correct solution for a problem
P, it generates a justified prediction as explained in Section 3.1.
For instance, Figure 3 shows a real justification generated by
LID after solving a problem P in the domain of marine sponges
identification. In particular, Figure 3 shows how, when an agent receives a new problem to solve (in this case, a new sponge whose order has to be determined), the agent uses LID to generate an argument (consisting of a justified prediction) using the cases in its case base. The justification shown in Figure 3 can be interpreted as saying that the predicted solution is hadromerida because the smooth form of the megascleres of the spiculate skeleton of the sponge is of type tylostyle, the spiculate skeleton of the sponge does not have uniform length, and there are no gemmules in the external features of the sponge. Thus, the argument generated will be α = ⟨A1, P, hadromerida, D1⟩.
6.1 Generation of Counterarguments
As previously stated, agents may try to rebut arguments by
generating counterarguments or by finding counterexamples. Let us
explain how they can be generated.
An agent Ai wants to generate a counterargument β to rebut an
argument α when α is in contradiction with the local case base of
Ai. Moreover, while generating such counterargument β, Ai
expects that β is preferred over α. For that purpose, we will present
a specific policy to generate counterarguments based on the
specificity criterion [10].
The specificity criterion is widely used in deductive frameworks
for argumentation, and states that between two conflicting
arguments, the most specific should be preferred since it is, in
principle, more informed. Thus, counterarguments generated based on
the specificity criterion are expected to be preferable (since they are
more informed) to the arguments they try to rebut. However, there
is no guarantee that such counterarguments will always win, since,
as we have stated in Section 5, agents in our framework use a
preference relation based on joint confidence. Moreover, one may think that it would be better for the agents to generate counterarguments based on the joint confidence preference relation itself; however, it is not obvious how to generate counterarguments based on joint confidence in an efficient way, since collaboration is required in order to evaluate joint confidence: the agent generating the counterargument would have to communicate constantly with the other agents at each step of the induction algorithm used to generate counterarguments (this is presently one of our future research lines).
Thus, in our framework, when an agent wants to generate a
counterargument β to an argument α, β has to be more specific than α
(i.e. α.D ⊏ β.D).
The generation of counterarguments using the specificity
criterion imposes some restrictions over the learning method, although
LID or ID3 can be easily adapted for this task. For instance, LID is
an algorithm that generates a description starting from scratch and
heuristically adding features to that term. Thus, at every step, the
description is made more specific than in the previous step, and the
number of cases that are subsumed by that description is reduced.
When the description covers only (or almost only) cases of a
single solution class LID terminates and predicts that solution class.
To generate a counterargument to an argument α LID just has to
use as starting point the description α.D instead of starting from
scratch. In this way, the justification provided by LID will always
be subsumed by α.D, and thus the resulting counterargument will
be more specific than α. However, notice that LID may sometimes not be able to generate counterarguments, since LID may not be able to specialize the description α.D any further, or because the agent Ai has no case in Ci that is subsumed by α.D. Figure 4 shows how an agent A2 that disagreed with the argument shown in Figure 3 generates a counterargument using LID. Moreover, Figure 4 shows the generation of a counterargument β^1_2 for the argument α^0_1 (in Figure 3) that is a specialization of α^0_1.
[Figure 3 content: LID, applied to a new sponge P using the case base of A1, produces the justified prediction Solution: hadromerida with Justification: D1 (external features: gemmules: no; spiculate skeleton: megascleres with smooth form tylostyle and no uniform length).]
Figure 3: Example of a real justification generated by LID in the marine sponges data set.
Specifically, in our experiments, when an agent Ai wants to rebut an argument α, it uses the following policy (a code sketch follows the list):
1. Agent Ai uses LID to try to find a counterargument β more
specific than α; if found, β is sent to the other agent as a
counterargument of α.
2. If not found, then Ai searches for a counterexample c ∈ Ci
of α. If a case c is found, then c is sent to the other agent as
a counterexample of α.
3. If no counterexamples are found, then Ai cannot rebut the
argument α.
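The following is a minimal Python sketch of this rebuttal policy. It assumes some induction routine (here a hypothetical lid_specialize) that can try to specialize a given justification over the agent's cases, and the subsumes helper sketched earlier; it is only meant to illustrate the control flow, not the actual LID implementation.

def try_rebut(agent_case_base, alpha, lid_specialize, subsumes):
    # 1. Try to build a counterargument beta more specific than alpha,
    #    i.e. a justified prediction whose justification specializes alpha.D
    #    and whose solution differs from alpha.S.
    beta = lid_specialize(agent_case_base, start=alpha.justification)
    if beta is not None and beta.solution != alpha.solution:
        return ("counterargument", beta)

    # 2. Otherwise, look for a counterexample: a local case subsumed by
    #    alpha.D whose solution differs from alpha.S.
    for c in agent_case_base:
        if subsumes(alpha.justification, c.problem) and c.solution != alpha.solution:
            return ("counterexample", c)

    # 3. No way to rebut alpha.
    return ("none", None)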
7. ARGUMENTATION-BASED
MULTI-AGENT LEARNING
The interaction protocol of AMAL allows a group of agents A1,
..., An to deliberate about the correct solution of a problem P by
means of an argumentation process. If the argumentation process
arrives at a consensual solution, the joint deliberation ends;
otherwise a weighted vote is used to determine the joint solution.
Moreover, AMAL also allows the agents to learn from the
counterexamples received from other agents.
The AMAL protocol consists of a series of rounds. In the initial round, each agent states its individual prediction for P.
Then, at each round an agent can try to rebut the prediction made
by any of the other agents. The protocol uses a token passing
mechanism so that agents (one at a time) can send counterarguments or
counterexamples if they disagree with the prediction made by any
other agent. Specifically, each agent is allowed to send one counterargument or counterexample each time it gets the token (notice that this restriction is just to simplify the protocol, and that it does not restrict the number of counterarguments an agent can send, since they can be delayed for subsequent rounds). When an agent
receives a counterargument or counterexample, it informs the other
agents if it accepts the counterargument (and changes its
prediction) or not. Moreover, agents also have the opportunity to answer counterarguments when they receive the token, by trying to generate a counterargument to the counterargument.
When all the agents have had the token once, the token returns
to the first agent, and so on. If at any time in the protocol, all the
agents agree or during the last n rounds no agent has generated
any counterargument, the protocol ends. Moreover, if at the end of
the argumentation the agents have not reached an agreement, then
a voting mechanism that uses the confidence of each prediction as
weights is used to decide the final solution. (Thus, AMAL follows the same mechanism as human committees: first, each individual member of the committee exposes his arguments and discusses those of the other members (joint deliberation), and if no consensus is reached, a voting mechanism is required.)
At each iteration, agents can use the following performatives:
• assert(α): the justified prediction held during the next round
will be α. An agent can only hold a single prediction at each round; thus, if multiple asserts are sent, only the last one is considered as the currently held prediction.
• rebut(β, α): the agent has found a counterargument β to the
prediction α.
We will define H_t = ⟨α^t_1, ..., α^t_n⟩ as the predictions that each of the n agents holds at a round t. Moreover, we will also define contradict(α^t_i) = {α ∈ H_t | α.S ≠ α^t_i.S} as the set of contradicting arguments for an agent Ai at a round t, i.e. the set of arguments at round t that support a different solution class than α^t_i.
The protocol is initiated because one of the agents receives a
problem P to be solved. After that, the agent informs all the other
agents about the problem P to solve, and the protocol starts:
1. At round t = 0, each one of the agents individually solves P,
and builds a justified prediction using its own CBR method.
Then, each agent Ai sends the performative assert(α^0_i) to the other agents. Thus, the agents know H_0 = ⟨α^0_1, ..., α^0_n⟩. Once all the predictions have been sent, the token is given to the first agent A1.
2. At each round t (other than 0), the agents check whether their
arguments in H_t agree. If they do, the protocol moves to step 5. Moreover, if during the last n rounds no agent has sent any counterexample or counterargument, the protocol also moves to step 5. Otherwise, the agent Ai owning the token tries to generate a counterargument for each of the opposing arguments in contradict(α^t_i) ⊆ H_t (see Section 6.1). Then, the counterargument β^t_i against the prediction α^t_j with the lowest confidence C(α^t_j) is selected (since α^t_j is the prediction most likely to be successfully rebutted).
• If β^t_i is a counterargument, then Ai locally compares α^t_i with β^t_i by assessing their confidence against its individual case base Ci (see Section 5) (notice that Ai is comparing its previous argument with the counterargument that Ai itself has just generated and is about to send to Aj). If C_Ai(β^t_i) > C_Ai(α^t_i), then Ai considers that β^t_i is stronger than its previous argument, changes its argument to β^t_i by sending assert(β^t_i) to the rest of the agents (the intuition behind this is that, since a counterargument is also an argument, Ai checks whether the new counterargument is a better argument than the one it was previously holding) and sends rebut(β^t_i, α^t_j) to Aj. Otherwise (i.e. C_Ai(β^t_i) ≤ C_Ai(α^t_i)), Ai will send only rebut(β^t_i, α^t_j) to Aj. In either situation the protocol moves to step 3.
• If β^t_i is a counterexample c, then Ai sends rebut(c, α^t_j) to Aj. The protocol moves to step 4.
• If Ai cannot generate any counterargument or counterexample, the token is sent to the next agent, a new round t + 1 starts, and the protocol moves back to step 2.
[Figure 4 content: LID, starting from the justification of Figure 3 and the case base of A2, produces a counterargument with Solution: astrophorida and Justification: D2 (external features: gemmules: no; growing: grow massive; spiculate skeleton: megascleres with smooth form tylostyle and no uniform length).]
3. The agent Aj that has received the counterargument β^t_i locally compares it against its own argument α^t_j by locally assessing their confidence. If C_Aj(β^t_i) > C_Aj(α^t_j), then Aj will accept the counterargument as stronger than its own argument, and it will send assert(β^t_i) to the other agents. Otherwise (i.e. C_Aj(β^t_i) ≤ C_Aj(α^t_j)), Aj will not accept the counterargument, and will inform the other agents accordingly. Either situation starts a new round t + 1, Ai sends the token to the next agent, and the protocol moves back to step 2.
4. The agent Aj that has received the counterexample c retains it in its case base and generates a new argument α^{t+1}_j that takes c into account, and informs the rest of the agents by sending assert(α^{t+1}_j) to all of them. Then, Ai sends the token to the next agent, a new round t + 1 starts, and the protocol moves back to step 2.
5. The protocol ends yielding a joint prediction, as follows: if the arguments in H_t agree, then their prediction is the joint prediction; otherwise a voting mechanism is used to decide the joint prediction. The voting mechanism uses the joint confidence measure as the voting weights (a code sketch is given after the protocol description), as follows:
S = argmax_{Sk ∈ S} Σ_{αi ∈ H_t | αi.S = Sk} C(αi)
Moreover, in order to avoid infinite iterations, if an agent sends
twice the same argument or counterargument to the same agent, the
message is not considered.
8. EXEMPLIFICATION
Let us consider a system composed of three agents A1, A2 and
A3. One of the agents, A1, receives a problem P to solve, and decides to use AMAL to solve it. For that reason, it invites A2 and A3 to
take part in the argumentation process. They accept the invitation,
and the argumentation protocol starts.
Initially, each agent generates its individual prediction for P, and
broadcasts it to the other agents. Thus, all of them can compute
H_0 = ⟨α^0_1, α^0_2, α^0_3⟩. In particular, in this example:
• α^0_1 = ⟨A1, P, hadromerida, D1⟩
• α^0_2 = ⟨A2, P, astrophorida, D2⟩
• α^0_3 = ⟨A3, P, axinellida, D3⟩
A1 starts owning the token and tries to generate counterarguments for α^0_2 and α^0_3, but does not succeed; however, it has one counterexample c13 for α^0_3. Thus, A1 sends the message rebut(c13, α^0_3) to A3. A3 incorporates c13 into its case base and tries to solve the problem P again, now taking c13 into consideration. A3 comes up with the justified prediction α^1_3 = ⟨A3, P, hadromerida, D4⟩, and broadcasts it to the rest of the agents with the message assert(α^1_3). Thus, all of them know the new H_1 = ⟨α^0_1, α^0_2, α^1_3⟩.
Round 1 starts and A2 gets the token. A2 tries to generate counterarguments for α^0_1 and α^1_3, and only succeeds in generating a counterargument β^1_2 = ⟨A2, P, astrophorida, D5⟩ against α^1_3. The counterargument is sent to A3 with the message rebut(β^1_2, α^1_3). Agent A3 receives the counterargument and assesses its local confidence. The result is that the local confidence of the counterargument β^1_2 is lower than the local confidence of α^1_3. Therefore, A3 does not accept the counterargument, and thus H_2 = ⟨α^0_1, α^0_2, α^1_3⟩.
Round 2 starts and A3 gets the token. A3 generates a counterargument β^2_3 = ⟨A3, P, hadromerida, D6⟩ for α^0_2 and sends it to A2 with the message rebut(β^2_3, α^0_2). Agent A2 receives the counterargument and assesses its local confidence. The result is that the local confidence of the counterargument β^2_3 is higher than the local confidence of α^0_2. Therefore, A2 accepts the counterargument and informs the rest of the agents with the message assert(β^2_3). After that, H_3 = ⟨α^0_1, β^2_3, α^1_3⟩.
At Round 3, all the agents agree (all the justified predictions in H_3 predict hadromerida as the solution class), so the protocol ends, and A1 (the agent that received the problem) considers
hadromerida as the joint solution for the problem P.
9. EXPERIMENTAL EVALUATION
[Figure 5 content: two bar charts (SPONGE and SOYBEAN data sets) plotting classification accuracy (vertical axis) against the number of agents, from 2 to 5 (horizontal axis), for AMAL, Voting, and Individual.]
Figure 5: Individual and joint accuracy for 2 to 5 agents.
In this section we empirically evaluate the AMAL argumentation
framework. We have made experiments in two different data sets:
soybean (from the UCI machine learning repository) and sponge (a
relational data set). The soybean data set has 307 examples and 19
solution classes, while the sponge data set has 280 examples and 3
solution classes. In an experimental run, the data set is divided in 2
sets: the training set and the test set. The training set examples are
distributed among 5 different agents without replication, i.e. there
is no example shared by two agents. In the testing stage, problems
in the test set arrive randomly to one of the agents, and their goal is
to predict the correct solution.
The experiments are designed to test two hypotheses: (H1) that
argumentation is a useful framework for joint deliberation and can
improve over other typical methods such as voting; and (H2) that
learning from communication improves the individual performance
of a learning agent participating in an argumentation process.
Moreover, we also expect that the improvement achieved from
argumentation will increase as the number of agents participating in the
argumentation increases (since more information will be taken into
account).
Concerning H1 (argumentation is a useful framework for joint
deliberation), we ran 4 experiments, using 2, 3, 4, and 5 agents
respectively (in all experiments each agent has 20% of the training data, since the training set is always distributed among 5 agents).
Figure 5 shows the result of those experiments in the sponge and
soybean data sets. Classification accuracy is plotted in the
vertical axis, and in the horizontal axis the number of agents that took
part in the argumentation processes is shown. For each number of
agents, three bars are shown: individual, Voting, and AMAL. The
individual bar shows the average accuracy of individual agents
predictions; the voting bar shows the average accuracy of the joint
prediction achieved by voting but without any argumentation; and
finally the AMAL bar shows the average accuracy of the joint
prediction using argumentation. The results shown are the average of
5 10-fold cross validation runs.
Figure 5 shows that collaboration (voting and AMAL)
outperforms individual problem solving. Moreover, as we expected, the
accuracy improves as more agents collaborate, since more
information is taken into account. We can also see that AMAL always
outperforms standard voting, proving that joint decisions are based
on better information as provided by the argumentation process.
For instance, the joint accuracy for 2 agents in the sponge data
set is 87.57% for AMAL and 86.57% for Voting (while individual
accuracy is just 80.07%). Moreover, the improvement achieved by
AMAL over Voting is even larger in the soybean data set. The
reason is that the soybean data set is more difficult (in the sense that
agents need more data to produce good predictions). These
experimental results show that AMAL effectively exploits the opportunity
for improvement: the accuracy is higher only because more agents
have changed their opinion during argumentation (otherwise they
would achieve the same result as Voting).
Concerning H2 (learning from communication in argumentation
processes improves individual prediction), we ran the following
experiment: initially, we distributed 25% of the training set among the five agents; after that, the rest of the cases in the training set are sent to the agents one by one; when an agent receives a new training case, it has several options: it can discard it, retain it, or use it to engage an argumentation
process. Figure 6 shows the result of that experiment for the two
data sets. Figure 6 contains three plots, where NL (not learning)
shows accuracy of an agent with no learning at all; L (learning),
shows the evolution of the individual classification accuracy when
agents learn by retaining the training cases they individually
receive (notice that when all the training cases have been retained at
100%, the accuracy should be equal to that of Figure 5 for
individual agents); and finally LFC (learning from communication) shows
the evolution of the individual classification accuracy of learning
agents that also learn by retaining those counterexamples received
during argumentation (i.e. they learn both from training examples
and counterexamples).
Figure 6 shows that if an agent Ai learns also from
communication, Ai can significantly improve its individual performance with
just a small number of additional cases (those selected as relevant
counterexamples for Ai during argumentation). For instance, in
the soybean data set, individual agents have achieved an accuracy
of 70.62% when they also learn from communication versus an
accuracy of 59.93% when they only learn from their individual
experience. The number of cases learnt from communication depends
on the properties of the data set: in the sponges data set, agents
have retained only very few additional cases, and significantly
improved individual accuracy; namely, they retain 59.96 cases on average (compared to the 50.4 cases retained if they do not learn from communication). In the soybean data set more counterexamples are learnt to significantly improve individual accuracy; namely, they retain 87.16 cases on average (compared to 55.27 cases retained if
they do not learn from communication). Finally, the fact that both
data sets show a significant improvement points out the adaptive
nature of the argumentation-based approach to learning from
communication: the useful cases are selected as counterexamples (and
no more than those needed), and they have the intended effect.
10. RELATED WORK
Concerning CBR in a multi-agent setting, the first research was
on negotiated case retrieval [11] among groups of agents. Our
work on multi-agent case-based learning started in 1999 [6]; later
McGinty and Smyth [7] presented a multi-agent collaborative CBR
approach (CCBR) for planning. Finally, another interesting
approach is multi-case-base reasoning (MCBR) [5], that deals with
[Figure 6 content: two plots (SPONGE and SOYBEAN data sets) showing individual classification accuracy (vertical axis) against the percentage of training cases received, from 25% to 100% (horizontal axis), for LFC, L, and NL.]
Figure 6: Learning from communication resulting from argumentation in a system composed of 5 agents.
distributed systems where there are several case bases available for
the same task and addresses the problems of cross-case base
adaptation. The main difference is that our MAC approach is a way to
distribute the Reuse process of CBR (using a voting system) while
Retrieve is performed individually by each agent; the other
multiagent CBR approaches, however, focus on distributing the Retrieve
process.
Research on MAS argumentation focuses on several issues such as a) logics, protocols and languages that support argumentation, b) argument selection, and c) argument interpretation. Approaches for
logic and languages that support argumentation include defeasible
logic [4] and BDI models [13]. Although argument selection is a
key aspect of automated argumentation (see [12] and [13]), most
research has been focused on preference relations among arguments.
In our framework we have addressed both argument selection and
preference relations using a case-based approach.
11. CONCLUSIONS AND FUTURE WORK
In this paper we have presented an argumentation-based
framework for multi-agent learning. Specifically, we have presented
AMAL, a framework that allows a group of learning agents to
argue about the solution of a given problem and we have shown how
the learning capabilities can be used to generate arguments and
counterarguments. The experimental evaluation shows that the
increased amount of information provided to the agents by the
argumentation process increases their predictive accuracy, especially when an adequate number of agents take part in the argumentation.
The main contributions of this work are: a) an argumentation
framework for learning agents; b) a case-based preference relation
over arguments, based on computing an overall confidence
estimation of arguments; c) a case-based policy to generate
counterarguments and select counterexamples; and d) an argumentation-based
approach for learning from communication.
Finally, in the experiments presented here a learning agent would retain all counterexamples submitted by the other agents; however, this is a very simple case retention policy, and we would like to experiment with more informed policies, with the goal that individual learning agents could significantly improve using only a small set of cases proposed by other agents. In addition, our approach is focused on lazy learning, and future work aims at incorporating eager inductive learning inside the argumentative framework for learning from communication.
12. REFERENCES
[1] Agnar Aamodt and Enric Plaza. Case-based reasoning:
Foundational issues, methodological variations, and system
approaches. Artificial Intelligence Communications,
7(1):39-59, 1994.
[2] E. Armengol and E. Plaza. Lazy induction of descriptions for
relational case-based learning. In ECML"2001, pages 13-24,
2001.
[3] Gerhard Brewka. Dynamic argument systems: A formal
model of argumentation processes based on situation
calculus. Journal of Logic and Computation, 11(2):257-282,
2001.
[4] Carlos I. Chesñevar and Guillermo R. Simari. Formalizing
Defeasible Argumentation using Labelled Deductive
Systems. Journal of Computer Science & Technology,
1(4):18-33, 2000.
[5] D. Leake and R. Sooriamurthi. Automatically selecting
strategies for multi-case-base reasoning. In S. Craw and
A. Preece, editors, ECCBR"2002, pages 204-219, Berlin,
2002. Springer Verlag.
[6] Francisco J. Martín, Enric Plaza, and Josep-Lluis Arcos.
Knowledge and experience reuse through communications
among competent (peer) agents. International Journal of
Software Engineering and Knowledge Engineering,
9(3):319-341, 1999.
[7] Lorraine McGinty and Barry Smyth. Collaborative
case-based reasoning: Applications in personalized route
planning. In I. Watson and Q. Yang, editors, ICCBR, number
2080 in LNAI, pages 362-376. Springer-Verlag, 2001.
[8] Santi Ontañón and Enric Plaza. Justification-based
multiagent learning. In ICML"2003, pages 576-583. Morgan
Kaufmann, 2003.
[9] Enric Plaza, Eva Armengol, and Santiago Ontañón. The
explanatory power of symbolic similarity in case-based
reasoning. Artificial Intelligence Review, 24(2):145-161,
2005.
[10] David Poole. On the comparison of theories: Preferring the
most specific explanation. In IJCAI-85, pages 144-147,
1985.
[11] M V Nagendra Prassad, Victor R Lesser, and Susan Lander.
Retrieval and reasoning in distributed case bases. Technical
report, UMass Computer Science Department, 1995.
[12] S. Kraus, K. Sycara, and A. Evenchik. Reaching agreements
through argumentation: a logical model and implementation.
Artificial Intelligence, 104:1-69, 1998.
[13] S. Parsons, C. Sierra, and N. R. Jennings. Agents that reason and
negotiate by arguing. Journal of Logic and Computation,
8:261-292, 1998.
[14] Bruce A. Wooley. Explanation component of software
systems. ACM CrossRoads, 5.1, 1998.
982 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) | group;case-base reason;learning agent;multi-agent learn;case-based policy;argumentation framework;predictive accuracy;argumentation;learning from communication;argumentation protocol;multi-agent system;collaboration;joint deliberation |
train_I-52 | A Unified and General Framework for Argumentation-based Negotiation | This paper proposes a unified and general framework for argumentation-based negotiation, in which the role of argumentation is formally analyzed. The framework makes it possible to study the outcomes of an argumentation-based negotiation. It shows what an agreement is, how it is related to the theories of the agents, when it is possible, and how this can be attained by the negotiating agents in this case. It defines also the notion of concession, and shows in which situation an agent will make one, as well as how it influences the evolution of the dialogue. | 1. INTRODUCTION
Roughly speaking, negotiation is a process aiming at
finding some compromise or consensus between two or several
agents about some matters of collective agreement, such
as pricing products, allocating resources, or choosing
candidates. Negotiation models have been proposed for the
design of systems able to bargain in an optimal way with
other agents, for example when buying or selling products in e-commerce.
Different approaches to automated negotiation have been
investigated, including game-theoretic approaches (which
usually assume complete information and unlimited
computation capabilities) [11], heuristic-based approaches which try
to cope with these limitations [6], and argumentation-based
approaches [2, 3, 7, 8, 9, 12, 13] which emphasize the
importance of exchanging information and explanations between
negotiating agents in order to mutually influence their
behaviors (e.g. an agent may concede a goal having a small
priority), and consequently the outcome of the dialogue.
Indeed, the first two types of settings do not allow for the
addition of information or for exchanging opinions about offers.
Integrating argumentation theory in negotiation provides a
good means for supplying additional information and also
helps agents to convince each other by adequate arguments
during a negotiation dialogue. Indeed, an offer supported
by a good argument has a better chance to be accepted by
an agent, and can also make him reveal his goals or give
up some of them. The basic idea behind an
argumentation-based approach is that by exchanging arguments, the
theories of the agents (i.e. their mental states) may evolve, and
consequently, the status of offers may change. For instance,
an agent may reject an offer because it is not acceptable for
it. However, the agent may change its mind if it receives a
strong argument in favor of this offer.
Several proposals have been made in the literature for
modeling such an approach. However, the work is still
preliminary. Some researchers have mainly focused on relating
argumentation with protocols. They have shown how and
when arguments in favor of offers can be computed and
exchanged. Others have emphasized on the decision making
problem. In [3, 7], the authors argued that selecting an offer
to propose at a given step of the dialogue is a decision
making problem. They have thus proposed an
argumentation-based decision model, and have shown how such a model
can be related to the dialogue protocol.
In most existing works, there is no deep formal analysis
of the role of argumentation in negotiation dialogues. It is
not clear how argumentation can influence the outcome of
the dialogue. Moreover, basic concepts in negotiation such
as agreement (i.e. optimal solutions, or compromise) and
concession are neither defined nor studied.
This paper aims to propose a unified and general framework
for argumentation-based negotiation, in which the role of
argumentation is formally analyzed, and where the existing
systems can be restated. In this framework, a negotiation
dialogue takes place between two agents on a set O of offers,
whose structure is not known. The goal of a negotiation is to
find among elements of O, an offer that satisfies more or less
the preferences of both agents. Each agent is supposed to
have a theory represented in an abstract way. A theory
consists of a set A of arguments whose structure and origin are
not known, a function specifying for each possible offer in O,
the arguments of A that support it, a non specified conflict
relation among the arguments, and finally a preference
relation between the arguments. The status of each argument is
defined using Dung"s acceptability semantics. Consequently,
the set of offers is partitioned into four subsets: acceptable,
rejected, negotiable and non-supported offers. We show how
an agent"s theory may evolve during a negotiation dialogue.
We define formally the notions of concession, compromise,
and optimal solution. Then, we propose a protocol that
allows agents i) to exchange offers and arguments, and ii) to
make concessions when necessary. We show that dialogues
generated under such a protocol terminate, and even reach
optimal solutions when they exist.
This paper is organized as follows: Section 2 introduces the
logical language that is used in the rest of the paper.
Section 3 defines the agents as well as their theories. In section
4, we study the properties of these agents" theories.
Section 5 defines formally an argumentation-based negotiation,
shows how the theories of agents may evolve during a
dialogue, and how this evolution may influence the outcome of
the dialogue. Two kinds of outcomes: optimal solution and
compromise are defined, and we show when such outcomes
are reached. Section 6 illustrates our general framework
through some examples. Section 7 compares our formalism
with existing ones. Section 8 concludes and presents some
perspectives. Due to lack of space, the proofs are not included; they are given in a technical report that we will make available online.
2. THE LOGICAL LANGUAGE
In what follows, L will denote a logical language, and ≡
is an equivalence relation associated with it.
From L, a set O = {o1, . . . , on} of n offers is identified, such
that ∄ oi, oj ∈ O, i ≠ j, with oi ≡ oj. This means that the
offers are different. Offers correspond to the different
alternatives that can be exchanged during a negotiation dialogue.
For instance, if the agents try to decide the place of their
next meeting, then the set O will contain different towns.
Different arguments can be built from L. The set Args(L)
will contain all those arguments. By argument, we mean a
reason for believing or for doing something. In [3], it has been
argued that the selection of the best offer to propose at a
given step of the dialogue is a decision problem. In [4], it has
been shown that in an argumentation-based approach for
decision making, two kinds of arguments are distinguished:
arguments supporting choices (or decisions), and arguments
supporting beliefs. Moreover, it has been acknowledged that
the two categories of arguments are formally defined in
different ways, and they play different roles. Indeed, an
argument in favor of a decision, built both on an agent"s
beliefs and goals, tries to justify the choice; whereas an
argument in favor of a belief, built only from beliefs, tries
to destroy the decision arguments, in particular the beliefs
part of those decision arguments. Consequently, in a
negotiation dialogue, those two kinds of arguments are generally
exchanged between agents. In what follows, the set Args(L)
is then divided into two subsets: a subset Argso(L) of
arguments supporting offers, and a subset Argsb(L) of arguments
supporting beliefs. Thus, Args(L) = Argso(L) ∪ Argsb(L).
As in [5], in what follows, we consider that the structure of
the arguments is not known.
Since the knowledge bases from which arguments are built
may be inconsistent, the arguments may be conflicting too.
In what follows, those conflicts will be captured by the
relation RL, thus RL ⊆ Args(L) × Args(L). Three assumptions
are made on this relation: First the arguments supporting
different offers are conflicting. The idea behind this
assumption is that since offers are exclusive, an agent has to choose
only one at a given step of the dialogue. Note that, the
relation RL is not necessarily symmetric between the
arguments of Argsb(L). The second hypothesis says that
arguments supporting the same offer are also conflicting. The
idea here is to return the strongest argument among these
arguments. The third condition does not allow an argument
in favor of an offer to attack an argument supporting a
belief. This avoids wishful thinking. Formally:
Definition 1. RL ⊆ Args(L) × Args(L) is a conflict
relation among arguments such that:
• ∀a, a' ∈ Argso(L) s.t. a ≠ a', a RL a'
• ∄ a ∈ Argso(L), a' ∈ Argsb(L) such that a RL a'
Note that the relation RL is not symmetric. This is due
to the fact that arguments of Argsb(L) may be conflicting
but not necessarily in a symmetric way. In what follows, we
assume that the set Args(L) of arguments is finite, and each
argument is attacked by a finite number of arguments.
3. NEGOTIATING AGENTS THEORIES AND
REASONING MODELS
In this section we define formally the negotiating agents,
i.e. their theories, as well as the reasoning model used by
those agents in a negotiation dialogue.
3.1 Negotiating agents theories
Agents involved in a negotiation dialogue, called
negotiating agents, are supposed to have theories. In this paper, the
theory of an agent will not refer, as usual, to its mental states
(i.e. its beliefs, desires and intentions). However, it will be
encoded in a more abstract way in terms of the arguments
owned by the agent, a conflict relation among those
arguments, a preference relation between the arguments, and a
function that specifies which arguments support offers of the
set O. We assume that an agent is aware of all the
arguments of the set Args(L). The agent is even able to express
a preference between any pair of arguments. This does not
mean that the agent will use all the arguments of Args(L),
but it encodes the fact that when an agent receives an
argument from another agent, it can interpret it correctly, and it
can also compare it with its own arguments. Similarly, each
agent is supposed to be aware of the conflicts between
arguments. This also allows us to encode the fact that an agent
can recognize whether the received argument is in conflict
or not with its arguments. However, in its theory, only the
conflicts between its own arguments are considered.
Definition 2 (Negotiating agent theory). Let O
be a set of n offers. A negotiating agent theory is a tuple
⟨A, F, ⪰, R, Def⟩ such that:
• A ⊆ Args(L).
• F: O → 2^A s.t. ∀i, j with i ≠ j, F(oi) ∩ F(oj) = ∅. Let AO = ∪ F(oi) with i = 1, . . . , n.
• ⪰ ⊆ Args(L) × Args(L) is a partial preorder denoting a preference relation between arguments.
• R ⊆ RL such that R ⊆ A × A.
• Def ⊆ A × A such that ∀ a, b ∈ A, a defeats b, denoted a Def b, iff:
- a R b, and
- not (b ⪰ a)
The function F returns the arguments supporting offers in
O. In [4], it has been argued that any decision may have
arguments supporting it, called arguments PRO, and
arguments against it, called arguments CONS. Moreover, these
two types of arguments are not necessarily conflicting. For
simplicity reasons, in this paper we consider only arguments
PRO. Moreover, we assume that an argument cannot
support two distinct offers. However, it may be the case that an
offer is not supported at all by arguments, thus F(oi) may
be empty.
Example 1. Let O = {o1, o2, o3} be a set of offers. The
following theory is the theory of agent i:
• A = {a1, a2, a3, a4}
• F(o1) = {a1}, F(o2) = {a2}, F(o3) = ∅. Thus, Ao =
{a1, a2}
• ⪰ = {(a1, a2), (a2, a1), (a3, a2), (a4, a3)}
• R = {(a1, a2), (a2, a1), (a3, a2), (a4, a3)}
• Def = {(a4, a3), (a3, a2)}
From the above definition of agent theory, the following hold:
Property 1.
• Def ⊆ R
• ∀a, a' ∈ F(oi), a R a'
3.2 The reasoning model
From the theory of an agent, one can define the
argumentation system used by that agent for reasoning about the
offers and the arguments, i.e. for computing the status of
the different offers and arguments.
Definition 3 (Argumentation system). Let ⟨A, F, ⪰, R, Def⟩ be the theory of an agent. The argumentation system of that agent is the pair ⟨A, Def⟩.
In [5], different acceptability semantics have been introduced
for computing the status of arguments. These are based
on two basic concepts, defence and conflict-free, defined as
follows:
Definition 4 (Defence/conflict-free). Let S ⊆ A.
• S defends an argument a iff each argument that defeats
a is defeated by some argument in S.
• S is conflict-free iff there exist no a, a' in S such that a Def a'.
Definition 5 (Acceptability semantics). Let S be
a conflict-free set of arguments, and let T : 2^A → 2^A be a function such that T(S) = {a | a is defended by S}.
• S is a complete extension iff S = T (S).
• S is a preferred extension iff S is a maximal (w.r.t set
⊆) complete extension.
• S is a grounded extension iff it is the smallest (w.r.t
set ⊆) complete extension.
Let E1, . . . , Ex denote the different extensions under a given
semantics.
Note that there is only one grounded extension. It
contains all the arguments that are not defeated, and those
arguments that are defended directly or indirectly by
non-defeated arguments.
Theorem 1. Let ⟨A, Def⟩ be the argumentation system defined as shown above.
1. It may have x ≥ 1 preferred extensions.
2. The grounded extension is S = ⋃_{i≥1} T^i(∅).
Note that when the grounded extension (or the preferred
extension) is empty, this means that there is no acceptable
offer for the negotiating agent.
Example 2. In example 1, there is one preferred
extension, E = {a1, a2, a4}.
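As an illustration of how an agent can compute its defeat relation and the grounded extension, here is a small Python sketch (hypothetical, not from the paper) that builds Def from R and the preference relation, and then iterates the function T from the empty set; with a finite set of arguments the iteration reaches a fixpoint, which is the grounded extension. It is applied to the data of Example 1, where the grounded extension coincides with the preferred extension of Example 2.

def defeat_relation(R, prefers):
    # a Def b iff a R b and not (b is at least as preferred as a).
    # 'prefers' is the set of pairs (x, y) meaning x is at least as preferred as y.
    return {(a, b) for (a, b) in R if (b, a) not in prefers}

def defended(arg, S, Def, all_args):
    # arg is defended by S iff every defeater of arg is defeated by some member of S.
    attackers = [a for a in all_args if (a, arg) in Def]
    return all(any((s, a) in Def for s in S) for a in attackers)

def grounded_extension(all_args, Def):
    S = set()
    while True:
        new_S = {a for a in all_args if defended(a, S, Def, all_args)}
        if new_S == S:
            return S
        S = new_S

# Data of Example 1:
A = {"a1", "a2", "a3", "a4"}
R = {("a1", "a2"), ("a2", "a1"), ("a3", "a2"), ("a4", "a3")}
prefers = {("a1", "a2"), ("a2", "a1"), ("a3", "a2"), ("a4", "a3")}
Def = defeat_relation(R, prefers)          # {("a4", "a3"), ("a3", "a2")}
print(sorted(grounded_extension(A, Def)))  # ['a1', 'a2', 'a4']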
Now that the acceptability semantics is defined, we are ready
to define the status of any argument.
Definition 6 (Argument status). Let ⟨A, Def⟩ be an
argumentation system, and E1, . . . , Ex its extensions under a
given semantics. Let a ∈ A.
1. a is accepted iff a ∈ Ei, ∀Ei with i = 1, . . . , x.
2. a is rejected iff ∄ Ei such that a ∈ Ei.
3. a is undecided iff a is neither accepted nor rejected.
This means that a is in some extensions and not in
others.
Note that A = {a|a is accepted} ∪ {a|a is rejected} ∪ {a|a
is undecided}.
Example 3. In example 1, the arguments a1, a2 and a4
are accepted, whereas the argument a3 is rejected.
As said before, agents use argumentation systems for
reasoning about offers. In a negotiation dialogue, agents propose
and accept offers that are acceptable for them, and reject
bad ones. In what follows, we will define the status of an
offer. According to the status of arguments, one can define
four statuses of the offers as follows:
Definition 7 (Offers status). Let o ∈ O.
• The offer o is acceptable for the negotiating agent iff
∃ a ∈ F(o) such that a is accepted. Oa = {oi ∈ O,
such that oi is acceptable}.
• The offer o is rejected for the negotiating agent iff ∀
a ∈ F(o), a is rejected. Or = {oi ∈ O, such that oi is
rejected}.
• The offer o is negotiable iff ∀ a ∈ F(o), a is undecided.
On = {oi ∈ O, such that oi is negotiable}.
• The offer o is non-supported iff it is neither acceptable, nor rejected, nor negotiable. Ons = {oi ∈ O such that oi is non-supported}.
Example 4. In example 1, the two offers o1 and o2 are
acceptable since they are supported by accepted arguments,
whereas the offer o3 is non-supported since it has no
argument in its favor.
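The partition of Definition 7 is straightforward to compute once argument statuses are known; the Python sketch below (hypothetical helper names) assumes a status assignment over arguments and classifies each offer accordingly, treating unsupported offers and offers with a mix of rejected and undecided supporters as non-supported.

def classify_offers(offers, F, status):
    # F: dict mapping each offer to the set of arguments supporting it (may be empty).
    # status: dict mapping each argument to "accepted", "rejected" or "undecided".
    Oa, Or, On, Ons = set(), set(), set(), set()
    for o in offers:
        args = F.get(o, set())
        if args and any(status[a] == "accepted" for a in args):
            Oa.add(o)
        elif args and all(status[a] == "rejected" for a in args):
            Or.add(o)
        elif args and all(status[a] == "undecided" for a in args):
            On.add(o)
        else:
            # No supporting argument, or a mix of rejected and undecided ones.
            Ons.add(o)
    return Oa, Or, On, Ons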
From the above definitions, the following results hold:
Property 2. Let o ∈ O.
• O = Oa ∪ Or ∪ On ∪ Ons.
• The set Oa may contain more than one offer.
From the above partition of the set O of offers, a preference
relation between offers is defined. Let Ox and Oy be two
subsets of O. Ox ≻ Oy means that any offer in Ox is preferred to any offer in the set Oy. For two offers oi, oj, we also write oi ≻ oj iff oi ∈ Ox, oj ∈ Oy and Ox ≻ Oy.
Definition 8 (Preference between offers). Let O be a set of offers, and Oa, Or, On, Ons its partition. Oa ≻ On ≻ Ons ≻ Or.
Example 5. In example 1, we have o1 ≻ o3, and o2 ≻ o3.
However, o1 and o2 are indifferent.
4. THE STRUCTURE OF NEGOTIATION
THEORIES
In this section, we study the properties of the system
developed above. We first show that in the particular case
where A = AO (ie. all of the agent"s arguments refer to
offers), the corresponding argumentation system will return
at least one non-empty preferred extension.
Theorem 2. Let ⟨A, Def⟩ be an argumentation system such that A = AO. Then the system returns at least one extension E such that |E| ≥ 1.
We now present some results that demonstrate the
importance of indifference in negotiating agents, and more
specifically its relation to acceptable outcomes. We first show that
the set Oa may contain several offers when their
corresponding accepted arguments are indifferent w.r.t. the preference relation ⪰.
Theorem 3. Let o1, o2 ∈ O. o1, o2 ∈ Oa iff ∃ a1 ∈
F(o1), ∃ a2 ∈ F(o2), such that a1 and a2 are accepted and are indifferent w.r.t. ⪰ (i.e. a1 ⪰ a2 and a2 ⪰ a1).
We now study acyclic preference relations that are defined
formally as follows.
Definition 9 (Acyclic relation). A relation R on
a set A is acyclic if there is no sequence a1, a2, . . . , an ∈ A,
with n > 1, such that (ai, ai+1) ∈ R and (an, a1) ∈ R, with
1 ≤ i < n.
Note that acyclicity prohibits pairs of arguments a, b such that a ⪰ b and b ⪰ a, i.e., an acyclic preference relation disallows indifference.
Theorem 4. Let A be a set of arguments, R the
attacking relation on A defined as R ⊆ A × A, and ⪰ an acyclic relation on A. Then for any pair of arguments a, b ∈ A,
such that (a, b) ∈ R, either (a, b) ∈ Def or (b, a) ∈ Def (or
both).
The previous result is used in the proof of the following
theorem that states that acyclic preference relations
sanction extensions that support exactly one offer.
Theorem 5. Let A be a set of arguments, and ⪰ an acyclic relation on A. If E is an extension of ⟨A, Def⟩, then |E ∩ AO| = 1.
An immediate consequence of the above is the following.
Property 3. Let A be a set of arguments such that A = AO. If the relation ⪰ on A is acyclic, then for each extension Ei of ⟨A, Def⟩, |Ei| = 1.
Another direct consequence of the above theorem is that
in acyclic preference relations, arguments that support offers
can participate in only one preferred extension.
Theorem 6. Let A be a set of arguments, and ⪰ an acyclic relation on A. Then the preferred extensions of ⟨A, Def⟩ are pairwise disjoint w.r.t. arguments of AO.
Using the above results we can prove the main theorem of
this section that states that negotiating agents with acyclic
preference relations do not have acceptable offers.
Theorem 7. Let ⟨A, F, ⪰, R, Def⟩ be a negotiating agent theory such that A = AO and ⪰ is an acyclic relation. Then the set of accepted arguments w.r.t. ⟨A, Def⟩ is empty. Consequently, the set of acceptable offers Oa is empty as well.
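Since Theorems 4–7 hinge on whether the preference relation is acyclic, a simple cycle check can be used to detect which case an agent's theory falls into. The Python sketch below is a standard depth-first search, not taken from the paper; note that, by Definition 9, mutual preference between two arguments (indifference) already counts as a cycle.

def is_acyclic(nodes, relation):
    # relation: set of pairs (a, b) meaning "a is preferred to b".
    succ = {n: [b for (a, b) in relation if a == n] for n in nodes}
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}

    def visit(n):
        color[n] = GREY
        for m in succ[n]:
            if color[m] == GREY:   # back edge: a preference cycle exists
                return False
            if color[m] == WHITE and not visit(m):
                return False
        color[n] = BLACK
        return True

    return all(visit(n) for n in nodes if color[n] == WHITE)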
5. ARGUMENTATION-BASED NEGOTIATION
In this section, we define formally a protocol that
generates argumentation-based negotiation dialogues between
two negotiating agents P and C. The two agents
negotiate about an object whose possible values belong to a set
O. This set O is supposed to be known and the same for
both agents. For simplicity reasons, we assume that this
set does not change during the dialogue. The agents are
equipped with theories denoted respectively ⟨A^P, F^P, ⪰^P, R^P, Def^P⟩ and ⟨A^C, F^C, ⪰^C, R^C, Def^C⟩. Note that the
two theories may be different in the sense that the agents
may have different sets of arguments, and different
preference relations. Worse yet, they may have different
arguments in favor of the same offers. Moreover, these theories
may evolve during the dialogue.
5.1 Evolution of the theories
Before defining formally the evolution of an agent"s theory,
let us first introduce the notion of dialogue moves, or moves
for short.
Definition 10 (Move). A move is a tuple mi = ⟨pi, ai, oi, ti⟩ such that:
• pi ∈ {P, C}
• ai ∈ Args(L) ∪ {θ} (in what follows, θ denotes the fact that no argument, or no offer, is given)
• oi ∈ O ∪ {θ}
• ti ∈ N* is the target of the move, such that ti < i
The function Player (resp. Argument, Offer, Target)
returns the player of the move (i.e. pi) (resp. the argument
of a move, i.e ai, the offer oi, and the target of the move,
ti). Let M denote the set of all the moves that can be built
from {P, C}, Arg(L), O .
Note that the set M is finite since Args(L) and O are assumed to be finite. Let us now see how an agent's theory
evolves and why. The idea is that if an agent receives an
argument from another agent, it will add the new argument
to its theory. Moreover, since an argument may bring new
information for the agent, thus new arguments can emerge.
Let us take the following example:
Example 6. Suppose that an agent P has the following
propositional knowledge base: ΣP = {x, y → z}. From this
base one cannot deduce z. Let"s assume that this agent
receives the following argument {a, a → y} that justifies y.
It is clear that now P can build an argument, say {a, a →
y, y → z} in favor of z.
In a similar way, if a received argument is in conflict with the
arguments of the agent i, then those conflicts are also added
to its relation R^i. Note that new conflicts may arise between
the original arguments of the agent and the ones that emerge
after adding the received arguments to its theory. Those new
conflicts should also be considered. As a direct consequence
of the evolution of the sets A^i and R^i, the defeat relation Def^i is also updated.
The initial theory of an agent i (i.e. its theory before the dialogue starts) is denoted by ⟨A^i_0, F^i_0, ⪰^i_0, R^i_0, Def^i_0⟩, with i ∈ {P, C}. Besides, in this paper, we suppose that the preference relation ⪰^i of an agent does not change during the dialogue.
Definition 11 (Theory evolution). Let m1, . . . , mt, . . . , mj be a sequence of moves. The theory of an agent i at a step t > 0 is ⟨A^i_t, F^i_t, ⪰^i_t, R^i_t, Def^i_t⟩ such that:
• A^i_t = A^i_0 ∪ {ak | k = 1, . . . , t, ak = Argument(mk)} ∪ A', with A' ⊆ Args(L)
• F^i_t : O → 2^{A^i_t}
• ⪰^i_t = ⪰^i_0
• R^i_t = R^i_0 ∪ {(ak, al) | ak = Argument(mk), al = Argument(ml), k, l ≤ t, and ak RL al} ∪ R', with R' ⊆ RL
• Def^i_t ⊆ A^i_t × A^i_t
The above definition captures the monotonic aspect of an
argument. Indeed, an argument cannot be removed.
However, its status may change. An argument that is accepted
at step t of the dialogue by an agent may become rejected
at step t + i. Consequently, the status of offers also changes.
Thus, the sets Oa, Or, On, and Ons may change from one
step of the dialogue to another. That means for example
that some offers could move from the set Oa to the set Or
and vice-versa. Note that in the definition of R^i_t, the relation RL is used to denote a conflict between exchanged arguments. The reason is that such a conflict may not be in the set R^i of the agent i. Thus, in order to recognize
such conflicts, we have supposed that the set RL is known
to the agents. This allows us to capture the situation where
an agent is able to prove an argument that it was unable
to prove before, by incorporating in its beliefs some
information conveyed through the exchange of arguments with
another agent. This argument, unknown at the beginning of the dialogue, could give this agent the possibility to defeat an argument that it could not defeat using its initial arguments. This could even lead to a change of the status of these initial arguments, and such a change would in turn change the status of the associated offers.
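To illustrate the bookkeeping behind Definition 11, the Python sketch below (hypothetical data structures, not the paper's formalism) shows how an agent could update its argument set, conflict relation and defeat relation after receiving a move carrying an argument; the emergence of new arguments (the set A' in the definition) is left out for brevity.

def update_theory(theory, move, RL, prefers):
    # theory: dict with keys "A" (set of arguments), "R" (set of conflict pairs), "Def".
    # RL: global conflict relation over Args(L); prefers: the (static) preference relation.
    a = move.argument
    if a is not None and a not in theory["A"]:
        theory["A"].add(a)
        # Record conflicts between the new argument and the ones already owned.
        for b in theory["A"]:
            if (a, b) in RL:
                theory["R"].add((a, b))
            if (b, a) in RL:
                theory["R"].add((b, a))
    # The defeat relation is recomputed from R and the preference relation.
    theory["Def"] = {(x, y) for (x, y) in theory["R"] if (y, x) not in prefers}
    return theory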
In what follows, O^i_{t,x} denotes the set of offers of type x, where x ∈ {a, n, r, ns}, of the agent i at step t of the dialogue. In some places, we use for short the notation O^i_t to denote the partition of the set O at step t for agent i. Note that we have: not(O^i_{t,x} ⊆ O^i_{t+1,x}).
5.2 The notion of agreement
As said in the introduction, negotiation is a process aiming
at finding an agreement about some matters. By agreement,
one means a solution that satisfies to the largest possible
extent the preferences of both agents. In case there is no such
solution, we say that the negotiation fails. In what follows,
we will discuss the different kinds of solutions that may be
reached in a negotiation. The first one is the optimal
solution. An optimal solution is the best offer for both agents.
Formally:
Definition 12 (Optimal solution). Let O be a set
of offers, and o ∈ O. The offer o is an optimal solution at
a step t ≥ 0 iff o ∈ O^P_{t,a} ∩ O^C_{t,a}.
Such a solution does not always exist since agents may have
conflicting preferences. Thus, agents make concessions by
proposing/accepting less preferred offers.
Definition 13 (Concession). Let o ∈ O be an offer.
The offer o is a concession for an agent i iff o ∈ O^i_x such that ∃ O^i_y ≠ ∅ with O^i_y ≻ O^i_x.
During a negotiation dialogue, agents exchange first their
most preferred offers, and if these last are rejected, they
make concessions. In this case, we say that their best offers
are no longer defendable. In an argumentation setting, this
means that the agent has already presented all its arguments
supporting its best offers, and it has no counter argument
against the ones presented by the other agent. Formally:
Definition 14 (Defendable offer). Let ⟨A^i_t, F^i_t, ⪰^i_t, R^i_t, Def^i_t⟩ be the theory of agent i at a step t > 0 of the dialogue. Let o ∈ O such that ∃j ≤ t with Player(mj) = i and Offer(mj) = o. The offer o is defendable by the agent i iff:
• ∃a ∈ F^i_t(o) such that ∄k ≤ t with Argument(mk) = a, or
• ∃a ∈ A^i_t \ F^i_t(o) s.t. a Def^i_t b with
- Argument(mk) = b, k ≤ t, and Player(mk) ≠ i
- ∄l ≤ t with Argument(ml) = a
The offer o is said to be non-defendable otherwise, and ND^i_t is the set of non-defendable offers of agent i at a step t.
5.3 Negotiation dialogue
Now that we have shown how the theories of the agents
evolve during a dialogue, we are ready to define formally
an argumentation-based negotiation dialogue. For that
purpose, we need to define first the notion of a legal
continuation.
Definition 15 (Legal move). A move m is a legal continuation of a sequence of moves m1, . . . , ml iff ∄ j, k ≤ l such that:
• Offer(mj) = Offer(mk), and
• Player(mj) ≠ Player(mk)
The idea here is that if the two agents present the same
offer, then the dialogue should terminate, and there is no
longer possible continuation of the dialogue.
Definition 16 (Argumentation-based negotiation).
An argumentation-based negotiation dialogue d between two
agents P and C is a non-empty sequence of moves m1, . . . , ml
such that:
• pi = P iff i is odd, and pi = C iff i is even
• Player(m1) = P, Argument(m1) = θ, Offer(m1) ≠ θ, and Target(m1) = 0 (the first move has no target)
• ∀ mi, if Offer(mi) ≠ θ, then ∄ oj ∈ O \ (O^{Player(mi)}_{i,r} ∪ ND^{Player(mi)}_i) such that oj ≻ Offer(mi)
• ∀ i = 1, . . . , l, mi is a legal continuation of m1, . . . , mi−1
• Target(mi) = mj such that j < i and Player(mi) ≠ Player(mj)
• If Argument(mi) ≠ θ, then:
- if Offer(mi) ≠ θ then Argument(mi) ∈ F(Offer(mi))
- if Offer(mi) = θ then Argument(mi) Def^{Player(mi)}_i Argument(Target(mi))
• ∄ i, j ≤ l, i ≠ j, such that mi = mj
• ∄ m ∈ M such that m is a legal continuation of m1, . . . , ml
Let D be the set of all possible dialogues.
The first condition says that the two agents take turns. The
second condition says that agent P starts the negotiation
dialogue by presenting an offer. Note that, in the first turn,
we suppose that the agent does not present an argument.
This assumption is made for strategic purposes. Indeed,
arguments are exchanged as soon as a conflict appears. The
third condition ensures that agents exchange their best
offers, but never the rejected ones. This condition also takes
into account the concessions that an agent will have to make
if it is established that a concession is the only option for
it at the current state of the dialogue. Of course, as we
have shown in a previous section, an agent may have several
good or acceptable offers. In this case, the agent chooses
one of them randomly. The fourth condition ensures that
the moves are legal. This condition allows the dialogue to
terminate as soon as an offer is presented by both agents.
The fifth condition allows agents to backtrack. The sixth
condition says that an agent may send arguments in favor
of offers, and in this case the offer should be stated in the
same move. An agent can also send arguments in order to
defeat arguments of the other agent. The next condition
prevents repeating the same move. This is useful for
avoiding loops. The last condition ensures that all the possible
legal moves have been presented.
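A small sketch of the termination test behind the fourth condition (Definition 15), under the same move encoding as in the earlier sketch, simply checks whether some offer has already been put forward by both agents:

def is_legal_continuation(moves):
    # The sequence can still be continued as long as no offer has been
    # proposed by both P and C
    offers_P = {m['offer'] for m in moves if m['player'] == 'P' and m['offer'] is not None}
    offers_C = {m['offer'] for m in moves if m['player'] == 'C' and m['offer'] is not None}
    return not (offers_P & offers_C)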
The outcome of a negotiation dialogue is computed as follows:
Definition 17 (Dialogue outcome). Let d = m_1, . . . , m_l be an argumentation-based negotiation dialogue. The
outcome of this dialogue, denoted Outcome, is Outcome(d) =
Offer(m_l) iff ∃ j < l s.t. Offer(m_l) = Offer(m_j) and
Player(m_l) ≠ Player(m_j). Otherwise, Outcome(d) = θ.
Note that when Outcome(d) = θ, the negotiation fails, and
no agreement is reached by the two agents. However, if
Outcome(d) ≠ θ, the negotiation succeeds, and a solution
that is either optimal or a compromise is found.
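Under the same encoding, the outcome of Definition 17 can be read off the move sequence as follows (a minimal sketch; None again plays the role of θ):

def outcome(moves):
    if not moves:
        return None
    last = moves[-1]
    for m in moves[:-1]:
        if m['offer'] is not None and m['offer'] == last['offer'] \
                and m['player'] != last['player']:
            return last['offer']   # the same offer was proposed by both agents
    return None                    # failure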
Theorem 8. ∀di ∈ D, the argumentation-based
negotiation di terminates.
The above result is of great importance, since it shows that
the proposed protocol avoids loops, and dialogues terminate.
Another important result shows that the proposed protocol
ensures that an optimal solution is reached whenever one exists. Formally:
Theorem 9 (Completeness). Let d = m_1, . . . , m_l be
an argumentation-based negotiation dialogue. If ∃ t ≤ l such
that O^P_{t,a} ∩ O^C_{t,a} ≠ ∅, then Outcome(d) ∈ O^P_{t,a} ∩ O^C_{t,a}.
We show also that the proposed dialogue protocol is sound
in the sense that, if a dialogue returns a solution, then that
solution is for sure a compromise. In other words, that
solution is a common agreement at a given step of the
dialogue. We show also that if the negotiation fails, then there
is no possible solution.
Theorem 10 (Soundness). Let d = m_1, . . . , m_l be an
argumentation-based negotiation dialogue.
1. If Outcome(d) = o, (o ≠ θ), then ∃ t ≤ l such that o ∈ O^P_{t,x} ∩ O^C_{t,y}, with x, y ∈ {a, n, ns}.
2. If Outcome(d) = θ, then ∀ t ≤ l, O^P_{t,x} ∩ O^C_{t,y} = ∅, ∀ x, y ∈ {a, n, ns}.
A direct consequence of the above theorem is the following:
Property 4. Let d = m_1, . . . , m_l be an argumentation-based negotiation dialogue. If Outcome(d) = θ, then ∀ t ≤ l,
• O^P_{t,r} = O^C_{t,a} ∪ O^C_{t,n} ∪ O^C_{t,ns}, and
• O^C_{t,r} = O^P_{t,a} ∪ O^P_{t,n} ∪ O^P_{t,ns}.
6. ILLUSTRATIVE EXAMPLES
In this section we will present some examples in order to
illustrate our general framework.
Example 7 (No argumentation). Let O = {o1, o2}
be the set of all possible offers. Let P and C be two agents,
equipped with the same theory ⟨A, F, ≽, R, Def⟩ such that
A = ∅, F(o1) = F(o2) = ∅, ≽ = ∅, R = ∅, Def = ∅. In
this case, it is clear that the two offers o1 and o2 are
non-supported. The proposed protocol (see Definition 16) will
generate one of the following dialogues:
P: m1 = ⟨P, θ, o1, 0⟩
C: m2 = ⟨C, θ, o1, 1⟩
This dialogue ends with o1 as a compromise. Note that this
solution is not considered optimal since it is not an
acceptable offer for the agents.
P: m1 = ⟨P, θ, o1, 0⟩
C: m2 = ⟨C, θ, o2, 1⟩
P: m3 = ⟨P, θ, o2, 2⟩
This dialogue ends with o2 as a compromise.
P: m1 = ⟨P, θ, o2, 0⟩
C: m2 = ⟨C, θ, o2, 1⟩
This dialogue also ends with o2 as a compromise. The last
possible dialogue is the following one, which ends with o1 as a
compromise.
P: m1 = ⟨P, θ, o2, 0⟩
C: m2 = ⟨C, θ, o1, 1⟩
P: m3 = ⟨P, θ, o1, 2⟩
Note that in the above example, since there is no exchange
of arguments, the theories of both agents do not change. Let
us now consider the following example.
Example 8 (Static theories). Let O = {o1, o2} be
the set of all possible offers. The theory of agent P is ⟨A^P, F^P, ≽^P, R^P, Def^P⟩ such that: A^P = {a1, a2}, F^P(o1) = {a1}, F^P(o2) = {a2}, ≽^P = {(a1, a2)}, R^P = {(a1, a2), (a2, a1)}, Def^P = {(a1, a2)}. The argumentation system ⟨A^P, Def^P⟩ of
this agent will return a1 as an accepted argument, and a2 as
a rejected one. Consequently, the offer o1 is acceptable and
o2 is rejected.
The theory of agent C is ⟨A^C, F^C, ≽^C, R^C, Def^C⟩ such
that: A^C = {a1, a2}, F^C(o1) = {a1}, F^C(o2) = {a2}, ≽^C = {(a2, a1)}, R^C = {(a1, a2), (a2, a1)}, Def^C = {(a2, a1)}. The
argumentation system ⟨A^C, Def^C⟩ of this agent will return
a2 as an accepted argument, and a1 as a rejected one.
Consequently, the offer o2 is acceptable and o1 is rejected.
The only possible dialogues that may take place between
the two agents are the following:
P: m1 = ⟨P, θ, o1, 0⟩
C: m2 = ⟨C, θ, o2, 1⟩
P: m3 = ⟨P, a1, o1, 2⟩
C: m4 = ⟨C, a2, o2, 3⟩
The second possible dialogue is the following:
P: m1 = ⟨P, θ, o1, 0⟩
C: m2 = ⟨C, a2, o2, 1⟩
P: m3 = ⟨P, a1, o1, 2⟩
C: m4 = ⟨C, θ, o2, 3⟩
Both dialogues end with failure. Note that in both dialogues,
the theories of both agents do not change. The reason is that
the exchanged arguments are already known to both agents.
The negotiation fails because the agents have conflicting
preferences.
Let us now consider an example in which argumentation will
allow agents to reach an agreement.
Example 9 (Dynamic theories). Let O = {o1, o2} be
the set of all possible offers. The theory of agent P is ⟨A^P, F^P, ≽^P, R^P, Def^P⟩ such that: A^P = {a1, a2}, F^P(o1) = {a1}, F^P(o2) = {a2}, ≽^P = {(a1, a2), (a3, a1)}, R^P = {(a1, a2), (a2, a1)}, Def^P = {(a1, a2)}. The argumentation
system ⟨A^P, Def^P⟩ of this agent will return a1 as an accepted
argument, and a2 as a rejected one. Consequently, the offer
o1 is acceptable and o2 is rejected.
The theory of agent C is ⟨A^C, F^C, ≽^C, R^C, Def^C⟩ such
that: A^C = {a1, a2, a3}, F^C(o1) = {a1}, F^C(o2) = {a2}, ≽^C = {(a1, a2), (a3, a1)}, R^C = {(a1, a2), (a2, a1), (a3, a1)}, Def^C = {(a1, a2), (a3, a1)}. The argumentation system
⟨A^C, Def^C⟩ of this agent will return a3 and a2 as accepted
arguments, and a1 as a rejected one. Consequently, the offer
o2 is acceptable and o1 is rejected.
The following dialogue may take place between the two
agents:
P: m1 = ⟨P, θ, o1, 0⟩
C: m2 = ⟨C, θ, o2, 1⟩
P: m3 = ⟨P, a1, o1, 2⟩
C: m4 = ⟨C, a3, θ, 3⟩
P: m5 = ⟨P, θ, o2, 4⟩
At step 4 of the dialogue, the agent P receives the
argument a3 from C. Thus, its theory evolves as follows: A^P = {a1, a2, a3}, R^P = {(a1, a2), (a2, a1), (a3, a1)}, Def^P = {(a1, a2), (a3, a1)}. At this step, the argument a1, which was
accepted, becomes rejected, and the argument a2, which was
rejected at the beginning of the dialogue, becomes
accepted. Thus, the offer o2 becomes acceptable for the agent,
whereas o1 becomes rejected. At step 4, the offer o2
is acceptable for both agents, thus it is an optimal solution.
The dialogue ends by returning this offer as an outcome.
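The change of status in Example 9 can be reproduced mechanically. The paper relies on Dung-style acceptability [5] without committing here to a particular semantics; the sketch below uses the grounded extension as one concrete choice, which on these small defeat graphs yields exactly the statuses reported above.

def grounded_extension(args, defeats):
    # args: set of arguments; defeats: set of (attacker, attacked) pairs
    accepted = set()
    changed = True
    while changed:
        changed = False
        for a in args - accepted:
            attackers = {x for (x, y) in defeats if y == a}
            # a is acceptable if each of its attackers is defeated by an accepted argument
            if all(any((d, x) in defeats for d in accepted) for x in attackers):
                accepted.add(a)
                changed = True
    return accepted

# Agent P before receiving a3: a1 defeats a2, so a1 is accepted and a2 rejected.
print(grounded_extension({'a1', 'a2'}, {('a1', 'a2')}))
# Agent P after step 4: a3 defeats a1 and a1 defeats a2, so a2 and a3 are
# accepted while a1 is rejected, and o2 becomes the acceptable offer.
print(grounded_extension({'a1', 'a2', 'a3'}, {('a1', 'a2'), ('a3', 'a1')}))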
7. RELATED WORK
Argumentation was integrated into negotiation
dialogues in the early nineties by Sycara [12]. In that work, the
author emphasized the advantages of using
argumentation in negotiation dialogues, and a specific framework was
introduced. In [8], the different types of arguments
that are used in a negotiation dialogue, such as threats and
rewards, were discussed, and a particular
framework for negotiation was proposed. In [9, 13],
several other frameworks have been proposed. Even if all these
frameworks are based on different logics, and use different
definitions of arguments, they all have at their heart an
exchange of offers and arguments. However, none of those
proposals explains when arguments can be used within a
negotiation, and how they should be dealt with by the agent
that receives them. Thus the protocol for handling
arguments was missing. Another limitation of the above
frameworks is the fact that the argumentation frameworks they
use are quite poor, since they use a very simple
acceptability semantics. In [2] a negotiation framework that fills this
gap was suggested, together with a protocol that handles the
arguments. However, the notion of concession
is not modeled in that framework, and it is not clear what
the status of the outcome of the dialogue is. Moreover, it
is not clear how an agent chooses the offer to propose at a
given step of the dialogue. In [1, 7], the authors have
focused mainly on this decision problem. They have proposed
an argumentation-based decision framework that is used by
agents in order to choose the offer to propose or to accept
during the dialogue. In that work, agents are supposed to
have a belief base and a goal base.
Our framework is more general since it does not impose
any specific structure on the arguments, the offers, or the
beliefs. The negotiation protocol is general as well. Thus this
framework can be instantiated in different ways, thereby
creating different specific argumentation-based
negotiation frameworks, all of them respecting the same properties.
Our framework is also a unified one because frameworks
like the ones presented above can be represented within it.
For example, the decision making mechanism
proposed in [7] for the evaluation of arguments, and
therefore of offers, which is based on a priority relation between
mutually attacked arguments, can be captured by the
defeat relation proposed in our framework. This relation takes
into account simultaneously the attacking and preference
relations that may exist between two arguments.
8. CONCLUSIONS AND FUTURE WORK
In this paper we have presented a unified and general
framework for argumentation-based negotiation. Like any
other argumentation-based negotiation framework, as evoked
in e.g. [10], our framework has all the advantages
that argumentation-based negotiation approaches present
when compared to negotiation approaches based either on
game-theoretic models (see e.g. [11]) or on heuristics ([6]). This
work is a first attempt to formally define the role of
argumentation in the negotiation process. More precisely, for the
first time, it formally establishes the link that exists between
the status of the arguments and the offers they support, it
defines the notion of concession and shows how it influences
the evolution of the negotiation, it determines how the
theories of agents evolve during the dialogue, and it performs an
analysis of the negotiation outcomes. It is also the first time
that a study of the formal properties of the negotiation
theories of the agents, as well as of an argumentative
negotiation dialogue, is presented.
Our future work concerns several points. A first point is
to relax the assumption that the set of possible offers is the
same for both agents. Indeed, it is more natural to assume
that agents may have different sets of offers. During a
negotiation dialogue, these sets will evolve. Arguments in favor
of the new offers may be built from the agent theory. Thus,
the set of offers will be part of the agent theory. Another
possible extension of this work would be to allow agents
to handle arguments both PRO and CON offers. This is
more akin to the way humans take decisions. Considering
both types of arguments will refine the evaluation of the
offers' status. In the proposed model, a preference relation
between offers is defined on the basis of the partition of the
set of offers. This preference relation can be refined. For
instance, among the acceptable offers, one may prefer the
offer that is supported by the strongest argument. In [4],
different criteria have been proposed for comparing decisions.
Our framework can thus be extended by integrating those
criteria. Another interesting point to investigate is that of
considering negotiation dialogues between two agents with
different profiles. By profile, we mean the criterion used by
an agent to compare its offers.
9. REFERENCES
[1] L. Amgoud, S. Belabbes, and H. Prade. Towards a
formal framework for the search of a consensus
between autonomous agents. In Proceedings of the 4th
International Joint Conference on Autonomous Agents
and Multi-Agents systems, pages 537-543, 2005.
[2] L. Amgoud, S. Parsons, and N. Maudet. Arguments,
dialogue, and negotiation. In Proceedings of the 14th
European Conference on Artificial Intelligence, 2000.
[3] L. Amgoud and H. Prade. Reaching agreement
through argumentation: A possibilistic approach. In 9th
International Conference on the Principles of
Knowledge Representation and Reasoning, KR'2004,
2004.
[4] L. Amgoud and H. Prade. Explaining qualitative
decision under uncertainty by argumentation. In 21st
National Conference on Artificial Intelligence,
AAAI'06, pages 16-20, 2006.
[5] P. M. Dung. On the acceptability of arguments and its
fundamental role in nonmonotonic reasoning, logic
programming and n-person games. Artificial
Intelligence, 77:321-357, 1995.
[6] N. R. Jennings, P. Faratin, A. R. Lumuscio,
S. Parsons, and C. Sierra. Automated negotiation:
Prospects, methods and challenges. International
Journal of Group Decision and Negotiation, 2001.
[7] A. Kakas and P. Moraitis. Adaptive agent negotiation
via argumentation. In Proceedings of the 5th
International Joint Conference on Autonomous Agents
and Multi-Agents systems, pages 384-391, 2006.
[8] S. Kraus, K. Sycara, and A. Evenchik. Reaching
agreements through argumentation: a logical model
and implementation. Artificial Intelligence, 104:1-69,
1998.
[9] S. Parsons and N. R. Jennings. Negotiation through
argumentation-a preliminary report. In Proceedings
of the 2nd International Conference on Multi Agent
Systems, pages 267-274, 1996.
[10] I. Rahwan, S. D. Ramchurn, N. R. Jennings,
P. McBurney, S. Parsons, and E. Sonenberg.
Argumentation-based negotiation. Knowledge
Engineering Review, 18 (4):343-375, 2003.
[11] J. Rosenschein and G. Zlotkin. Rules of Encounter:
Designing Conventions for Automated Negotiation
Among Computers. MIT Press, Cambridge,
Massachusetts, 1994.
[12] K. Sycara. Persuasive argumentation in negotiation.
Theory and Decision, 28:203-242, 1990.
[13] F. Tohmé. Negotiation and defeasible reasons for
choice. In Proceedings of the Stanford Spring
Symposium on Qualitative Preferences in Deliberation
and Practical Reasoning, pages 95-102, 1997.
| outcome;belief;agent;argumentation-based negotiation;concession notion;argument;information;argumentation;decision making mechanism;negotiation;framework;solution;theory;notion of concession |
train_I-53 | A Randomized Method for the Shapley Value for the Voting Game | The Shapley value is one of the key solution concepts for coalition games. Its main advantage is that it provides a unique and fair solution, but its main problem is that, for many coalition games, the Shapley value cannot be determined in polynomial time. In particular, the problem of finding this value for the voting game is known to be #P-complete in the general case. However, in this paper, we show that there are some specific voting games for which the problem is computationally tractable. For other general voting games, we overcome the problem of computational complexity by presenting a new randomized method for determining the approximate Shapley value. The time complexity of this method is linear in the number of players. We also show, through empirical studies, that the percentage error for the proposed method is always less than 20% and, in most cases, less than 5%. | 1. INTRODUCTION
Coalition formation, a key form of interaction in multi-agent
systems, is the process of joining together two or more agents so as
to achieve goals that individuals on their own cannot, or to achieve
them more efficiently [1, 11, 14, 13]. Often, in such situations,
there is more than one possible coalition and a player's payoff
depends on which one it joins. Given this, a key problem is to ensure
that none of the parties in a coalition has any incentive to break
away from it and join another coalition (i.e., the coalitions should
be stable). However, in many cases there may be more than one
solution (i.e., a stable coalition). In such cases, it becomes difficult
to select a single solution from among the possible ones, especially
if the parties are self-interested (i.e., they have different preferences
over stable coalitions).
In this context, cooperative game theory deals with the
problem of coalition formation and offers a number of solution
concepts that possess desirable properties like stability, fair division
of joint gains, and uniqueness [16, 14]. Cooperative game theory
differs from its non-cooperative counterpart in that for the former
the players are allowed to form binding agreements and so there is
a strong incentive to work together to receive the largest total
payoff. Also, unlike non-cooperative game theory, cooperative game
theory does not specify a game through a description of the
strategic environment (including the order of players" moves and the set
of actions at each move) and the resulting payoffs, but, instead, it
reduces this collection of data to the coalitional form where each
coalition is represented by a single real number: there are no
actions, moves, or individual payoffs. The chief advantage of this
approach, at least in multiple-player environments, is its practical
usefulness. Thus, many more real-life situations fit more easily into
a coalitional form game, whose structure is more tractable than that
of a non-cooperative game, whether the latter be in normal or extensive
form, and it is for this reason that we focus on coalitional form games
in this paper.
Given these observations, a number of multiagent systems
researchers have used and extended cooperative game-theoretic
solutions to facilitate automated coalition formation [20, 21, 18].
Moreover, in this work, one of the most extensively studied solution
concepts is the Shapley value [19]. A player"s Shapley value gives an
indication of its prospects of playing the game - the higher the
Shapley value, the better its prospects. The main advantage of the
Shapley value is that it provides a solution that is both unique and
fair (see Section 2.1 for a discussion of the property of fairness).
However, while these are both desirable properties, the Shapley
value has one major drawback: for many coalition games, it
cannot be determined in polynomial time. For instance, finding this
value for the weighted voting game is, in general, #P-complete [6].
A problem is #P-hard if solving it is as hard as counting
satisfying assignments of propositional logic formulae [15, p442]. Since
#P-completeness thus subsumes NP-completeness, this implies that
computing the Shapley value for the weighted voting game will be
intractable in general. In other words, it is practically infeasible to
try to compute the exact Shapley value. However, the voting game
has practical relevance to multi-agent systems as it is an important
means of reaching consensus between multiple agents. Hence, our
objective is to overcome the computational complexity of finding
the Shapley value for this game. Specifically, we first show that
there are some specific voting games for which the exact value can
be computed in polynomial time. By identifying such games, we
show, for the first time, when it is feasible to find the exact value and
when it is not. For the computationally complex voting games, we
present a new randomised method, along the lines of Monte-Carlo
simulation, for computing the approximate Shapley value.
The computational complexity of such games has typically been
tackled using two main approaches. The first is to use
generating functions [3]. This method trades time complexity for
storage space. The second uses an approximation technique based on
Monte Carlo simulation [12, 7]. However the method we propose is
more general than either of these (see Section 6 for details).
Moreover, no work has previously analysed the approximation error. The
approximation error relates to how close the approximate is to the
true Shapley value. Specifically, it is the difference between the true
and the approximate Shapley value. It is important to determine
this error because the performance of an approximation method is
evaluated in terms of two criteria: its time complexity, and its
approximation error. Thus, our contribution lies in also in providing,
for the first time, an analysis of the percentage error in the
approximate Shapley value. This analysis is carried out empirically.
Our experiments show that the error is always less than 20%,
and in most cases it is under 5%. Finally, our method has time
complexity linear in the number of players and it does not require
any arrays (i.e., it is economical in terms of both computing time
and storage space). Given this, and the fact that software agents
have limited computational resources and therefore cannot
compute the true Shapley value, our results are especially relevant to
such resource bounded agents.
The rest of the paper is organised as follows. Section 2 defines
the Shapley value and describes the weighted voting game. In
Section 3 we describe voting games whose Shapley value can be found
in polynomial time. In Section 4, we present a randomized method
for finding the approximate Shapley value and analyse its
performance in Section 5. Section 6 discusses related literature. Finally,
Section 7 concludes.
2. BACKGROUND
We begin by introducing coalition games and the Shapley value and
then define the weighted voting game. A coalition game is a game
where groups of players (coalitions) may enforce cooperative
behaviour between their members. Hence the game is a competition
between coalitions of players, rather than between individual
players.
Depending on how the players measure utility, coalition game
theory is split into two parts. If the players measure utility or the
payoff in the same units and there is a means of exchange of utility
such as side payments, we say the game has transferable utility;
otherwise it has non-transferable utility. More formally, a coalition
game with transferable utility, ⟨N, v⟩, consists of:
1. a finite set (N = {1, 2, . . . , n}) of players and
2. a function (v) that associates with every non-empty subset S
of N (i.e., a coalition) a real number v(S) (the worth of S).
For each coalition S, the number v(S) is the total payoff that is
available for division among the members of S (i.e., the set of joint
actions that coalition S can take consists of all possible divisions
of v(S) among the members of S). Coalition games with
nontransferable payoffs differ from ones with transferable payoffs in
the following way. For the former, each coalition is associated with
a set of payoff vectors that is not necessarily the set of all possible
divisions of some fixed amount. The focus of this paper is on the
weighted voting game (described in Section 2.1) which is a game
with transferable payoffs.
Thus, in either case, the players will only join a coalition if they
expect to gain from it. Here, the players are allowed to form
binding agreements, and so there is strong incentive to work together to
receive the largest total payoff. The problem then is how to split the
total payoff between or among the players. In this context, Shapley
[19] constructed a solution using an axiomatic approach. Shapley
defined a value for games to be a function that assigns to a game
(N, v), a number ϕi(N, v) for each i in N. This function satisfies
three axioms [17]:
1. Symmetry. This axiom requires that the names of players
play no role in determining the value.
2. Carrier. This axiom requires that the sum of ϕi(N, v) for all
players i in any carrier C equal v(C). A carrier C is a subset
of N such that v(S) = v(S ∩ C) for any subset of players
S ⊂ N.
3. Additivity. This axiom specifies how the values of different
games must be related to one another. It requires that for
any two games v and v′, ϕ_i(N, v) + ϕ_i(N, v′) = ϕ_i(N, v + v′) for all i in N.
Shapley showed that there is a unique function that satisfies these
three axioms.
Shapley viewed this value as an index for measuring the power
of players in a game. Like a price index or other market indices, the
value uses averages (or weighted averages in some of its
generalizations) to aggregate the power of players in their various cooperation
opportunities. Alternatively, one can think of the Shapley value as
a measure of the utility of risk neutral players in a game.
We first introduce some notation and then define the Shapley
value. Let S denote the set N − {i} and f_i : S → 2^{N−{i}} be
a random variable that takes its values in the set of all subsets of
N − {i}, and has the probability distribution function (g) defined
as:
g(f_i = S) = (|S|! (n − |S| − 1)!) / n!
The random variable f_i is interpreted as the random choice of a
coalition that player i joins. Then, a player's Shapley value is
defined in terms of its marginal contribution. Thus, the marginal
contribution of player i to coalition S with i ∉ S is a function Δ_i v that
is defined as follows:
Δ_i v(S) = v(S ∪ {i}) − v(S)
Thus a player's marginal contribution to a coalition S is the
increase in the value of S as a result of i joining it.
DEFINITION 1. The Shapley value (ϕ_i) of the game ⟨N, v⟩ for
player i is the expectation (E) of its marginal contribution to a
coalition that is chosen randomly:
ϕ_i(N, v) = E[Δ_i v ∘ f_i]   (1)
The Shapley value is interpreted as follows. Suppose that all
the players are arranged in some order, all orderings being equally
likely. Then ϕi(N, v) is the expected marginal contribution, over
all orderings, of player i to the set of players who precede him.
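For very small games, Definition 1 can be evaluated directly by averaging marginal contributions over all orderings. The following sketch (exponential in the number of players, and only intended to make the definition concrete) takes an arbitrary characteristic function v defined on frozensets:

from itertools import permutations

def shapley_value(players, v, i):
    total, count = 0.0, 0
    for order in permutations(players):
        pos = order.index(i)
        before = frozenset(order[:pos])
        total += v(before | {i}) - v(before)   # marginal contribution of i
        count += 1
    return total / count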
The method for finding a player's Shapley value depends on the
definition of the value function (v). This function is different for
different games, but here we focus specifically on the weighted
voting game for the reasons outlined in Section 1.
2.1 The weighted voting game
We adopt the definition of the voting game given in [6]. Thus, there
is a set of n players that may, for example, represent shareholders
in a company or members in a parliament. The weighted voting
game is then a game G = ⟨N, v⟩ in which:
v(S) = 1 if w(S) ≥ q, and v(S) = 0 otherwise,
for some q ∈ IR_+ and w ∈ IR^N_+, where:
w(S) = Σ_{i∈S} w_i
for any coalition S. Thus w_i is the number of votes that player i
has and q is the number of votes needed to win the game (i.e., the
quota).
Note that for this game (denoted ⟨q; w_1, . . . , w_n⟩), a player's
marginal contribution is either zero or one. This is because the
value of any coalition is either zero or one. A coalition with value
zero is called a losing coalition and one with value one a winning
coalition. If a player's entry to a coalition changes it from losing to
winning, then the player's marginal contribution for that coalition
is one; otherwise it is zero.
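As an illustration, the characteristic function of a weighted voting game ⟨q; w_1, . . . , w_n⟩ can be written as follows and plugged into the brute-force sketch given after Definition 1 (the player names and weights are arbitrary):

def voting_game(weights, q):
    # weights: dict mapping each player to its number of votes; q: the quota
    def v(S):
        return 1 if sum(weights[p] for p in S) >= q else 0
    return v

# Example: three players with weights 4, 3 and 2, and quota 6. The coalition
# {p2, p3} has weight 5 < 6, so the entry of p1 turns a losing coalition into
# a winning one and its marginal contribution is 1.
weights = {'p1': 4, 'p2': 3, 'p3': 2}
v = voting_game(weights, 6)
print(v(frozenset({'p1', 'p2', 'p3'})) - v(frozenset({'p2', 'p3'})))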
The main advantage of the Shapley value is that it gives a
solution that is both unique and fair. The property of uniqueness
is desirable because it leaves no ambiguity. The property of
fairness relates to how the gains from cooperation are split between
the members of a coalition. In this case, a player"s Shapley value
is proportional to the contribution it makes as a member of a
coalition; the more contribution it makes, the higher its value. Thus,
from a player"s perspective, both uniqueness and fairness are
desirable properties.
3. VOTING GAMES WITH POLYNOMIAL
TIME SOLUTIONS
Here we describe those voting games for which the Shapley value
can be determined in polynomial time. This is achieved using the
direct enumeration approach (i.e., listing all possible coalitions and
finding a player"s marginal contribution to each of them). We
characterise such games in terms of the number of players and their
weights.
3.1 All players have equal weight
Consider the game ⟨q; j, . . . , j⟩ with m parties. Each party has j
votes. If q ≤ j, then there would be no need for the players to form
a coalition. On the other hand, if q = mj (m = |N| is the number
of players), only the grand coalition is possible. The interesting
games are those for which the quota (q) satisfies the constraint:
(j + 1) ≤ q ≤ j(m − 1). For these games, the value of a coalition
is one if the weight of the coalition is greater than or equal to q,
otherwise it is zero.
Let ϕ denote the Shapley value for a player. Consider any one
player. This player can join a coalition as the ith member where
1 ≤ i ≤ m. However, the marginal contribution of the player is 1
only if it joins a coalition as the ⌈q/j⌉th member. In all other cases,
its marginal contribution is zero. Thus, the Shapley value for each
player is ϕ = 1/m. Since ϕ requires one division operation, it can
be found in constant time (i.e., O(1)).
3.2 A single large party
Consider a game in which there are two types of players: large
(with weight wl > ws) and small (with weight ws). There is one
large player and m small ones. The quota for this game is q; i.e., we
have a game of the form ⟨q; w_l, w_s, w_s, . . . , w_s⟩. The total number
of players is (m + 1). The value of a coalition is one if the weight
of the coalition is greater than or equal to q, otherwise it is zero.
Let ϕl denote the Shapley value for the large player and ϕs that for
each small player.
We first consider ws = 1 and then ws > 1. The smallest
possible value for q is wl + 1. This is because, if q ≤ wl, then the
large party can win the election on its own without the need for
a coalition. Thus, the quota for the game satisfies the constraint
wl + 1 ≤ q ≤ m + wl − 1. Also, the lower and upper limits for
wl are 2 and (q − 1) respectively. The lower limit is 2 because
the weight of the large party has to be greater than each small one.
Furthermore, the weight of the large party cannot be greater than
q, since in that case, there would be no need for the large party
to form a coalition. Recall that for our voting game, a player"s
marginal contribution to a coalition can only be zero or one.
Consider the large party. This party can join a coalition as the
ith member where 1 ≤ i ≤ (m + 1). However, the marginal
contribution of the large party is one if it joins a coalition as the
ith member where (q − wl) ≤ i < q. In all the remaining cases,
its marginal contribution is zero. Thus, out of the total (m + 1)
possible cases, its marginal contribution is one in wl cases. Hence,
the Shapley value of the large party is: ϕl = wl/(m + 1). In the
same way, we obtain the Shapley value of the large party for the
general case where ws > 1 as:
ϕ_l = ⌊w_l/w_s⌋ / (m + 1)
Now consider a small player. We know that the sum of the
Shapley values of all the m+1 players is one. Also, since the small
parties have equal weights, their Shapley values are the same. Hence,
we get:
ϕ_s = (1 − ϕ_l) / m
Thus, both ϕl and ϕs can be computed in constant time. This
is because both require a constant number of basic operations
(addition, subtraction, multiplication, and division). In the same way,
the Shapley value for a voting game with a single large party and
multiple small parties can be determined in constant time.
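A sketch of the closed forms of this subsection; for instance, for the game ⟨7; 4, 2, 2, 2⟩ (one large party of weight 4, m = 3 small parties of weight 2) the formulas give ϕ_l = ⌊4/2⌋/4 = 0.5 and ϕ_s = 1/6, which agrees with direct enumeration over the 4! orderings.

def single_large_party_values(wl, ws, m):
    # One large party of weight wl and m small parties of weight ws each
    phi_l = (wl // ws) / (m + 1)
    phi_s = (1 - phi_l) / m
    return phi_l, phi_s

print(single_large_party_values(4, 2, 3))   # (0.5, 0.1666...)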
3.3 Multiple large and small parties
We now consider a voting game that has two player types: large
and small (as in Section 3.2), but now there are multiple large and
multiple small parties. The set of parties consists of ml large
parties and ms small parties. The weight of each large party is wl and
that of each small one is ws, where ws < wl. We show the
computational tractability for this game by considering the following four
possible scenarios:
S1 q ≤ mlwl and q ≤ msws
S2 q ≤ mlwl and q ≥ msws
S3 q ≥ mlwl and q ≥ msws
S4 q ≥ mlwl and q ≤ msws
For the first scenario, consider a large player. In order to determine
the Shapley value for this player, we need to consider the number
of all possible coalitions that give it a marginal contribution of one.
It is possible for the marginal contribution of this player to be one if
it joins a coalition in which the number of large players is between
zero and ⌊(q − 1)/w_l⌋. In other words, there are ⌊(q − 1)/w_l⌋ + 1 such
cases and we now consider each of them.
Consider a coalition such that when the large player joins in,
there are i large players and ⌊(q − iw_l − 1)/w_s⌋ small players
already in it, and the remaining players join after the large player.
Such a coalition gives the large player unit marginal contribution.
Let C²_l(i, q) denote the number of all such coalitions. To begin,
consider the case i = 0:

C²_l(0, q) = C(m_s, ⌊(q − 1)/w_s⌋) × FACTORIAL(⌊(q − 1)/w_s⌋) × FACTORIAL(m_l + m_s − ⌊(q − 1)/w_s⌋ − 1)

where C(y, x) denotes the number of possible combinations of x
items from a set of y items. For i = 1, we get:

C²_l(1, q) = C(m_l, 1) × C(m_s, ⌊(q − w_l − 1)/w_s⌋) × FACTORIAL(⌊(q − w_l − 1)/w_s⌋) × FACTORIAL(m_l + m_s − ⌊(q − w_l − 1)/w_s⌋ − 1)

In general, for i > 1, we get:

C²_l(i, q) = C(m_l, i) × C(m_s, ⌊(q − iw_l − 1)/w_s⌋) × FACTORIAL(⌊(q − iw_l − 1)/w_s⌋) × FACTORIAL(m_l + m_s − ⌊(q − iw_l − 1)/w_s⌋ − 1)

Thus the large player's Shapley value is:

ϕ_l = Σ_{i=0}^{⌊(q−1)/w_l⌋} C²_l(i, q) / FACTORIAL(m_l + m_s)

For a given i, the time to find C²_l(i, q) is O(T) where

T = (m_l m_s (q − iw_l − 1)(m_l + m_s)) / w_s

Hence, the time to find the Shapley value is O(T q/w_l).
In the same way, a small player's Shapley value is:

ϕ_s = Σ_{i=0}^{⌊(q−1)/w_s⌋} C²_s(i, q) / FACTORIAL(m_l + m_s)

and can be found in time O(T q/w_s). Likewise, the remaining three
scenarios (S2 to S4) can be shown to have the same time
complexity.
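For checking purposes, the same quantity can also be computed directly from Definition 1 by summing over the number of large and small players that precede the player of interest, weighting each configuration by |S|!(n − |S| − 1)!/n!. The sketch below does this for a large player; it performs O(m_l m_s) arithmetic operations and is independent of the closed forms above.

from math import comb, factorial

def shapley_large_two_types(ml, ms, wl, ws, q):
    # Shapley value of one particular large player in a game with ml large
    # players of weight wl, ms small players of weight ws, and quota q
    n = ml + ms
    phi = 0.0
    for i in range(ml):            # i other large players precede the player
        for k in range(ms + 1):    # k small players precede the player
            before = i * wl + k * ws
            if before < q <= before + wl:      # entry turns losing into winning
                ways = comb(ml - 1, i) * comb(ms, k)
                phi += ways * factorial(i + k) * factorial(n - i - k - 1)
    return phi / factorial(n)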
3.4 Three player types
We now consider a voting game that has three player types: 1, 2,
and 3. The set of parties consists of m1 players of type 1 (each
with weight w1), m2 players of type 2 (each with weight w2), and
m3 players of type 3 (each with weight w3).
For this voting game, consider a player of type 1. It is possible
for the marginal contribution of this player to be one if it joins a
coalition in which the number of type 1 players is between zero
and ⌊(q − 1)/w_1⌋. In other words, there are ⌊(q − 1)/w_1⌋ + 1 such
cases and we now consider each of them.
Consider a coalition such that when the type 1 player joins in,
there are i type 1 players already in it. The remaining players join
after the type 1 player. Let C³_1(i, q) denote the number of all such
coalitions that give a marginal contribution of one to the type 1
player, where:

C³_1(i, q) = Σ_{j=0}^{⌊(q−iw_1−1)/w_2⌋} C²_1(j, q − iw_1)

Therefore the Shapley value of the type 1 player is:

ϕ_1 = Σ_{i=0}^{⌊(q−1)/w_1⌋} C³_1(i, q) / FACTORIAL(m_1 + m_2 + m_3)

The time complexity of finding this value is O(T q²/(w_1 w_2)) where:

T = (∏_{i=1}^{3} m_i)(q − iw_1 − 1)(Σ_{i=1}^{3} m_i) / (w_2 + w_3)
Likewise, for the other two player types (2 and 3).
Thus, we have identified games for which the exact Shapley
value can be easily determined. However, the computational
complexity of the above direct enumeration method increases with the
number of player types. For a voting game with more than three
player types, the time complexity of the above method is a
polynomial of degree four or more. To deal with such situations, therefore,
the following section presents a faster randomised method for
finding the approximate Shapley value.
4. FINDING THE APPROXIMATE SHAPLEY
VALUE
We first give a brief introduction to randomized algorithms and
then present our randomized method for finding the approximate
Shapley value. Randomized algorithms are the most commonly
used approach for finding approximate solutions to
computationally hard problems. A randomized algorithm is an algorithm that,
during some of its steps, performs random choices [2]. The
random steps performed by the algorithm imply that by executing the
algorithm several times with the same input we are not guaranteed
to find the same solution. Now, since such algorithms generate
approximate solutions, their performance is evaluated in terms of two
criteria: their time complexity, and their error of approximation.
The approximation error refers to the difference between the
exact solution and its approximation. Against this background, we
present a randomized method for finding the approximate Shapley
value and empirically evaluate its error.
We first describe the general voting game and then present our
randomized algorithm. In its general form, a voting game has more
than two types of players. Let wi denote the weight of player
i. Thus, for m players and for quota q the game is of the form
⟨q; w_1, w_2, . . . , w_m⟩. The weights are specified in terms of a
probability distribution function. For such a game, we want to find the
approximate Shapley value.
We let P denote a population of players. The players' weights
in this population are defined by a probability distribution function.
Irrespective of the actual probability distribution function, let μ be
the mean weight for the population of players and ν the variance in
the players' weights. From this population of players we randomly
draw samples and find the sum of the players' weights in the sample
using the following rule from Sampling Theory (see [8] p425):
If w_1, w_2, . . . , w_n is a random sample of size n drawn
from any distribution with mean μ and variance ν, then
the sample sum has an approximate Normal
distribution with mean nμ and variance ν/n (the larger the n, the
better the approximation).
R-SHAPLEYVALUE (P, μ, ν, q, w_i)
P: Population of players
μ: Mean weight of the population P
ν: Variance in the weights for population P
q: Quota for the voting game
w_i: Player i's weight
1. T_i ← 0; a ← q − w_i; b ← q − ε
2. For X from 1 to m repeatedly do the following
2.1. Select a random sample S_X of size X from the population P
2.2. Evaluate the expected marginal contribution (Δ^X_i) of player i to S_X as:
Δ^X_i ← (1/√(2πν/X)) ∫_a^b e^{−X(x−Xμ)²/(2ν)} dx
2.3. T_i ← T_i + Δ^X_i
3. Evaluate the Shapley value of player i as:
ϕ_i ← T_i/m
Table 1: Randomized algorithm to find the Shapley value for
player i.
We know from Definition 1 that the Shapley value for a player is
the expectation (E) of its marginal contribution to a coalition that is
chosen randomly. We use this rule to determine the Shapley value
as follows.
For player i with weight w_i, let ϕ_i denote the Shapley value. Let
X denote the size of a random sample drawn from a population
in which the individual player weights have any distribution. The
marginal contribution of player i to this random sample is one if the
total weight of the X players in the sample is greater than or equal
to a = q − w_i but less than b = q − ε (where ε is an infinitesimally
small quantity). Otherwise, its marginal contribution is zero. Thus,
the expected marginal contribution of player i (denoted Δ^X_i) to the
sample coalition is the area under the curve defined by N(Xμ, ν/X)
in the interval [a, b]. This area is shown as the region B in Figure 1
(the dotted line in the figure is Xμ). Hence we get:
Δ^X_i = (1/√(2πν/X)) ∫_a^b e^{−X(x−Xμ)²/(2ν)} dx   (2)
and the Shapley value is:
ϕ_i = (1/m) Σ_{X=1}^{m} Δ^X_i   (3)
The above steps are described in Table 1. In more detail, Step
1 does the initialization. In Step 2, we vary X between 1 and m
and repeatedly do the following. In Step 2.1, we randomly select a
sample S_X of size X from the population P. Player i's marginal
contribution to the random coalition SX is found in Step 2.2. The
average marginal contribution is found in Step 3 - and this is the
Shapley value for player i.
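A sketch of R-ShapleyValue in Python: since the integral in Step 2.2 is just the mass of N(Xμ, ν/X) over [a, b], it can be evaluated with the standard normal CDF (here via math.erf), so the random sampling of Step 2.1 is implicit. The parameter eps plays the role of the infinitesimal quantity in b = q − ε.

from math import erf, sqrt

def normal_cdf(x, mean, var):
    return 0.5 * (1.0 + erf((x - mean) / sqrt(2.0 * var)))

def r_shapley_value(m, mu, nu, q, wi, eps=1e-9):
    # m: number of players; mu, nu: mean and variance of the weight population;
    # q: quota; wi: weight of player i
    a, b = q - wi, q - eps
    total = 0.0
    for X in range(1, m + 1):
        mean, var = X * mu, nu / X     # distribution of the sample sum used in the paper
        total += normal_cdf(b, mean, var) - normal_cdf(a, mean, var)
    return total / m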
THEOREM 1. The time complexity of the proposed randomized
method is linear in the number of players.
PROOF. As per Equation 3, Δ^X_i must be computed m times.
This is done in the for loop of Step 2 in Table 1. Hence, the time
complexity of computing a player's Shapley value is O(m).
The following section analyses the approximation error for the
proposed method.
5. PERFORMANCE OF THE RANDOMIZED
METHOD
We first derive the formula for measuring the error in the
approximate Shapley value and then conduct experiments for evaluating
this error in a wide range of settings. However, before doing so, we
introduce the idea of error.
The concept of error relates to a measurement made of a
quantity which has an accepted value [22, 4]. Obviously, it cannot be
determined exactly how far off a measurement is from the accepted
value; if this could be done, it would be possible to just give a more
accurate, corrected value. Thus, error has to do with uncertainty in
measurements that nothing can be done about. If a measurement is
repeated, the values obtained will differ and none of the results can
be preferred over the others. However, although it is not possible
to do anything about such error, it can be characterized.
As described in Section 4, we make measurements on samples
that are drawn randomly from a given population (P) of players.
Now, there are statistical errors associated with sampling which
are unavoidable and must be lived with. Hence, if the result of a
measurement is to have meaning it cannot consist of the measured
value alone. An indication of how accurate the result is must be
included also. Thus, the result of any physical measurement has
two essential components:
1. a numerical value giving the best estimate possible of the
quantity measured, and
2. the degree of uncertainty associated with this estimated value.
For example, if the estimate of a quantity is x and the uncertainty
is e(x) the quantity would lie in x ± e(x).
For sampling experiments, the standard error is by far the most
common way of characterising uncertainty [22]. Given this, the
following section defines this error and uses it to evaluate the
performance of the proposed randomized method.
5.1 Approximation error
The accuracy of the above randomized method depends on its
sampling error which is defined as follows [22, 4]:
DEFINITION 2. The sampling error (or standard error) is
defined as the standard deviation for a set of measurements divided
by the square root of the number of measurements.
To this end, let e(σ^X) be the sampling error in the sum of the
weights for a sample of size X drawn from the distribution N(Xμ, ν/X),
where:
e(σ^X) = √(ν/X) / √X = √ν / X   (4)
Let e(Δ^X_i) denote the error in the marginal contribution for player
i (given in Equation 2). This error is obtained by propagating the
error in Equation 4 to Equation 2. In Equation 2, a and b are the
lower and upper limits for the sum of the players' weights for a
coalition of size X.
Figure 1: A normal distribution for the sum of players' weights
in a coalition of size X, with the regions A (between a − e(σ^X) and a),
B (between a and b), and C (between b and b + e(σ^X)) marked, and
Z1, Z2 denoting the points two standard deviations below and above
the mean Xμ.
Figure 2: Performance of the randomized method for m = 10
players (percentage error in the Shapley value as a function of
quota and weight).
Since the error in this sum is e(σ^X), the actual
values of a and b lie in the intervals a ± e(σ^X) and b ± e(σ^X)
respectively. Hence, the error in Equation 2 is either the probability
that the sum lies between the limits a − e(σ^X) and a (i.e., the area
under the curve defined by N(Xμ, ν/X) between a − e(σ^X) and a,
which is the shaded region A in Figure 1) or the probability that the
sum of weights lies between the limits b and b + e(σ^X) (i.e., the area
under the curve defined by N(Xμ, ν/X) between b and b + e(σ^X),
which is the shaded region C in Figure 1). More specifically, the
error is the maximum of these two probabilities:

e(Δ^X_i) = (1/√(2πν/X)) × MAX( ∫_{a−e(σ^X)}^{a} e^{−X(x−Xμ)²/(2ν)} dx , ∫_{b}^{b+e(σ^X)} e^{−X(x−Xμ)²/(2ν)} dx )
On the basis of the above error, we find the error in the Shapley
value by using the following standard error propagation rules [22]:
R1 If x and y are two random variables with errors e(x) and e(y)
respectively, then the error in the random variable z = x + y
is given by:
e(z) = e(x) + e(y)
R2 If x is a random variable with error e(x) and z = kx where
the constant k has no error, then the error in z is:
e(z) = |k| e(x)
Figure 3: Performance of the randomized method for m = 50
players (percentage error in the Shapley value as a function of
quota and weight).
Using the above rules, the error in the Shapley value (given in
Equation 3) is obtained by propagating the error in Equation 4 to
all coalitions between the sizes X = 1 and X = m. Let e(ϕ_i)
denote this error, where:

e(ϕ_i) = (1/m) Σ_{X=1}^{m} e(Δ^X_i)
We analyze the performance of our method in terms of the
percentage error PE in the approximate Shapley value which is defined
as follows:
PE = 100 × e(ϕi)/ϕi (5)
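The error analysis of this section can be sketched in the same style, reusing normal_cdf and r_shapley_value from the previous sketch: e(σ^X) = √ν/X, the error in each expected marginal contribution is the larger of the two strips around a and b, and PE follows from Equations 4 and 5.

def percentage_error(m, mu, nu, q, wi, eps=1e-9):
    a, b = q - wi, q - eps
    err = 0.0
    for X in range(1, m + 1):
        mean, var = X * mu, nu / X
        e_sigma = sqrt(nu) / X                               # Equation 4
        strip_a = normal_cdf(a, mean, var) - normal_cdf(a - e_sigma, mean, var)
        strip_b = normal_cdf(b + e_sigma, mean, var) - normal_cdf(b, mean, var)
        err += max(strip_a, strip_b)                         # error in Delta^X_i
    e_phi = err / m
    phi = r_shapley_value(m, mu, nu, q, wi, eps)
    return 100.0 * e_phi / phi                               # Equation 5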
5.2 Experimental Results
We now compute the percentage error in the Shapley value using
the above equation for PE. Since this error depends on the
parameters of the voting game, we evaluate it in a range of settings by
systematically varying the parameters of the voting game.
In particular, we conduct experiments in the following setting.
For a player with weight w, the percentage error in a player's
Shapley value depends on the following five parameters (see Equation 3):
1. The number of parties (m).
2. The mean weight (μ).
3. The variance in the players' weights (ν).
4. The quota for the voting game (q).
5. The given player's weight (w).
We fix μ = 10 and ν = 1. This is because, for the normal
distribution, μ = 10 ensures that for almost all the players the
weight is positive, and ν = 1 is used most commonly in statistical
experiments (ν can be higher or lower but PE is increasing in
ν - see Equations 4 and 5). We then vary m, q, and w as follows. We
vary m between 5 and 100 (since beyond 100 we found that the
error is close to zero), for each m we vary q between 4μ and mμ
(we impose these limits because they ensure that the size of the
winning coalition is more than one and less than m - see Section 3
for details), and for each q, we vary w between 1 and q − 1 (because
a winning coalition must contain at least two players). The results
of these experiments are shown in Figures 2, 3, and 4. As seen in
the figures, the maximum PE is around 20% and in most cases it is
below 5%.
Figure 4: Performance of the randomized method for m = 100
players (percentage error in the Shapley value as a function of
quota and weight).
We now analyse the effect of the three parameters: w, q, and m
on the percentage error in more detail.
- Effect of w. The PE depends on e(σX
) because, in
Equation 5, the limits of integration depend on e(σX
). The
interval over which the first integration in Equation 5 is done is
a − a + e(σX
) = e(σX
), and the interval over which the
second one is done is b + e(σX
) − b = e(σX
). Thus, the
interval is the same for both integrations and it is independent
of wi. Note that each of the two functions that are integrated
in Equation 5 are the same as the function that is integrated
in Equation 2. Only the limits of the integration are different.
Also, the interval over which the integration for the marginal
contribution of Equation 2 is done is b − a = w_i − ε (see
Figure 1). The error in the marginal contribution is either the
area of the shaded region A (between a − e(σX
) and a) in
Figure 1, or the shaded area C (between b and b + e(σX
)).
As per Equation 5, it is the maximum of these two areas.
Since e(σX
) is independent of wi, as wi increases, e(σX
)
remains unchanged. However, the area of the unshaded
region B increases. Hence, as wi increases, the error in the
marginal contribution decreases and PE also decreases.
- Effect of q. For a given q, the Shapley value for player i is
as given in Equation 3. We know that, for a sample of size
X, the sum of the players' weights is distributed normally
with mean Xμ and variance ν/X. Since about 95% of a normal
distribution lies within two standard deviations of its mean
[8], player i's marginal contribution to a sample of size X is
almost zero if:
a > Xμ + 2√(ν/X) or b < Xμ − 2√(ν/X)
This is because the three regions A, B, and C (in Figure 1)
then lie either to the right of Z2 or to the left of Z1. However,
player i's marginal contribution is greater than zero for those
X for which the following constraint is satisfied:
Xμ − 2√(ν/X) < a < b < Xμ + 2√(ν/X)   (6)
For this constraint, the three regions A, B, and C lie
somewhere between Z1 and Z2. Since a = q − w_i and b = q − ε,
Equation 6 can also be written as:
Xμ − 2√(ν/X) < q − w_i < q − ε < Xμ + 2√(ν/X)
The smallest X that satisfies the constraint in Equation 6
strictly increases with q. As X increases, the error in the sum
of weights in a sample (i.e., e(σ^X) = √ν/X) decreases.
Consequently, the error in a player's marginal contribution
(see Equation 5) also decreases. This implies that as q
increases, the error in the marginal contribution (and
consequently the error in the Shapley value) decreases.
- Effect of m. It is clear from Equation 4 that the error e(σ^X)
is highest for X = 1 and that it decreases with X. Hence, for
small m, e(σ^1) has a significant effect on PE. But as m
increases, the effect of e(σ^1) on PE decreases and, as a result,
PE decreases.
6. RELATED WORK
In order to overcome the computational complexity of finding the
Shapley value, two main approaches have been proposed in the
literature. One approach is to use generating functions [3]. This
method is an exact procedure that overcomes the problem of time
complexity, but its storage requirements are substantial - it requires
huge arrays. It also has the limitation (not shared by other
approaches) that it can only be applied to games with integer weights
and quotas.
The other method uses an approximation technique based on
Monte Carlo simulation. In [12], for instance, the Shapley value is
computed by considering a random sample from a large population
of players. The method we propose differs from this in that they
define the Shapley value by treating a player"s number of swings (if a
player can change a losing coalition to a winning one, then, for the
player, the coalition is counted as a swing) as a random variable,
while we treat the players" weights as random variables. In [12],
however, the question remains how to get the number of swings
from the definition of a voting game and what is the time
complexity of doing this. Since the voting game is defined in terms of the
players" weights and the number of swings are obtained from these
weights, our method corresponds more closely to the definition of
the voting game. Our method also differs from [7] in that while [7]
presents a method for the case where all the players' weights are
distributed normally, our method applies to any type of distribution
for these weights. Thus, as stated in Section 1, our method is more
general than [3, 12, 7]. Also, unlike all the above-mentioned work,
we provide an analysis of the performance of our method in terms
of the percentage error in the approximate Shapley value.
A method for finding the Shapley value was also proposed in
[5]. This method gives the exact Shapley value, but its time
complexity is exponential. Furthermore, the method can be used only
if the game is represented in a specific form (viz., the multi-issue
representation), not otherwise. Finally, [9, 10] present a
polynomial time method for finding the Shapley value. This method can
be used if the coalition game is represented as a marginal
contribution net. Furthermore, they assume that the Shapley value of
a component of a given coalition game is given by an oracle, and
on the basis of this assumption aggregate these values to find the
value for the overall game. In contrast, our method is independent
of the representation and gives an approximate Shapley value in
linear time, without the need for an oracle.
7. CONCLUSIONS AND FUTURE WORK
Coalition formation is an important form of interaction in
multiagent systems. An important issue in such work is for the agents to
decide how to split the gains from cooperation between the
members of a coalition. In this context, cooperative game theory offers
a solution concept called the Shapley value. The main advantage of
the Shapley value is that it provides a solution that is both unique
and fair. However, its main problem is that, for many coalition
games, the Shapley value cannot be determined in polynomial time.
In particular, the problem of finding this value for the voting game
is #P-complete. Although this problem is, in general #P-complete,
we show that there are some specific voting games for which the
Shapley value can be determined in polynomial time and
characterise such games. By doing so, we have shown when it is
computationally feasible to find the exact Shapley value. For other complex
voting games, we presented a new randomized method for
determining the approximate Shapley value. The time complexity of the
proposed method is linear in the number of players. We analysed
the performance of this method in terms of the percentage error in
the approximate Shapley value.
Our experiments show that the percentage error in the Shapley
value is at most 20%. Furthermore, in most cases, the error is less
than 5%. Finally, we analyse the effect of the different parameters
of the voting game on this error. Our study shows that the error
decreases as
1. a player's weight increases,
2. the quota increases, and
3. the number of players increases.
Given the fact that software agents have limited computational
resources and therefore cannot compute the true Shapley value, our
results are especially relevant to such resource bounded agents. In
future, we will explore the problem of determining the Shapley
value for other commonly occurring coalition games like the
production economy and the market economy.
966 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) | mean of reaching consensus;approximation;unique and fair solution;polynomial time;reaching consensus mean;generating function;coalition formation;shapley value;game-theory;cooperative game theory;interaction;multi-agent system;randomised method |
train_I-54 | Approximate and Online Multi-Issue Negotiation | This paper analyzes bilateral multi-issue negotiation between selfinterested autonomous agents. The agents have time constraints in the form of both deadlines and discount factors. There are m > 1 issues for negotiation where each issue is viewed as a pie of size one. The issues are indivisible (i.e., individual issues cannot be split between the parties; each issue must be allocated in its entirety to either agent). Here different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. For such negotiations, we first obtain the equilibrium strategies for the case where the issues for negotiation are known a priori to the parties. Then, we analyse their time complexity and show that finding the equilibrium offers is an NP-hard problem, even in a complete information setting. In order to overcome this computational complexity, we then present negotiation strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm/ 2 ) where n is the negotiation deadline and the relative error. Finally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O( √ m) . These approximate strategies also have polynomial time complexity. | 1. INTRODUCTION
Negotiation is a key form of interaction in multiagent systems. It
is a process in which disputing agents decide how to divide the
gains from cooperation. Since this decision is made jointly by the
agents themselves [20, 19, 13, 15], each party can only obtain what
the other is prepared to allow them. Now, the simplest form of
negotiation involves two agents and a single issue. For example,
consider a scenario in which a buyer and a seller negotiate on the
price of a good. To begin, the two agents are likely to differ on the
price at which they believe the trade should take place, but through
a process of joint decision-making they either arrive at a price that
is mutually acceptable or they fail to reach an agreement. Since
agents are likely to begin with different prices, one or both of them
must move toward the other, through a series of offers and counter
offers, in order to obtain a mutually acceptable outcome. However,
before the agents can actually perform such negotiations, they must
decide the rules for making offers and counter offers. That is, they
must set the negotiation protocol [20]. On the basis of this protocol,
each agent chooses its strategy (i.e., what offers it should make
during the course of negotiation). Given this context, this work
focuses on competitive scenarios with self-interested agents. For
such cases, each participant defines its strategy so as to maximise
its individual utility.
However, in most bilateral negotiations, the parties involved need
to settle more than one issue. For this case, the issues may be
divisible or indivisible [4]. For the former, the problem for the agents
is to decide how to split each issue between themselves [21]. For
the latter, the individual issues cannot be divided. An issue, in its
entirety, must be allocated to either of the two agents. Since the
agents value different issues differently, they must come to terms
about who will take which issue. To date, most of the existing
work on multi-issue negotiation has focussed on the former case
[7, 2, 5, 23, 11, 6]. However, in many real-world settings, the
issues are indivisible. Hence, our focus here is on negotiation for
indivisible issues. Such negotiations are very common in
multiagent systems. For example, consider the case of task allocation
between two agents. There is a set of tasks to be carried out and
different agents have different preferences for the tasks. The tasks
cannot be partitioned; a task must be carried out by one agent. The
problem then is for the agents to negotiate about who will carry out
which task.
A key problem in the study of multi-issue negotiation is to
determine the equilibrium strategies. An equally important problem,
especially in the context of software agents, is to find the time
complexity of computing the equilibrium offers. However, such
computational issues have so far received little attention. As we will
show, this is mainly due to the fact that existing work (described in
Section 5) has mostly focused on negotiation for divisible issues
and finding the equilibrium for this case is computationally easier
than that for the case of indivisible issues. Our primary objective is,
therefore, to answer the computational questions for the latter case
for the types of situations that are commonly faced by agents in
real-world contexts. Thus, we consider negotiations in which there
is incomplete information and time constraints. Incompleteness of
information on the part of negotiators is a common feature of most
practical negotiations. Also, agents typically have time constraints
in the form of both deadlines and discount factors. Deadlines are an
essential element since negotiation cannot go on indefinitely, rather
it must end within a reasonable time limit. Likewise, discount
factors are essential since the goods may be perishable or their value
may decline due to inflation. Moreover, the strategic behaviour of
agents with deadlines and discount factors differs from those
without (see [21] for single issue bargaining without deadlines and [23,
13] for bargaining with deadlines and discount factors in the
context of divisible issues).
Given this, we consider indivisible issues and first analyze the
strategic behaviour of agents to obtain the equilibrium strategies
for the case where all the issues for negotiation are known a priori
to both agents. For this case, we show that the problem of finding
the equilibrium offers is NP-hard, even in a complete information
setting. Then, in order to overcome the problem of time
complexity, we present strategies that are approximately optimal but
computationally efficient, and show that they form an equilibrium. We
also analyze the relative error (i.e., the difference between the true
optimum and the approximate). The time complexity of the
approximate equilibrium strategies is O(nm/ε²), where n is the
negotiation deadline and ε the relative error. Finally, we extend the
analysis to online negotiation where different issues become
available at different time points and the agents are uncertain about their
valuations for these issues. Specifically, we show that an
approximate equilibrium exists for online negotiation and show that the
expected difference between the optimum and the approximate is
O(√m). These approximate strategies also have polynomial time
complexity.
In so doing, our contribution lies in analyzing the computational
complexity of the above multi-issue negotiation problem, and
finding the approximate and online equilibria. No previous work has
determined these equilibria. Since software agents have limited
computational resources, our results are especially relevant to such
resource bounded agents.
The remainder of the paper is organised as follows. We begin by
giving a brief overview of single-issue negotiation in Section 2. In
Section 3, we obtain the equilibrium for multi-issue negotiation and
show that finding equilibrium offers is an NP-hard problem. We
then present an approximate equilibrium and evaluate its
approximation error. Section 4 analyzes online multi-issue negotiation.
Section 5 discusses the related literature and Section 6 concludes.
2. SINGLE-ISSUE NEGOTIATION
We adopt the single issue model of [27] because this is a model
where, during negotiation, the parties are allowed to make offers
from a set of discrete offers. Since our focus is on indivisible issues
(i.e., parties are allowed to make one of two possible offers: zero
or one), our scenario fits in well with [27]. Hence we use this basic
single issue model and extend it to multiple issues. Before doing
so, we give an overview of this model and its equilibrium strategies.
There are two strategic agents: a and b. Each agent has time
constraints in the form of deadlines and discount factors. The two
agents negotiate over a single indivisible issue (i). This issue is a
‘pie" of size 1 and the agents want to determine who gets the pie.
There is a deadline (i.e., a number of rounds by which negotiation
must end). Let n ∈ N+
denote this deadline. The agents use an
alternating offers protocol (as the one of Rubinstein [18]), which
proceeds through a series of time periods. One of the agents, say
a, starts negotiation in the first time period (i.e., t = 1) by making
an offer (xi = 0 or 1) to b. Agent b can either accept or reject the
offer. If it accepts, negotiation ends in an agreement with a getting
xi and b getting yi = 1 − xi. Otherwise, negotiation proceeds to
the next time period, in which agent b makes a counter-offer. This
process of making offers continues until one of the agents either
accepts an offer or quits negotiation (resulting in a conflict). Thus,
there are three possible actions an agent can take during any time
period: accept the last offer, make a new counter-offer, or quit the
negotiation.
An essential feature of negotiations involving alternating offers
is that the agents" utilities decrease with time [21]. Specifically,
the decrease occurs at each step of offer and counteroffer. This
decrease is represented with a discount factor denoted 0 < δ_i ≤ 1
for both agents (see footnote 1).
Let [x_i^t, y_i^t] denote the offer made at time period t, where x_i^t and
y_i^t denote the shares for agents a and b respectively. Then, for a given
pie, the set of possible offers is:
{[x_i^t, y_i^t] : x_i^t = 0 or 1, y_i^t = 0 or 1, and x_i^t + y_i^t = 1}
At time t, if a and b receive shares x_i^t and y_i^t respectively, then
their utilities are:
u_i^a(x_i^t, t) = x_i^t × δ_i^{t−1} if t ≤ n, and 0 otherwise
u_i^b(y_i^t, t) = y_i^t × δ_i^{t−1} if t ≤ n, and 0 otherwise
The conflict utility (i.e., the utility received in the event that no deal
is struck) is zero for both agents.
For the above setting, the agents reason as follows in order to
determine what to offer at t = 1. We let A(1) (B(1)) denote a"s
(b"s) equilibrium offer for the first time period. Let agent a denote
the first mover (i.e., at t = 1, a proposes to b who should get the
pie). To begin, consider the case where the deadline for both agents
is n = 1. If b accepts, the division occurs as agreed; if not, neither
agent gets anything (since n = 1 is the deadline). Here, a is in a
powerful position and is able to propose to keep 100 percent of the
pie and give nothing to b (see footnote 2). Since the deadline is n = 1, b accepts
this offer and agreement takes place in the first time period.
Now, consider the case where the deadline is n = 2. In order to
decide what to offer in the first round, a looks ahead to t = 2 and
reasons backwards. Agent a reasons that if negotiation proceeds
to the second round, b will take 100 percent of the pie by offering
[0, 1] and leave nothing for a. Thus, in the first time period, if a
offers b anything less than the whole pie, b will reject the offer.
Hence, during the first time period, agent a offers [0, 1]. Agent b
accepts this and an agreement occurs in the first time period.
In general, if the deadline is n, negotiation proceeds as follows.
As before, agent a decides what to offer in the first round by
looking ahead as far as t = n and then reasoning backwards. Agent a"s
(Footnote 1: Having a different discount factor for different agents only makes
the presentation more involved without leading to any changes in
the analysis of the strategic behaviour of the agents or the time
complexity of finding the equilibrium offers. Hence we have a single
discount factor for both agents.)
(Footnote 2: It is possible that b may reject such a proposal. However,
irrespective of whether b accepts or rejects the proposal, it gets zero utility
(because the deadline is n = 1). Thus, we assume that b accepts
a"s offer.)
offer for t = 1 depends on who the offering agent is for the last
time period. This, in turn, depends on whether n is odd or even.
Since a makes an offer at t = 1 and the agents use the alternating
offers protocol, the offering agent for the last time period is b if n
is even and it is a if n is odd. Thus, depending on whether n is odd
or even, a makes the following offer at t = 1:
A(1) = { OFFER [1, 0] IF ODD n; ACCEPT IF b"s TURN }
B(1) = { OFFER [0, 1] IF EVEN n; ACCEPT IF a"s TURN }
Agent b accepts this offer and negotiation ends in the first time
period. Note that the equilibrium outcome depends on who makes
the first move. Since we have two agents and either of them could
move first, we get two possible equilibrium outcomes.
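As a concrete illustration of this backward reasoning, the short Python sketch below (our own construction; the function name and data representation are assumptions rather than part of the formal model) computes the single-issue equilibrium outcome for a given deadline n and discount factor δ:

```python
# Equilibrium outcome of single-issue alternating-offers negotiation over an
# indivisible pie with deadline n and discount factor delta (illustrative
# sketch). The proposer of the last period (a if n is odd, b if n is even,
# given that a moves first) receives the whole pie, and agreement is reached
# in the first time period.
def single_issue_equilibrium(n, delta, first_mover="a"):
    other = "b" if first_mover == "a" else "a"
    last_proposer = first_mover if n % 2 == 1 else other
    x_a = 1 if last_proposer == "a" else 0           # a's share of the pie
    t = 1                                            # agreement at t = 1
    utilities = {"a": x_a * delta ** (t - 1),
                 "b": (1 - x_a) * delta ** (t - 1)}
    return {"agreement_time": t, "a_share": x_a, "utilities": utilities}

print(single_issue_equilibrium(n=2, delta=0.5))      # b gets the pie
```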
On the basis of the above equilibrium for single-issue
negotiation with complete information, we first obtain the equilibrium for
multiple issues and then show that computing these offers is a hard
problem. We then present a time efficient approximate equilibrium.
3. MULTI-ISSUE NEGOTIATION
We first analyse the complete information setting. This section
forms the base which we extend to the case of information
uncertainty in Section 4.
Here a and b negotiate over m > 1 indivisible issues. These
issues are m distinct pies and the agents want to determine how
to distribute the pies between themselves. Let S = {1, 2, . . . , m}
denote the set of m pies. As before, each pie is of size 1. Let the
discount factor for issue c, where 1 ≤ c ≤ m, be 0 < δc ≤ 1.
For each issue, let n denote each agent"s deadline. In the offer for
time period t (where 1 ≤ t ≤ n), agent a"s (b"s) share for each of
the m issues is now represented as an m-element vector x^t ∈ B^m
(y^t ∈ B^m), where B denotes the set {0, 1}. Thus, if agent a"s share
for issue c at time t is x_c^t, then agent b"s share is y_c^t = (1 − x_c^t). The
shares for a and b are together represented as the package [x^t, y^t].
As is traditional in multi-issue utility theory, we define an agent"s
cumulative utility using the standard additive form [12]. The
functions U^a : B^m × B^m × N_+ → R and U^b : B^m × B^m × N_+ → R
give the cumulative utilities for a and b respectively at time t. These
are defined as follows:
U^a([x^t, y^t], t) = Σ_{c=1}^{m} k_c^a u_c^a(x_c^t, t) if t ≤ n, and 0 otherwise   (1)
U^b([x^t, y^t], t) = Σ_{c=1}^{m} k_c^b u_c^b(y_c^t, t) if t ≤ n, and 0 otherwise   (2)
where k^a ∈ N_+^m denotes an m-element vector of constants for
agent a and k^b ∈ N_+^m that for b. Here N_+ denotes the set of
positive integers. These vectors indicate how the agents value different
issues. For example, if k_c^a > k_{c+1}^a, then agent a values issue c
more than issue c + 1. Likewise for agent b. In other words, the m
issues are perfect substitutes (i.e., all that matters to an agent is its
total utility for all the m issues and not that for any subset of them).
In all the settings we study, the issues will be perfect substitutes.
To begin, each agent has complete information about all negotiation
parameters (i.e., n, m, k_c^a, k_c^b, and δ_c for 1 ≤ c ≤ m).
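The additive form of Equations 1 and 2 is straightforward to evaluate. The following Python sketch (our own illustration; all names are hypothetical) computes an agent's cumulative utility for a package of indivisible issues:

```python
# Illustrative sketch of Equations 1 and 2: additive cumulative utility of an
# agent for a package at time t, given per-issue weights k, per-issue discount
# factors deltas, and deadline n.
def cumulative_utility(shares, k, deltas, t, n):
    """shares[c] is the agent's share (0 or 1) of issue c."""
    if t > n:
        return 0.0
    return sum(k[c] * shares[c] * deltas[c] ** (t - 1) for c in range(len(shares)))

# Two issues, weights (3, 1), discount factor 1/2, offer accepted at t = 1
print(cumulative_utility([1, 0], k=[3, 1], deltas=[0.5, 0.5], t=1, n=2))  # -> 3.0
```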
Now, multi-issue negotiation can be done using different
procedures. Broadly speaking, there are three key procedures for
negotiating multiple issues [19]:
1. the package deal procedure where all the issues are settled
together as a bundle,
2. the sequential procedure where the issues are discussed one
after another, and
3. the simultaneous procedure where the issues are discussed in
parallel.
Between these three procedures, the package deal is known to
generate Pareto optimal outcomes [19, 6]. Hence we adopt it here. We
first give a brief description of the procedure and then determine
the equilibrium strategies for it.
3.1 The package deal procedure
In this procedure, the agents use the same protocol as for
singleissue negotiation (described in Section 2). However, an offer for the
package deal includes a proposal for each issue under negotiation.
Thus, for m issues, an offer includes m divisions, one for each
issue. Agents are allowed to either accept a complete offer (i.e., all
m issues) or reject a complete offer. An agreement can therefore
take place either on all m issues or on none of them.
As per the single-issue negotiation, an agent decides what to
offer by looking ahead and reasoning backwards. However, since an
offer for the package deal includes a share for all the m issues, the
agents can now make tradeoffs across the issues in order to
maximise their cumulative utilities.
For 1 ≤ c ≤ m, the equilibrium offer for issue c at time t is
denoted as [a_c^t, b_c^t], where a_c^t and b_c^t denote the shares for agent a
and b respectively. We denote the equilibrium package at time t
as [a^t, b^t], where a^t ∈ B^m (b^t ∈ B^m) is an m-element vector
that denotes a"s (b"s) share for each of the m issues. Also, for
1 ≤ c ≤ m, δ_c is the discount factor for issue c. The symbols 0
and 1 denote m-element vectors of zeroes and ones respectively.
Note that for 1 ≤ t ≤ n, a_c^t + b_c^t = 1 (i.e., the sum of the agents"
shares (at time t) for each pie is one). Finally, for time period t (for
1 ≤ t ≤ n) we let A(t) (respectively B(t)) denote the equilibrium
strategy for agent a (respectively b).
3.2 Equilibrium strategies
As mentioned in Section 1, the package deal allows agents to make
tradeoffs. We let TRADEOFFA (TRADEOFFB) denote agent a"s (b"s)
function for making tradeoffs. We let P denote a set of parameters
to the procedure TRADEOFFA (TRADEOFFB), where P = {k^a, k^b, δ, m}.
Given this, the following theorem characterises the equilibrium for
the package deal procedure.
THEOREM 1. For the package deal procedure, the following
strategies form a Nash equilibrium. The equilibrium strategy for
t = n is:
A(n) = { OFFER [1, 0] IF a"s TURN; ACCEPT IF b"s TURN }
B(n) = { OFFER [0, 1] IF b"s TURN; ACCEPT IF a"s TURN }
For all preceding time periods t < n, if [x^t, y^t] denotes the
offer made at time t, then the equilibrium strategies are defined as
follows:
A(t) = { OFFER TRADEOFFA(P, UB(t), t) IF a"s TURN;
         If (U^a([x^t, y^t], t) ≥ UA(t)) ACCEPT else REJECT IF b"s TURN }
B(t) = { OFFER TRADEOFFB(P, UA(t), t) IF b"s TURN;
         If (U^b([x^t, y^t], t) ≥ UB(t)) ACCEPT else REJECT IF a"s TURN }
where UA(t) = U^a([a^{t+1}, b^{t+1}], t + 1) and UB(t) = U^b([a^{t+1}, b^{t+1}], t + 1).
PROOF. We look ahead to the last time period (i.e., t = n) and
then reason backwards. To begin, if negotiation reaches the
deadline (n), then the agent whose turn it is takes everything and leaves
nothing for its opponent. Hence, we get the strategies A(n) and
B(n) as given in the statement of the theorem.
In all the preceding time periods (t < n), the offering agent
proposes a package that gives its opponent a cumulative utility equal
to what the opponent would get from its own equilibrium offer for
the next time period. During time period t, either a or b could
be the offering agent. Consider the case where a makes an offer
at t. The package that a offers at t gives b a cumulative utility
of U^b([a^{t+1}, b^{t+1}], t + 1). However, since there is more than one
issue, there is more than one package that gives b this cumulative
utility. From among these packages, a offers the one that maximises
its own cumulative utility (because it is a utility maximiser). Thus,
the problem for a is to find the package [a^t, b^t] so as to:
maximize Σ_{c=1}^{m} k_c^a (1 − b_c^t) δ_c^{t−1}   (3)
such that Σ_{c=1}^{m} b_c^t k_c^b ≥ UB(t)
          b_c^t = 0 or 1 for 1 ≤ c ≤ m
where UB(t), δ_c^{t−1}, k_c^a, and k_c^b are constants and b_c^t (1 ≤ c ≤ m)
is a variable.
Assume that the function TRADEOFFA takes parameters P, UB(t),
and t, to solve the maximisation problem given in Equation 3 and
returns the corresponding package. If there is more than one
package that solves Equation 3, then TRADEOFFA returns any one of
them (because agent a gets equal utility from all such packages
and so does agent b). The function TRADEOFFB for agent b is
analogous to that for a.
On the other hand, the equilibrium strategy for the agent that
receives an offer is as follows. For time period t, let b denote the
receiving agent. Then, b accepts [x^t, y^t] if UB(t) ≤ U^b([x^t, y^t], t),
otherwise it rejects the offer because it can get a higher utility in
the next time period. The equilibrium strategy for a as receiving
agent is defined analogously.
In this way, we reason backwards and obtain the offers for the
first time period. Thus, we get the equilibrium strategies (A(t) and
B(t)) given in the statement of the theorem.
The following example illustrates how the agents make tradeoffs
using the above equilibrium strategies.
EXAMPLE 1. Assume there are m = 2 issues for negotiation,
the deadline for both issues is n = 2, and the discount factor for
both issues for both agents is δ = 1/2. Let k_1^a = 3, k_2^a = 1,
k_1^b = 1, and k_2^b = 5. Let agent a be the first mover. By
using backward reasoning, a knows that if negotiation reaches the
second time period (which is the deadline), then b will get a
hundred percent of both the issues. This gives b a cumulative utility of
UB(2) = 1/2 + 5/2 = 3. Thus, in the first time period, if b gets
anything less than a utility of 3, it will reject a"s offer. So, at t = 1,
a offers the package where it gets issue 1 and b gets issue 2. This
gives a cumulative utility of 3 to a and 5 to b. Agent b accepts the
package and an agreement takes place in the first time period.
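The reasoning in Example 1 can also be checked mechanically. The Python sketch below (purely illustrative) enumerates the four possible packages for m = 2 indivisible issues and selects the one that maximises a"s cumulative utility subject to b receiving at least UB(2) = 3:

```python
from itertools import product

# Example 1 data: weights, common discount factor, deadline n = 2, offer at t = 1
k_a, k_b, delta, t = [3, 1], [1, 5], 0.5, 1
U_B_next = sum(w * delta for w in k_b)                # b's utility if it takes all at t = 2 -> 3.0

best = None
for x in product([0, 1], repeat=2):                   # x[c] = 1 means a keeps issue c
    u_a = sum(k_a[c] * x[c] * delta ** (t - 1) for c in range(2))
    u_b = sum(k_b[c] * (1 - x[c]) * delta ** (t - 1) for c in range(2))
    if u_b >= U_B_next and (best is None or u_a > best[1]):
        best = (x, u_a, u_b)

print(best)   # ((1, 0), 3.0, 5.0): a keeps issue 1, b gets issue 2
```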
The maximization problem in Equation 3 can be viewed as the 0-1
knapsack problem (see footnote 3). In the 0-1 knapsack problem, we have a set
of m items where each item has a profit and a weight. There is a
knapsack with a given capacity. The objective is to fill the knapsack
with items so as to maximize the cumulative profit of the items in
the knapsack. This problem is analogous to the negotiation problem
we want to solve (i.e., the maximization problem of Equation 3).
Since k_c^a and δ_c^{t−1} are constants, maximizing Σ_{c=1}^{m} k_c^a (1 − b_c^t) δ_c^{t−1}
is the same as minimizing Σ_{c=1}^{m} k_c^a b_c^t. Hence Equation 3 can be
written as:
minimize Σ_{c=1}^{m} k_c^a b_c^t   (4)
such that Σ_{c=1}^{m} b_c^t k_c^b ≥ UB(t)
          b_c^t = 0 or 1 for 1 ≤ c ≤ m
Equation 4 is a minimization version of the standard 0-1 knapsack
problem (see footnote 4), with m items where k_c^a represents the profit for item c,
k_c^b the weight for item c, and UB(t) the knapsack capacity.
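For small instances, Equation 4 can be solved exactly by dynamic programming once it is recast as a standard maximisation knapsack over the issues that a keeps. The Python sketch below is our own illustration; the name tradeoff_exact and its interface are hypothetical and should not be read as the paper"s TRADEOFFA procedure. It assumes the k^b weights are positive integers, as in the standard 0-1 knapsack problem.

```python
import math

# Equation 4 viewed as a 0-1 knapsack (illustrative sketch). Giving b at
# least UB(t) is equivalent to a keeping a set of issues whose total
# k^b-weight is at most sum(k_b) - UB(t); a then maximises the kept
# k^a-profit. Assumes integer k^b weights.
def tradeoff_exact(k_a, k_b, U_B):
    m = len(k_a)
    cap = math.floor(sum(k_b) - U_B)                  # capacity of the equivalent max-knapsack
    if cap < 0:
        return None                                   # b cannot be given utility UB(t)
    dp = [0] * (cap + 1)                              # dp[w] = best kept profit with weight <= w
    keep = [[False] * m for _ in range(cap + 1)]
    for c in range(m):
        for w in range(cap, k_b[c] - 1, -1):
            if dp[w - k_b[c]] + k_a[c] > dp[w]:
                dp[w] = dp[w - k_b[c]] + k_a[c]
                keep[w] = keep[w - k_b[c]][:]
                keep[w][c] = True
    return [0 if keep[cap][c] else 1 for c in range(m)]   # b_c^t: 1 iff issue c goes to b

print(tradeoff_exact(k_a=[3, 1], k_b=[1, 5], U_B=3))       # -> [0, 1]: a keeps issue 1
```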
Example 1 was for two issues and so it was easy to find the
equilibrium offers. But, in general, it is not computationally easy to
find the equilibrium offers of Theorem 1. The following theorem
proves this.
THEOREM 2. For the package deal procedure, the problem of
finding the equilibrium offers given in Theorem 1 is NP-hard.
PROOF. Finding the equilibrium offers given in Theorem 1
requires solving the 0-1 knapsack problem given in Equation 4. Since
the 0-1 knapsack problem is NP-hard [17], the problem of finding
equilibrium for the package deal is also NP-hard.
3.3 Approximate equilibrium
Researchers in the area of algorithms have found time efficient
methods for computing approximate solutions to 0-1 knapsack
problems [10]. Hence we use these methods to find a solution to our
negotiation problem. At this stage, we would like to point out the
main difference between solving the 0-1 knapsack problem and
solving our negotiation problem. The 0-1 knapsack problem
involves decision making by a single agent regarding which items to
place in the knapsack. On the other hand, our negotiation problem
involves two players and they are both strategic. Hence, in our case,
it is not enough to just find an approximate solution to the knapsack
problem, we must also show that such an approximation forms an
equilibrium.
The traditional approach for overcoming the computational
complexity in finding an equilibrium has been to use an approximate
equilibrium (see [14, 26] for example). In this approach, a strategy
profile is said to form an approximate Nash equilibrium if neither
agent can gain more than the constant ε by deviating. Hence, our
aim is to use the solution to the 0-1 knapsack problem proposed
in [10] and show that it forms an approximate equilibrium to our
negotiation problem. Before doing so, we give a brief overview of
the key ideas that underlie approximation algorithms.
There are two key issues in the design of approximate algorithms
[1]:
(Footnote 3: Note that for the case of divisible issues this is the fractional
knapsack problem. The fractional knapsack problem is computationally
easy; it can be solved in time polynomial in the number of items in
the knapsack problem [17]. In contrast, the 0-1 knapsack problem
is computationally hard.)
(Footnote 4: Note that for the standard 0-1 knapsack problem the weights,
profits and the capacity are positive integers. However, a 0-1 knapsack
problem with fractions and non-positive values can easily be
transformed to one with positive integers in time linear in m using the
methods given in [8, 17].)
1. the quality of their solution, and
2. the time taken to compute the approximation.
The quality of an approximate algorithm is determined by
comparing its performance to that of the optimal algorithm and measuring
the relative error [3, 1]. The relative error is defined as (z − z∗)/z∗,
where z is the approximate solution and z∗ the optimal one. In
general, we are interested in finding approximate algorithms whose
relative error is bounded from above by a certain constant ε, i.e.,
(z − z∗)/z∗ ≤ ε   (5)
Regarding the second issue of time complexity, we are interested in
finding fully polynomial approximation algorithms. An
approximation algorithm is said to be fully polynomial if for any ε > 0 it finds
a solution satisfying Equation 5 in time polynomially bounded by the
size of the problem (for the 0-1 knapsack problem, the problem size
is equal to the number of items) and by 1/ε [1].
For the 0-1 knapsack problem, Ibarra and Kim [10] presented a
fully polynomial approximation method. This method is based on
dynamic programming. It is a parametric method that takes ε as a
parameter and, for any ε > 0, finds a heuristic solution z with
relative error at most ε, such that the time and space complexity grow
polynomially with the number of items m and 1/ε. More
specifically, the space and time complexity are both O(m/ε²) and hence
polynomial in m and 1/ε (see [10] for the detailed approximation
algorithm and proof of time and space complexity).
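To convey the idea of a fully polynomial approximation scheme without reproducing the Ibarra and Kim construction, the Python sketch below implements a simpler, standard profit-scaling scheme for the maximisation 0-1 knapsack: profits are rounded down to multiples of a factor determined by ε and an exact dynamic programme is run over the scaled profits. Its running time is polynomial in m and 1/ε, though noticeably worse than the O(m/ε²) bound cited above; it only illustrates the accuracy-for-speed trade-off. All names are our own.

```python
# Simple profit-scaling approximation for the 0-1 maximisation knapsack
# (illustrative; not the Ibarra and Kim algorithm). min_weight[p] is the
# minimum weight needed to collect scaled profit exactly p.
def knapsack_approx(profits, weights, capacity, eps):
    m, p_max = len(profits), max(profits)
    K = eps * p_max / m if eps > 0 else 1
    scaled = [int(p // K) for p in profits]
    P = sum(scaled)
    INF = float("inf")
    min_weight = [0] + [INF] * P
    best_set = [set() for _ in range(P + 1)]
    for c in range(m):
        for p in range(P, scaled[c] - 1, -1):
            if min_weight[p - scaled[c]] + weights[c] < min_weight[p]:
                min_weight[p] = min_weight[p - scaled[c]] + weights[c]
                best_set[p] = best_set[p - scaled[c]] | {c}
    best_p = max(p for p in range(P + 1) if min_weight[p] <= capacity)
    return best_set[best_p]       # chosen items; close to optimal under the usual analysis

print(knapsack_approx(profits=[3, 1], weights=[1, 5], capacity=3, eps=0.1))  # e.g. {0}
```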
Since the Ibarra and Kim method is fully polynomial, we use it to
solve our negotiation problem. This is done as follows. For agent
a, let APRX-TRADEOFFA(P, UB(t), t, ε) denote a procedure that
returns an approximate solution to Equation 4 using the Ibarra and
Kim method. The procedure APRX-TRADEOFFB(P, UA(t), t, ε) for
agent b is analogous.
For 1 ≤ c ≤ m, the approximate equilibrium offer for issue c
at time t is denoted as [¯a_c^t, ¯b_c^t], where ¯a_c^t and ¯b_c^t denote the shares
for agent a and b respectively. We denote the equilibrium package
at time t as [¯a^t, ¯b^t], where ¯a^t ∈ B^m (¯b^t ∈ B^m) is an m-element
vector that denotes a"s (b"s) share for each of the m issues. Also,
as before, for 1 ≤ c ≤ m, δ_c is the discount factor for issue c.
Note that for 1 ≤ t ≤ n, ¯a_c^t + ¯b_c^t = 1 (i.e., the sum of the agents"
shares (at time t) for each pie is one). Finally, for time period t (for
1 ≤ t ≤ n) we let ¯A(t) (respectively ¯B(t)) denote the approximate
equilibrium strategy for agent a (respectively b). The following
theorem uses this notation and characterizes an approximate
equilibrium for multi-issue negotiation.
THEOREM 3. For the package deal procedure, the following
strategies form an approximate Nash equilibrium. The
equilibrium strategy for t = n is:
¯A(n) = { OFFER [1, 0] IF a"s TURN; ACCEPT IF b"s TURN }
¯B(n) = { OFFER [0, 1] IF b"s TURN; ACCEPT IF a"s TURN }
For all preceding time periods t < n, if [x^t, y^t] denotes the
offer made at time t, then the equilibrium strategies are defined as
follows:
¯A(t) = { OFFER APRX-TRADEOFFA(P, UB(t), t, ε) IF a"s TURN;
          If (U^a([x^t, y^t], t) ≥ UA(t)) ACCEPT else REJECT IF b"s TURN }
¯B(t) = { OFFER APRX-TRADEOFFB(P, UA(t), t, ε) IF b"s TURN;
          If (U^b([x^t, y^t], t) ≥ UB(t)) ACCEPT else REJECT IF a"s TURN }
where UA(t) = U^a([¯a^{t+1}, ¯b^{t+1}], t + 1) and UB(t) = U^b([¯a^{t+1}, ¯b^{t+1}], t + 1).
An agreement takes place at t = 1.
PROOF. As in the proof for Theorem 1, we use backward
reasoning. We first obtain the strategies for the last time period t = n.
It is straightforward to get these strategies; the offering agent gets
a hundred percent of all the issues.
Then for t = n − 1, the offering agent must solve the
maximization problem of Equation 4 by substituting t = n−1 in it. For agent
a (b), this is done by APPROX-TRADEOFFA (APPROX-TRADEOFFB).
These two functions are nothing but the Ibarra and Kim"s
approximation method for solving the 0-1 knapsack problem. These two
functions take ε as a parameter and use the Ibarra and Kim"s
approximation method to return a package that approximately
maximizes Equation 4. Thus, the relative error for these two functions
is the same as that for Ibarra and Kim"s method (i.e., it is at most ε,
where ε is given in Equation 5).
Assume that a is the offering agent for t = n − 1. Agent a must
offer a package that gives b a cumulative utility equal to what it
would get from its own approximate equilibrium offer for the next
time period (i.e., U^b([¯a^{t+1}, ¯b^{t+1}], t + 1), where [¯a^{t+1}, ¯b^{t+1}] is the
approximate equilibrium package for the next time period). Recall
that for the last time period, the offering agent gets a hundred
percent of all the issues. Since a is the offering agent for t = n − 1
and the agents use the alternating offers protocol, it is b"s turn at
t = n. Thus U^b([¯a^{t+1}, ¯b^{t+1}], t + 1) is equal to b"s cumulative
utility from receiving a hundred percent of all the issues. Using this
utility as the capacity of the knapsack, a uses APPROX-TRADEOFFA
and obtains the approximate equilibrium package for t = n − 1.
On the other hand, if b is the offering agent at t = n − 1, it uses
APPROX-TRADEOFFB to obtain the approximate equilibrium
package.
In the same way for t < n − 1, the offering agent (say a) uses
APPROX-TRADEOFFA to find an approximate equilibrium package
that gives b a utility of U^b([¯a^{t+1}, ¯b^{t+1}], t + 1). By reasoning
backwards, we obtain the offer for time period t = 1. If a (b) is the
offering agent, it proposes the offer APPROX-TRADEOFFA(P, UB(1), 1, ε)
(APPROX-TRADEOFFB(P, UA(1), 1, ε)). The receiving agent
accepts the offer. This is because the relative error in its cumulative
utility from the offer is at most ε. An agreement therefore takes
place in the first time period.
THEOREM 4. The time complexity of finding the approximate
equilibrium offer for the first time period is O(nm/ε²).
PROOF. The time complexity of APPROX-TRADEOFFA and
APPROX-TRADEOFFB is the same as the time complexity of the Ibarra and
Kim method [10], i.e., O(m/ε²). In order to find the equilibrium
offer for the first time period using backward reasoning,
APPROX-TRADEOFFA (or APPROX-TRADEOFFB) is invoked n times. Hence
the time complexity of finding the approximate equilibrium offer
for the first time period is O(nm/ε²).
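The structure of this backward reasoning can be summarised in a few lines of Python. In the sketch below (our own; approx_tradeoff is a stand-in for the APPROX-TRADEOFF procedures rather than their actual implementation, and is assumed to return the proposer"s keep-vector while guaranteeing the opponent roughly the required utility), the tradeoff routine is invoked once per time period, which is where the factor n in the O(nm/ε²) bound comes from:

```python
# Backward-induction loop behind Theorem 4 (illustrative sketch).
def first_period_offer(k_a, k_b, deltas, n, eps, approx_tradeoff):
    m = len(k_a)
    proposer = "a" if n % 2 == 1 else "b"              # a moves at t = 1; offers alternate
    package = [1] * m if proposer == "a" else [0] * m  # package[c] = a's share at the deadline
    for t in range(n - 1, 0, -1):                      # reason backwards from t = n - 1 to t = 1
        proposer = "b" if proposer == "a" else "a"
        if proposer == "a":
            # utility b expects from the period t+1 package (discount exponent (t+1) - 1 = t)
            U_b = sum(k_b[c] * (1 - package[c]) * deltas[c] ** t for c in range(m))
            package = approx_tradeoff(k_a, k_b, deltas, t, U_b, eps)
        else:
            U_a = sum(k_a[c] * package[c] * deltas[c] ** t for c in range(m))
            b_keeps = approx_tradeoff(k_b, k_a, deltas, t, U_a, eps)
            package = [1 - s for s in b_keeps]          # convert b's keep-vector to a's shares
    return package                                      # offer proposed (and accepted) at t = 1
```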
This analysis was done in a complete information setting.
However an extension of this analysis to an incomplete information
setting where the agents have probability distributions over some
uncertain parameter is straightforward, as long as the negotiation is
done offline; i.e., the agents know their preference for each
individual issue before negotiation begins. For instance, consider the case
where different agents have different discount factors, and each
agent is uncertain about its opponent"s discount factor although it
knows its own. This uncertainty is modelled with a probability
distribution over the possible values for the opponent"s discount factor
and having this distribution as common knowledge to the agents.
All our analysis for the complete information setting still holds for
this incomplete information setting, except for the fact that an agent
must now use the given probability distribution and find its
opponent"s expected utility instead of its actual utility. Hence, instead of
analyzing an incomplete information setting for offline negotiation,
we focus on online multi-issue negotiation.
4. ONLINE MULTI-ISSUE NEGOTIATION
We now consider a more general and, arguably more realistic,
version of multi-issue negotiation, where the agents are uncertain about
the issues they will have to negotiate about in future. In this setting,
when negotiating an issue, the agents know that they will negotiate
more issues in the future, but they are uncertain about the details of
those issues. As before, let m be the total number of issues that are
up for negotiation. The agents have a probability distribution over
the possible values of k_c^a and k_c^b. For 1 ≤ c ≤ m, let k_c^a and k_c^b be
uniformly distributed over [0,1]. This probability distribution, n,
and m are common knowledge to the agents. However, the agents
come to know k_c^a and k_c^b only just before negotiation for issue c
begins. Once the agents reach an agreement on issue c, it cannot be
re-negotiated.
This scenario requires online negotiation since the agents must
make decisions about an issue prior to having the information about
the future issues [3]. We first give a brief introduction to online
problems and then draw an analogy between the online knapsack
problem and the negotiation problem we want to solve.
In an online problem, data is given to the algorithm
incrementally, one unit at a time [3]. The online algorithm must also
produce the output incrementally: after seeing i units of input it must
output the ith unit of output. Since decisions about the output are
made with incomplete knowledge about the entire input, an
online algorithm often cannot produce an optimal solution. Such an
algorithm can only approximate the performance of the optimal
algorithm that sees all the inputs in advance. In the design of online
algorithms, the main aim is to achieve a performance that is close
to that of the optimal offline algorithm on each input. An online
algorithm is said to be stochastic if it makes decisions on the basis of
the probability distributions for the future inputs. The performance
of stochastic online algorithms is assessed in terms of the expected
difference between the optimum and the approximate solution
(denoted E[z∗_m − z_m], where z∗_m is the optimal and z_m the approximate
solution). Note that the subscript m is used to indicate the fact that
this difference depends on m.
We now describe the protocol for online negotiation and then
obtain an approximate equilibrium. The protocol is defined as
follows. Let agent a denote the first mover (since we focus on the
package deal procedure, the first mover is the same for all the m
issues).
Step 1. For c = 1, the agents are given the values of k_c^a and k_c^b.
These two values are now common knowledge (see footnote 5).
Step 2. The agents settle issue c using the alternating offers
protocol described in Section 2. Negotiation for issue c must end
within n time periods from the start of negotiation on the
issue. If an agreement is not reached within this time, then
negotiation fails on this and on all remaining issues.
Step 3. The above steps are repeated for issues c = 2, 3, . . . , m.
Negotiation for issue c (2 ≤ c ≤ m) begins in the time
period following an agreement on issue c − 1.
(Footnote 5: We assume common knowledge because it simplifies exposition.
However, if k_c^a (k_c^b) is a"s (b"s) private knowledge, then our analysis
will still hold but now an agent must find its opponent"s expected
utility on the basis of the p.d.f.s for k_c^a and k_c^b.)
Thus, during time period t, the problem for the offering agent (say
a) is to find the optimal offer for issue c on the basis of k_c^a and
k_c^b and the probability distributions for k_i^a and k_i^b (c < i ≤ m).
In order to solve this online negotiation problem we draw an analogy
with the online knapsack problem. Before doing so, however, we
give a brief overview of the online knapsack problem.
In the online knapsack problem, there are m items. The agent
must examine the m items one at a time according to the order they
are input (i.e., as their profit and size coefficients become known).
Hence, the algorithm is required to decide whether or not to
include each item in the knapsack as soon as its weight and profit
become known, without knowledge concerning the items still to be
seen, except for their total number. Note that since the agents have
a probability distribution over the weights and profits of the future
items, this is a case of stochastic online knapsack problem. Our
online negotiation problem is analogous to the online knapsack
problem. This analogy is described in detail in the proof for Theorem 5.
Again, researchers in algorithms have developed time efficient
approximate solutions to the online knapsack problem [16]. Hence
we use this solution and show that it forms an equilibrium.
The following theorem characterizes an approximate equilibrium
for online negotiation. Here the agents have to choose a
strategy without knowing the features of the future issues. Because of
this information incompleteness, the relevant equilibrium solution
is that of a Bayes" Nash Equilibrium (BNE) in which each agent
plays the best response to the other agents with respect to their
expected utilities [18]. However, finding an agent"s BNE strategy is
analogous to solving the online 0-1 knapsack problem. Also, the
online knapsack can only be solved approximately [16]. Hence
the relevant equilibrium solution concept is approximate BNE (see
[26] for example). The following theorem finds this equilibrium
using procedures ONLINE-TRADEOFFA and ONLINE-TRADEOFFB
which are defined in the proof of the theorem. For a given time
period, we let z_m denote the approximately optimal solution
generated by ONLINE-TRADEOFFA (or ONLINE-TRADEOFFB) and z∗_m
the actual optimum.
THEOREM 5. For the package deal procedure, the following
strategies form an approximate Bayes" Nash equilibrium. The
equilibrium strategy for t = n is:
A(n) = { OFFER [1, 0] IF a"s TURN; ACCEPT IF b"s TURN }
B(n) = { OFFER [0, 1] IF b"s TURN; ACCEPT IF a"s TURN }
For all preceding time periods t < n, if [x^t, y^t] denotes the
offer made at time t, then the equilibrium strategies are defined as
follows:
A(t) = { OFFER ONLINE-TRADEOFFA(P, UB(t), t) IF a"s TURN;
         If (U^a([x^t, y^t], t) ≥ UA(t)) ACCEPT else REJECT IF b"s TURN }
B(t) = { OFFER ONLINE-TRADEOFFB(P, UA(t), t) IF b"s TURN;
         If (U^b([x^t, y^t], t) ≥ UB(t)) ACCEPT else REJECT IF a"s TURN }
where UA(t) = U^a([¯a^{t+1}, ¯b^{t+1}], t + 1) and UB(t) = U^b([¯a^{t+1}, ¯b^{t+1}], t + 1).
An agreement on issue c takes place at t = c. For a
given time period, the expected difference between the solution
generated by the optimal strategy and that by the approximate strategy
is E[z∗_m − z_m] = O(√m).
PROOF. As in Theorem 1 we find the equilibrium offer for time
period t = 1 using backward induction. Let a be the offering agent
for t = 1 for all the m issues. Consider the last time period t = n
(recall from Step 2 of the online protocol that n is the deadline for
completing negotiation on the first issue). Since the first mover is
the same for all the issues, and the agents make offers alternately,
the offering agent for t = n is also the same for all the m issues.
Assume that b is the offering agent for t = n. As in Section 3,
the offering agent for t = n gets a hundred percent of all the m
issues. Since b is the offering agent for t = n, his utility for this
time period is:
UB(n) = k_1^b δ_1^{n−1} + (1/2) Σ_{i=2}^{m} δ_i^{i(n−1)}   (6)
Recall that k_i^a and k_i^b (for c < i ≤ m) are not known to the
agents. Hence, the agents can only find their expected utilities from
the future issues on the basis of the probability distribution
functions for k_i^a and k_i^b. However, during the negotiation for issue c
the agents know k_c^a but not k_c^b (see Step 1 of the online protocol).
Hence, a computes UB(n) as follows. Agent b"s utility from issue
c = 1 is k_1^b δ_1^{n−1} (which is the first term of Equation 6). Then,
on the basis of the probability distribution functions for k_i^a and
k_i^b, agent a computes b"s expected utility from each future issue i
as δ_i^{i(n−1)}/2 (since k_i^a and k_i^b are uniformly distributed on [0, 1]).
Thus, b"s expected cumulative utility from these m − c issues is
(1/2) Σ_{i=2}^{m} δ_i^{i(n−1)} (which is the second term of Equation 6).
Now, in order to decide what to offer for issue c = 1, the offering
agent for t = n − 1 (i.e., agent a) must solve the following online
knapsack problem:
maximize Σ_{i=1}^{m} k_i^a (1 − ¯b_i^t) δ_i^{n−1}   (7)
such that Σ_{i=1}^{m} k_i^b ¯b_i^t ≥ UB(n)
          ¯b_i^t = 0 or 1 for 1 ≤ i ≤ m
The only variables in the above maximization problem are ¯b_i^t. Now,
maximizing Σ_{i=1}^{m} k_i^a (1 − ¯b_i^t) δ_i^{n−1} is the same as minimizing Σ_{i=1}^{m} k_i^a ¯b_i^t,
since δ_i^{n−1} and k_i^a are constants. Thus, we write Equation 7 as:
minimize Σ_{i=1}^{m} k_i^a ¯b_i^t   (8)
such that Σ_{i=1}^{m} k_i^b ¯b_i^t ≥ UB(n)
          ¯b_i^t = 0 or 1 for 1 ≤ i ≤ m
The above optimization problem is analogous to the online 0-1
knapsack problem. An algorithm to solve the online knapsack
problem has already been proposed in [16]. This algorithm is called the
fixed-choice online algorithm. It has time complexity linear in the
number of items (m) in the knapsack problem. We use this to solve
our online negotiation problem. Thus, our ONLINE-TRADEOFFA
algorithm is nothing but the fixed-choice online algorithm and
therefore has the same time complexity as the latter. This algorithm takes
the values of k_i^a and k_i^b one at a time and generates an approximate
solution to the above knapsack problem. The expected difference
between the optimum and approximate solution is E[z∗_m − z_m] =
O(√m) [16] (see [16] for the detailed fixed-choice online
algorithm and a proof for E[z∗_m − z_m] = O(√m)).
The fixed-choice online algorithm of [16] is a generalization of
the basic greedy algorithm for the offline knapsack problem; the
idea behind it is as follows. A threshold value is determined on the
basis of the information regarding weights and profits for the 0-1
knapsack problem. The method then includes into the knapsack all
items whose profit density (profit density of an item is its profit per
unit weight) exceeds the threshold until either the knapsack is filled
or all the m items have been considered.
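A minimal version of such a threshold rule is sketched below in Python. This is not the fixed-choice online algorithm of [16] itself; in particular, the threshold here is an assumed constant rather than one derived from the distributional information, and all names are illustrative.

```python
# Threshold rule for the online 0-1 knapsack (illustrative sketch). Items
# arrive one at a time; an item is accepted if its profit density exceeds
# the threshold and it still fits in the remaining capacity.
def threshold_online_knapsack(items, capacity, density_threshold):
    """items is an iterable of (profit, weight) pairs revealed online."""
    chosen, used = [], 0.0
    for idx, (profit, weight) in enumerate(items):
        if weight <= capacity - used and profit / weight >= density_threshold:
            chosen.append(idx)
            used += weight
    return chosen

print(threshold_online_knapsack([(3, 1), (1, 5), (2, 2)], capacity=4, density_threshold=1.0))
# -> [0, 2]
```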
In more detail, the algorithm ONLINE-TRADEOFFA works as
follows. It first gets the values of k_1^a and k_1^b and finds ¯b_c^t. Since we
have a 0-1 knapsack problem, ¯b_c^t can be either zero or one. Now, if
¯b_c^t = 1 for t = n, then ¯b_c^t must be one for 1 ≤ t < n (i.e., a must
offer ¯b_c^t = 1 at t = 1). If ¯b_c^t = 1 for t = n, but a offers ¯b_c^t = 0
at t = 1, then agent b gets less utility than what it expects from a"s
offer and rejects the proposal. Thus, if ¯b_c^t = 1 for t = n, then the
optimal strategy for a is to offer ¯b_c^t = 1 at t = 1. Agent b accepts
the offer. Thus, negotiation on the first issue starts at t = 1 and an
agreement on it is also reached at t = 1.
In the next time period (i.e., t = 2), negotiation proceeds to the
next issue. The deadline for the second issue is n time periods from
the start of negotiation on the issue. For c = 2, the algorithm
ONLINE-TRADEOFFA is given the values of k_2^a and k_2^b and finds ¯b_c^t
as described above. Agent a offers ¯b_c at t = 2 and b accepts. Thus,
negotiation on the second issue starts at t = 2 and an agreement
on it is also reached at t = 2.
This process repeats for the remaining issues c = 3, . . . , m.
Thus, each issue is agreed upon in the same time period in which
it starts. As negotiation for the next issue starts in the following
time period (see step 3 of the online protocol), agreement on issue
i occurs at time t = i.
On the other hand, if b is the offering agent at t = 1, he uses
the algorithm ONLINE-TRADEOFFB which is defined analogously.
Thus, irrespective of who makes the first move, all the m issues are
settled at time t = m.
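The issue-by-issue flow just described can be summarised by the following Python sketch (our own simplification; decide_share stands in for whatever online decision rule, for instance a fixed-choice style threshold, the proposer uses for each issue):

```python
# Online package-deal negotiation loop (illustrative sketch): issues arrive
# one per time period and issue c is settled at time t = c.
def online_package_negotiation(issue_stream, decide_share):
    """issue_stream yields (k_a_c, k_b_c); decide_share returns 1 if issue c
    is allocated to b and 0 if a keeps it."""
    agreements = []
    for c, (k_a_c, k_b_c) in enumerate(issue_stream, start=1):
        agreements.append({"issue": c, "settled_at": c,
                           "b_share": decide_share(k_a_c, k_b_c)})
    return agreements

# Example rule: give b every issue it values at least as much as a does
rule = lambda k_a_c, k_b_c: 1 if k_b_c >= k_a_c else 0
print(online_package_negotiation([(0.3, 0.9), (0.8, 0.2)], rule))
```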
THEOREM 6. The time complexity of finding the approximate
equilibrium offers of Theorem 5 is linear in m.
PROOF. The time complexity of ONLINE-TRADEOFFA and
ONLINE-TRADEOFFB is the same as the time complexity of the fixed-choice
online algorithm of [16]. Since the latter has time complexity linear
in m, the time complexity of ONLINE-TRADEOFFA and
ONLINE-TRADEOFFB is also linear in m.
It is worth noting that, for the 0-1 knapsack problem, the lower
bound on the expected difference between the optimum and the
solution found by any online algorithm is Ω(1) [16]. Thus, it follows
that this lower bound also holds for our negotiation problem.
5. RELATED WORK
Work on multi-issue negotiation can be divided into two main types:
that for indivisible issues and that for divisible issues. We first
describe the existing work for the case of divisible issues. Since
Schelling [24] first noted that the outcome of negotiation depends
on the choice of negotiation procedure, much research effort has
been devoted to the study of different procedures for negotiating
multiple issues. However, most of this work has focussed on the
sequential procedure [7, 2]. For this procedure, a key issue is the
negotiation agenda. Here the term agenda refers to the order in which
the issues are negotiated. The agenda is important because each
agent"s cumulative utility depends on the agenda; if we change the
agenda then these utilities change. Hence, the agents must decide
what agenda they will use. Now, the agenda can be decided before
negotiating the issues (such an agenda is called exogenous) or it
may be decided during the process of negotiation (such an agenda
is called endogenous). For instance, Fershtman [7] analyzes
sequential negotiation with an exogenous agenda. A number of researchers
have also studied negotiations with an endogenous agenda [2].
In contrast to the above work that mainly deals with sequential
negotiation, [6] studies the equilibrium for the package deal
procedure. However, all the above mentioned work differs from ours in
that we focus on indivisible issues while others focus on the case
where each issue is divisible. Specifically, no previous work has
determined an approximate equilibrium for multi-issue negotiation
or for online negotiation.
Existing work for the case of indivisible issues has mostly dealt
with task allocation problems (for tasks that cannot be partitioned)
to a group of agents. The problem of task allocation has been
previously studied in the context of coalitions involving more than
two agents. For example [25] analyze the problem for the case
where the agents act so as to maximize the benefit of the system
as a whole. In contrast, our focus is on two agents where both of
them are self-interested and want to maximize their individual
utilities. On the other hand [22] focus on the use of contracts for task
allocation to multiple self interested agents but this work concerns
finding ways of decommitting contracts (after the initial allocation
has been done) so as to improve an agent"s utility. In contrast, our
focus is on negotiation regarding who will carry out which task.
Finally, online and approximate mechanisms have been studied
in the context of auctions [14, 9] but not for bilateral negotiations
(which is the focus of our work).
6. CONCLUSIONS
This paper has studied bilateral multi-issue negotiation between
self-interested autonomous agents with time constraints. The issues
are indivisible and different agents value different issues
differently. Thus, the problem is for the agents to decide how to allocate
the issues between themselves so as to maximize their individual
utilities. Specifically, we first showed that finding the equilibrium
offers is an NP-hard problem even in a complete information
setting. We then presented approximately optimal negotiation
strategies and showed that they form an equilibrium. These strategies
have polynomial time complexity. We also analysed the difference
between the true optimum and the approximate optimum. Finally,
we extended the analysis to online negotiation where the issues
become available at different time points and the agents are uncertain
about the features of these issues. Specifically, we showed that an
approximate equilibrium exists for online negotiation and analysed
the approximation error. These approximate strategies also have
polynomial time complexity.
There are several interesting directions for future work. First,
for online negotiation, we assumed that the constants k_c^a and k_c^b are
both uniformly distributed. It will be interesting to analyze the case
where k_c^a and k_c^b have other, possibly different, probability
distributions. Apart from this, we treated the number of issues as being
common knowledge to the agents. In future, it will be interesting
to treat the number of issues as uncertain.
7. REFERENCES
[1] G. Ausiello, P. Crescenzi, G. Gambosi, V. Kann,
A. Marchetti-Spaccamela, and M. Protasi. Complexity and
approximation: Combinatorial optimization problems and
their approximability properties. Springer, 2003.
[2] M. Bac and H. Raff. Issue-by-issue negotiations: the role of
information and time preference. Games and Economic
Behavior, 13:125-134, 1996.
[3] A. Borodin and R. El-Yaniv. Online Computation and
Competitive Analysis. Cambridge University Press, 1998.
[4] S. J. Brams. Fair division: from cake cutting to dispute
resolution. Cambridge University Press, 1996.
[5] L. A. Busch and I. J. Horstman. Bargaining frictions,
bargaining procedures and implied costs in multiple-issue
bargaining. Economica, 64:669-680, 1997.
[6] S. S. Fatima, M. Wooldridge, and N. R. Jennings.
Multi-issue negotiation with deadlines. Journal of Artificial
Intelligence Research, 27:381-417, 2006.
[7] C. Fershtman. The importance of the agenda in bargaining.
Games and Economic Behavior, 2:224-238, 1990.
[8] F. Glover. A multiphase dual algorithm for the zero-one
integer programming problem. Operations Research,
13:879-919, 1965.
[9] M. T. Hajiaghayi, R. Kleinberg, and D. C. Parkes. Adaptive
limited-supply online auctions. In ACM Conference on
Electronic Commerce (ACMEC-04), pages 71-80, New
York, 2004.
[10] O. H. Ibarra and C. E. Kim. Fast approximation algorithms
for the knapsack and sum of subset problems. Journal of
ACM, 22:463-468, 1975.
[11] R. Inderst. Multi-issue bargaining with endogenous agenda.
Games and Economic Behavior, 30:64-82, 2000.
[12] R. Keeney and H. Raiffa. Decisions with Multiple
Objectives: Preferences and Value Trade-offs. New York:
John Wiley, 1976.
[13] S. Kraus. Strategic negotiation in multi-agent environments.
The MIT Press, Cambridge, Massachusetts, 2001.
[14] D. Lehman, L. I. O"Callaghan, and Y. Shoham. Truth
revelation in approximately efficient combinatorial auctions.
Journal of the ACM, 49(5):577-602, 2002.
[15] A. Lomuscio, M. Wooldridge, and N. R. Jennings. A
classification scheme for negotiation in electronic commerce.
International Journal of Group Decision and Negotiation,
12(1):31-56, 2003.
[16] A. Marchetti-Spaccamela and C. Vercellis. Stochastic online
knapsack problems. Mathematical Programming,
68:73-104, 1995.
[17] S. Martello and P. Toth. Knapsack problems: Algorithms and
computer implementations. John Wiley and Sons, 1990.
[18] M. J. Osborne and A. Rubinstein. A Course in Game Theory.
The MIT Press, 1994.
[19] H. Raiffa. The Art and Science of Negotiation. Harvard
University Press, Cambridge, USA, 1982.
[20] J. S. Rosenschein and G. Zlotkin. Rules of Encounter. MIT
Press, 1994.
[21] A. Rubinstein. Perfect equilibrium in a bargaining model.
Econometrica, 50(1):97-109, January 1982.
[22] T. Sandholm and V. Lesser. Levelled commitment contracts
and strategic breach. Games and Economic Behavior:
Special Issue on AI and Economics, 35:212-270, 2001.
[23] T. Sandholm and N. Vulkan. Bargaining with deadlines. In
AAAI-99, pages 44-51, Orlando, FL, 1999.
[24] T. C. Schelling. An essay on bargaining. American Economic
Review, 46:281-306, 1956.
[25] O. Shehory and S. Kraus. Methods for task allocation via
agent coalition formation. Artificial Intelligence Journal,
101(1-2):165-200, 1998.
[26] S. Singh, V. Soni, and M. Wellman. Computing approximate
Bayes Nash equilibria in tree games of incomplete
information. In Proceedings of the ACM Conference on
Electronic Commerce ACM-EC, pages 81-90, New York,
May 2004.
[27] I. Stahl. Bargaining Theory. Economics Research Institute,
Stockholm School of Economics, Stockholm, 1972.
958 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) | gain from cooperation;disputing agent;protocol;approximation;relative error;online computation;indivisible issue;game-theory;equilibrium;negotiation;strategy;key form of interaction;multiagent system;time constraint;interaction key form |
train_I-55 | Searching for Joint Gains in Automated Negotiations Based on Multi-criteria Decision Making Theory | It is well established by conflict theorists and others that successful negotiation should incorporate creating value as well as claiming value. Joint improvements that bring benefits to all parties can be realised by (i) identifying attributes that are not of direct conflict between the parties, (ii) tradeoffs on attributes that are valued differently by different parties, and (iii) searching for values within attributes that could bring more gains to one party while not incurring too much loss on the other party. In this paper we propose an approach for maximising joint gains in automated negotiations by formulating the negotiation problem as a multi-criteria decision making problem and taking advantage of several optimisation techniques introduced by operations researchers and conflict theorists. We use a mediator to protect the negotiating parties from unnecessary disclosure of information to their opponent, while also allowing an objective calculation of maximum joint gains. We separate out attributes that take a finite set of values (simple attributes) from those with continuous values, and we show that for simple attributes, the mediator can determine the Pareto-optimal values. In addition we show that if none of the simple attributes strongly dominates the other simple attributes, then truth telling is an equilibrium strategy for negotiators during the optimisation of simple attributes. We also describe an approach for improving joint gains on non-simple attributes, by moving the parties in a series of steps, towards the Pareto-optimal frontier. | 1. INTRODUCTION
Given that negotiation is perhaps one of the oldest activities in the history of human communication, it is perhaps surprising that experiments on negotiation have shown that negotiators more often than not reach inefficient compromises [1, 21]. Raiffa [17] and Sebenius [20] provide analyses of the negotiators' failure to achieve efficient agreements in practice and of their unwillingness
to disclose private information due to strategic reasons. According
to conflict theorists Lax and Sebenius [13], most negotiation
actually involves both integrative and distributive bargaining which
they refer to as creating value and claiming value. They argue
that negotiation necessarily includes both cooperative and
competitive elements, and that these elements exist in tension. Negotiators
face a dilemma in deciding whether to pursue a cooperative or a
competitive strategy at a particular time during a negotiation. They
refer to this problem as the Negotiator's Dilemma.
We argue that the Negotiator's Dilemma is essentially information-based, due to the private information held by the agents. Such
private information contains both the information that implies the
agent"s bottom lines (or, her walk-away positions) and the
information that enforces her bargaining strength. For instance, when
bargaining to sell a house to a potential buyer, the seller would
try to hide her actual reserve price as much as possible for she
hopes to reach an agreement at a much higher price than her
reserve price. On the other hand, the outside options available to her (e.g. other buyers who have expressed genuine interest with fairly good offers) constitute the information that improves her bargaining strength, which she would like to convey to her opponent.
But at the same time, her opponent is well aware that she has an incentive to boost her bargaining strength, and thus will not accept any information she sends out unless it is substantiated by evidence.
Coming back to the Negotiator's Dilemma, it is not always possible to separate the integrative bargaining process from the distributive bargaining process. In fact, more often than not, the two processes interplay with each other, making information manipulation part of the integrative bargaining process. This is because a negotiator could use the information about his opponent's interests against her during the distributive negotiation process. That is, a negotiator may refuse to concede on an important conflicting issue by claiming that he has made a major concession (on another issue) to meet his opponent's interests, even though the concession he made could be insignificant to him. For instance, few buyers would
start bargaining with a dealer over a notebook computer by declaring that they are most interested in an extended warranty for the item and are therefore prepared to pay a high price to get such an extended warranty.
Negotiation Support Systems (NSSs) and negotiating software
agents (NSAs) have been introduced either to assist humans in
making decisions or to enable automated negotiation to allow
computer processes to engage in meaningful negotiation to reach
agreements (see, for instance, [14, 15, 19, 6, 5]). However, because of
the Negotiator"s Dilemma and given even bargaining power and
incomplete information, the following two undesirable situations
often arise: (i) negotiators reach inefficient compromises, or (ii)
negotiators engage in a deadlock situation in which both
negotiators refuse to act upon with incomplete information and at the same
time do not want to disclose more information.
In this paper, we argue for the role of a mediator to resolve the
above two issues. The mediator thus plays two roles in a
negotiation: (i) to encourage cooperative behaviour among the negotiators,
and (ii) to absorb the information disclosure by the negotiators to
prevent negotiators from using uncertainty and private information
as a strategic device. To take advantage of existing results in
negotiation analysis and operations research (OR) literatures [18], we
employ multi-criteria decision making (MCDM) theory to allow
the negotiation problem to be represented and analysed. Section 2
provides background on MCDM theory and the negotiation
framework. Section 3 formulates the problem. In Section 4, we discuss
our approach to integrative negotiation. Section 5 discusses the
future work with some concluding remarks.
2. BACKGROUND
2.1 Multi-criteria decision making theory
Let A denote the set of feasible alternatives available to a
decision maker M. Since an act, or decision, a ∈ A may involve multiple aspects, we usually describe the alternatives a with a set of attributes j (j = 1, . . . , m). (Attributes are also referred to as
issues, or decision variables.) A typical decision maker also has
several objectives X1, . . . , Xk. We assume that Xi, (i = 1, . . . , k),
maps the alternatives to real numbers. Thus, a tuple (x1, . . . , xk) =
(X1(a), . . . , Xk(a)) denotes the consequence of the act a to the
decision maker M. By definition, objectives are statements that
delineate the desires of a decision maker. Thus, M wishes to
maximise his objectives. However, as discussed thoroughly by Keeney
and Raiffa [8], it is quite likely that a decision maker's objectives will conflict with each other, in that improved achievement of one objective can only be accomplished at the expense of another.
For instance, most businesses and public services have objectives
like minimise cost and maximise the quality of services. Since
better services can often only be attained for a price, these
objectives conflict.
Due to the conflicting nature of a decision maker's objectives, M usually has to settle for a compromise solution. That is, he may have to choose an act a ∈ A that does not optimise every objective. This is the topic of multi-criteria decision making theory. Part of the
solution to this problem is that M has to try to identify the Pareto
frontier in the consequence space {(X1(a), . . . , Xk(a))}a∈A.
DEFINITION 1. (Dominance)
Let x = (x1, . . . , xk) and x' = (x'1, . . . , x'k) be two consequences. x dominates x' iff xi ≥ x'i for all i, and the inequality is strict for at least one i.
The Pareto frontier in a consequence space then consists of all
consequences that are not dominated by any other consequence.
This is illustrated in Fig. 1 in which an alternative consists of two
attributes d1 and d2 and the decision maker tries to maximise the
two objectives X1 and X2. A decision a ∈ A whose consequence
does not lie on the Pareto frontier is inefficient.
[Figure 1: The Pareto frontier. The alternative space A (attributes d1, d2) is mapped by the objectives X1 and X2 into the consequence space; the Pareto frontier, the indifference curves (dashed), and the optimal consequence are illustrated.]
While the Pareto frontier allows M to avoid taking inefficient decisions, M still has to decide which of the efficient consequences on the Pareto frontier is most preferred by him.
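To make Definition 1 concrete, the sketch below filters a finite set of consequence vectors down to its Pareto frontier. It is only an illustrative example: the consequence values and the helper names are ours, not taken from the paper.

```python
from typing import List, Tuple

def dominates(x: Tuple[float, ...], y: Tuple[float, ...]) -> bool:
    """x dominates y iff x is at least as good on every objective
    and strictly better on at least one (Definition 1)."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def pareto_frontier(consequences: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    """Keep the consequences that are not dominated by any other consequence."""
    return [c for c in consequences
            if not any(dominates(other, c) for other in consequences if other != c)]

# Hypothetical consequence vectors (X1(a), X2(a)) for five alternatives.
points = [(1.0, 5.0), (2.0, 4.0), (2.0, 2.0), (3.0, 1.0), (1.5, 4.5)]
print(pareto_frontier(points))   # (2.0, 2.0) is dominated by (2.0, 4.0) and is dropped
```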
MCDM theorists introduce a mechanism to allow the objective
components of consequences to be normalised to the payoff
valuations for the objectives. Consequences can then be ordered: if the gains in satisfaction brought about by C1 (in comparison to C2) equal the losses in satisfaction brought about by C1 (in comparison to C2), then the two consequences C1 and C2 are considered
indifferent. M can now construct the set of indifference curves (Footnote 1) in the consequence space (the dashed curves in Fig. 1). The most
preferred indifference curve that intersects with the Pareto frontier is
in focus: its intersection with the Pareto frontier is the sought after
consequence (i.e., the optimal consequence in Fig. 1).
2.2 A negotiation framework
A multi-agent negotiation framework consists of:
1. A set of two negotiating agents N = {1, 2}.
2. A set of attributes Att = {α1, . . . , αm} characterising the
issues the agents are negotiating over. Each attribute α can take a value from the set Valα;
3. A set of alternative outcomes O. An outcome o ∈ O is
represented by an assignment of values to the corresponding attributes
in Att.
4. Agents" utility: Based on the theory of multiple-criteria decision
making [8], we define the agents" utility as follows:
• Objectives: Agent i has a set of ni objectives, or interests, denoted by j (j = 1, . . . , ni). To measure how much an outcome o fulfills an objective j for an agent i, we use objective functions: for each agent i, we define i's interests using the objective vector function fi = [fij] : O → R^ni.
• Value functions: Instead of directly evaluating an outcome o, agent i looks at how much his objectives are fulfilled and will make a valuation based on these more basic criteria. Thus, for each agent i, there is a value function σi : R^ni → R.
In particular, Raiffa [17] shows how to systematically
construct an additive value function to each party involved in a
negotiation.
• Utility: Now, given an outcome o ∈ O, an agent i is able
to determine its value, i.e., σi(fi(o)). However, a
negotiation infrastructure is usually required to facilitate negotiation.
This might involve other mechanisms and factors/parties, e.g.,
a mediator, a legal institution, participation fees, etc. The
standard way to implement such a thing is to allow money and side-payments. In this paper, we ignore those side-effects and assume that agent i's utility function ui is normalised so that ui : O → [0, 1].
[Footnote 1: In fact, given the k-dimensional space, these should be called indifference surfaces; however, we will not get bogged down in that level of detail.]
EXAMPLE 1. There are two agents, A and B. Agent A has
a task T that needs to be done and also 100 units of a resource
R. Agent B has the capacity to perform task T and would like to
obtain at least 10 and at most 20 units of the resource R. Agent B is
indifferent on any amount between 10 and 20 units of the resource
R. The objective functions for both agents A and B are cost and
revenue. And they both aim at minimising costs while maximising
revenues. Having T done generates for A a revenue rA,T while
doing T incurs a cost cB,T to B. Agent B obtains a revenue rB,R
for each unit of the resource R while providing each unit of the
resource R costs agent A cA,R.
Assuming that money transfer between agents is possible, the set
Att then contains three attributes:
• T, taking values from the set {0, 1}, indicates whether the
task T is assigned to agent B;
• R, taking values from the set of non-negative integer,
indicates the amount of resource R being allocated to agent B;
and
• MT, taking values from R, indicates the payment p to be
transferred from A to B.
Consider the outcome o = [T = 1, R = k, MT = p], i.e., the task T is assigned to B, A allocates k units of the resource R to B, and A transfers p dollars to B. Then costA(o) = k·cA,R + p and revA(o) = rA,T; and costB(o) = cB,T and
revB(o) = k·rB,R + p if 10 ≤ k ≤ 20, and revB(o) = p otherwise.
And, σi(costi(o), revi(o)) = revi(o) − costi(o), (i = A, B).
3. PROBLEM FORMALISATION
Consider Example 1 and assume that rA,T = $150, cB,T = $100, rB,R = $10 and cA,R = $7. That is, the revenue generated for A exceeds the cost incurred by B to do task T, and B
values resource R more highly than the cost for A to provide it.
The optimal solution to this problem scenario is to assign task T to
agent B and to allocate 20 units of resource R (i.e., the maximal
amount of resource R required by agent B) from agent A to agent
B. This outcome regarding the resource and task allocation
problems leaves payoffs of $10 to agent A and $100 to agent B (see Footnote 2). Any
other outcome would leave at least one of the agents worse off. In
other words, the presented outcome is Pareto-efficient and should
be part of the solution outcome for this problem scenario.
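As a quick sanity check on these numbers, the sketch below evaluates both agents' payoffs for candidate outcomes of Example 1. The function and variable names are ours; the parameter values are the ones assumed in the text.

```python
# Parameters assumed in the text: r_AT = 150, c_BT = 100, r_BR = 10, c_AR = 7.
R_AT, C_BT, R_BR, C_AR = 150.0, 100.0, 10.0, 7.0

def payoffs(assign_task: bool, k: int, p: float):
    """Return (payoff_A, payoff_B) for the outcome [T=assign_task, R=k, MT=p]."""
    rev_a = R_AT if assign_task else 0.0
    cost_a = k * C_AR + p
    rev_b = (k * R_BR + p) if 10 <= k <= 20 else p
    cost_b = C_BT if assign_task else 0.0
    return rev_a - cost_a, rev_b - cost_b

print(payoffs(True, 20, 0))   # (10.0, 100.0): the Pareto-efficient allocation in the text
print(payoffs(True, 10, 0))   # (80.0, 0.0): allocating only 10 units leaves B with nothing
```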
However, as the agents still have to bargain over the amount of
money transfer p, neither agent would be willing to disclose their
respective costs and revenues regarding the task T and the resource
R. As a consequence, agents often do not achieve the optimal
outcome presented above in practice. To address this issue, we
introduce a mediator to help the agents discover better agreements than
the ones they might try to settle on. Note that this problem is
essentially the problem of searching for joint gains in a multilateral
negotiation in which the involved parties hold strategic information,
i.e., the integrative part in a negotiation. In order to help facilitate
this process, we introduce the role of a neutral mediator. [Footnote 2: Certainly, without a money transfer to compensate agent A, this outcome is not a fair one.] Before formalising the decision problems faced by the mediator and the negotiating agents, we discuss the properties of the solution
outcomes to be achieved by the mediator. In a negotiation setting, the
two typical design goals would be:
• Efficiency: Prevent the agents from settling on an outcome that is not Pareto-optimal; and
• Fairness: Avoid agreements that give most of the gains to a subset of agents while leaving the rest with too little.
The above goals are axiomatised in Nash's seminal work [16] on cooperative negotiation games. Essentially, Nash advocates the following properties to be satisfied by a solution to the bilateral negotiation problem: (i) it produces only Pareto-optimal outcomes; (ii) it is invariant to affine transformations (of the consequence space); (iii) it is symmetric; and (iv) it is independent of irrelevant alternatives. A solution satisfying Nash's axioms is called a Nash bargaining solution.
It then turns out that, by taking the negotiators' utilities as its objectives, the mediator itself faces a multi-criteria decision making problem. The issues faced by the mediator are: (i) the mediator requires access to the negotiators' utility functions, and (ii) making (fair) tradeoffs between different agents' utilities. Our methods allow the agents to repeatedly interact with the mediator so that a Nash solution outcome can be found by the parties.
Informally, the problem faced by both the mediator and the negotiators is the construction of the indifference curves. Why are the indifference curves so important?
• To the negotiators, knowing the options available along
indifference curves opens up opportunities to reach more
efficient outcomes. For instance, consider an agent A who is
presenting his opponent with an offer θA which she refuses
to accept. Rather than having to concede, A could look at
his indifference curve going through θA and choose another proposal θ'A. To him, θA and θ'A are indifferent, but θ'A could give some gains to B and thus will be more acceptable to B. In other words, the outcome θ'A is more efficient than θA to these two negotiators.
• To the mediator, constructing indifference curves requires a
measure of fairness between the negotiators. The mediator
needs to determine how much utility it needs to take away
from the other negotiators to give a particular negotiator a
specific gain G (in utility).
In order to search for integrative solutions within the outcome
space O, we characterise the relationship between the agents over
the set of attributes Att. As the agents hold different objectives and
have different capacities, it may be the case that changing between
two values of a specific attribute implies different shifts in utility
of the agents. However, the problem of finding the exact Pareto-optimal set (Footnote 3) is NP-hard [2].
Our approach is thus to solve this optimisation problem in two steps. In the first step, the more manageable attributes are dealt with: these are the attributes that take a finite set of values. The
result of this step would be a subset of outcomes that contains the
Pareto-optimal set. In the second step, we employ an iterative
procedure that allows the mediator to interact with the negotiators to
find joint improvements that move towards a Pareto-optimal
outcome. This approach will not work unless the attributes from Att
are independent. [Footnote 3: The Pareto-optimal set is the set of outcomes whose consequences (in the consequence space) correspond to the Pareto frontier.]
Most works on multi-attribute, or multi-issue, negotiation (e.g. [17]) assume that the attributes or the issues are independent, resulting in an additive value function for each agent. [Footnote 4: Klein et al. [10] explore several implications of complex contracts in which attributes are possibly inter-dependent.]
ASSUMPTION 1. Let i ∈ N and S ⊆ Att, and denote by ¯S the set Att \ S. Assume that vS and v'S are two assignments of values to the attributes of S, and that v1 and v2 are two arbitrary value assignments to the attributes of ¯S. Then
(ui([vS, v1]) − ui([vS, v2])) = (ui([v'S, v1]) − ui([v'S, v2])).
That is, the utility function of agent i is defined on the attributes from S independently of any value assignment to the other attributes.
4. MEDIATOR-BASED BILATERAL
NEGOTIATIONS
As discussed by Lax and Sebenius [13], under incomplete
information the tension between creating and claiming values is the
primary cause of inefficient outcomes. This can be seen most
easily in negotiations involving two negotiators; during the
distributive phase of the negotiation, the two negotiators' objectives are directly opposed to each other. We will now formally characterise
this relationship between negotiators by defining the opposition
between two negotiating parties. The following exposition will be
mainly reproduced from [9].
Assume for the moment that all attributes from Att take values from the set of real numbers R, i.e., Valj ⊆ R for all j ∈ Att. We further assume that the set O = ×j∈Att Valj of feasible outcomes is defined by constraints that all parties must obey and that O is convex. Now, an outcome o ∈ O is just a point in the m-dimensional space of real numbers. Then the questions are: (i) from the point of view of an agent i, is o already the best outcome for i? (ii) if o is not the best outcome for i, then is there another outcome o' such that o' gives i a better utility than o and o' does not cause a utility loss to the other agent j in comparison to o?
The above questions can be answered by looking at the directions
of improvement of the negotiating parties at o, i.e., the directions
in the outcome space O into which their utilities increase at point
o. Under the assumption that the parties' utility functions ui are differentiable and concave, the set of all directions of improvement for a party at a point o can be defined in terms of his most preferred, or gradient, direction at that point. When the gradient direction ∇ui(o) of agent i at point o directly opposes the gradient direction ∇uj(o) of agent j at point o, the two parties strongly disagree at o and no joint improvements can be achieved for i and j in the locality surrounding o.
Since opposition between the two parties can vary considerably
over the outcome space (with one pair of outcomes considered
highly antagonistic and another pair being highly cooperative), we
need to describe the local properties of the relationship. We begin
with the opposition at any point of the outcome space R^m. The
following definition is reproduced from [9]:
DEFINITION 2.
1. The parties are in local strict opposition at a point x ∈ R^m iff for all points x' ∈ R^m that are sufficiently close to x (i.e., for some ε > 0 such that ||x' − x|| < ε), an increase of one utility can be achieved only at the expense of a decrease of the other utility.
2. The parties are in local non-strict opposition at a point x ∈ R^m iff they are not in local strict opposition at x, i.e., iff it is possible for both parties to raise their utilities by moving an infinitesimal distance from x.
3. The parties are in local weak opposition at a point x ∈ R^m iff ∇u1(x)·∇u2(x) ≥ 0, i.e., iff the gradients at x of the two utility functions form an acute or right angle.
4. The parties are in local strong opposition at a point x ∈ R^m iff ∇u1(x)·∇u2(x) < 0, i.e., iff the gradients at x form an obtuse angle.
5. The parties are in global strict (nonstrict, weak, strong) opposition iff for every x ∈ R^m they are in local strict (nonstrict, weak, strong) opposition.
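The sketch below classifies the local opposition between two parties at a point from their gradients, following items 3 and 4 of Definition 2 (and the angle θ(x) = arccos(...) used later in Section 4.2.2). The utility gradients in the example are hypothetical and only serve to illustrate the computation.

```python
import numpy as np

def local_opposition(grad1: np.ndarray, grad2: np.ndarray) -> str:
    """Classify the local opposition at a point x from the two gradients there."""
    dot = float(np.dot(grad1, grad2))
    theta = np.arccos(dot / (np.linalg.norm(grad1) * np.linalg.norm(grad2)))
    if np.isclose(theta, np.pi):
        return "local strict opposition (gradients point in opposite directions)"
    return "local weak opposition" if dot >= 0 else "local strong opposition"

# Hypothetical concave utilities over two continuous attributes; only their gradients matter here.
u1_grad = lambda x: np.array([1.0 - x[0], 0.5])     # gradient of u1(x) = x0 - x0**2/2 + 0.5*x1
u2_grad = lambda x: np.array([-0.5, 1.0 - x[1]])    # gradient of u2(x) = -0.5*x0 + x1 - x1**2/2
x = np.array([0.2, 0.3])
print(local_opposition(u1_grad(x), u2_grad(x)))     # "local strong opposition" for this x
```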
Global strict and nonstrict oppositions are complementary cases.
Essentially, under global strict opposition the whole outcome space
O becomes the Pareto-optimal set as at no point in O can the
negotiating parties make a joint improvement, i.e., every point in O
is a Pareto-efficient outcome. In other words, under global strict
opposition the outcome space O can be flattened out into a single
line such that for each pair of outcomes x, y ∈ O, u1(x) < u1(y)
iff u2(x) > u2(y), i.e., at every point in O, the gradients of the two utility functions point to the two different ends of the line.
Intuitively, global strict opposition implies that there is no way to
obtain joint improvements for both agents. As a consequence, the
negotiation degenerates to a distributive negotiation, i.e., the negotiating parties should try to claim as large a share of the negotiation issues as possible, while the mediator should aim for the
fairness of the division. On the other hand, global nonstrict opposition
allows room for joint improvements and all parties might be better
off trying to realise the potential gains by reaching Pareto-efficient
agreements. Weak and strong oppositions indicate different levels
of opposition. The weaker the opposition, the more potential gains
can be realised making cooperation the better strategy to employ
during negotiation. On the other hand, stronger opposition
suggests that the negotiating parties tend to behave strategically
leading to misrepresentation of their respective objectives and utility
functions and making joint gains more difficult to realise.
We have been temporarily making the assumption that the outcome space O is a subset of R^m. In many real-world negotiations, this assumption would be too restrictive. We will continue
our exposition by lifting this restriction and allowing discrete
attributes. However, as most negotiations involve only discrete
issues with a bounded number of options, we will assume that each
attribute takes values either from a finite set or from the set of real
numbers R. In the rest of the paper, we will refer to attributes whose
values are from finite sets as simple attributes and attributes whose
values are from R as continuous attributes. The notions of local
oppositions, i.e., strict, nonstrict, weak and strong, are not
applicable to outcome spaces that contain simple attributes and nor are the
notions of global weak and strong oppositions. However, the
notions of global strict and nonstrict oppositions can be generalised
for outcome spaces that contain simple attributes.
DEFINITION 3. Given an outcome space O, the parties are in
global strict opposition iff ∀x, y ∈ O, u1(x) < u1(y) iff u2(x) >
u2(y).
The parties are in global nonstrict opposition if they are not in
global strict opposition.
4.1 Optimisation on simple attributes
In order to extract the optimal values for a subset of attributes,
in the first step of this optimisation process the mediator requests
the negotiators to submit their respective utility functions over the
set of simple attributes. Let Simp ⊆ Att denote the set of all
simple attributes from Att. Note that, due to Assumption 1, agent i's
utility function can be characterised as follows:
ui([vSimp, v¯Simp]) = w^i_1 · ui,1([vSimp]) + w^i_2 · ui,2([v¯Simp]),
where ¯Simp = Att \ Simp, ui,1 and ui,2 are the utility components of ui over the sets of attributes Simp and ¯Simp, respectively, and 0 < w^i_1, w^i_2 < 1 with w^i_1 + w^i_2 = 1.
As attributes are independent of each other with respect to the agents' utility functions, the optimisation problem over the attributes from Simp can be carried out by fixing ui,2([v¯Simp]) to a constant C and then searching for the optimal values within the set of attributes Simp. Now, how does the mediator determine the optimal values
for the attributes in Simp? Several well-known optimisation
strategies could be applicable here:
• The utilitarian solution: the sum of the agents' utilities is maximised. Thus, the optimal values are the solution of the following optimisation problem:
arg max_{v∈ValSimp} Σ_{i∈N} ui(v)
• The Nash solution: the product of the agents' utilities is maximised. Thus, the optimal values are the solution of the following optimisation problem:
arg max_{v∈ValSimp} Π_{i∈N} ui(v)
• The egalitarian solution (aka the maximin solution): the utility of the agent with minimum utility is maximised. Thus, the optimal values are the solution of the following optimisation problem:
arg max_{v∈ValSimp} min_{i∈N} ui(v)
(A small computational sketch of these three solutions is given below.)
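As a minimal sketch (with made-up utility tables and attribute names, not taken from the paper), the following enumerates all joint values of two simple attributes and picks the utilitarian, Nash, and egalitarian solutions.

```python
from itertools import product

# Hypothetical additive utilities of two agents over two simple attributes.
utils = {
    1: {"colour": {"red": 0.9, "blue": 0.4}, "delivery": {"fast": 1.0, "slow": 0.2}},
    2: {"colour": {"red": 0.3, "blue": 0.8}, "delivery": {"fast": 0.6, "slow": 0.9}},
}
weights = {1: {"colour": 0.5, "delivery": 0.5}, 2: {"colour": 0.4, "delivery": 0.6}}

def u(i, assignment):
    """Additive utility of agent i for a joint assignment (equation (1) style)."""
    return sum(weights[i][a] * utils[i][a][v] for a, v in assignment.items())

domains = {"colour": ["red", "blue"], "delivery": ["fast", "slow"]}
outcomes = [dict(zip(domains, vs)) for vs in product(*domains.values())]

utilitarian = max(outcomes, key=lambda o: u(1, o) + u(2, o))
nash        = max(outcomes, key=lambda o: u(1, o) * u(2, o))
egalitarian = max(outcomes, key=lambda o: min(u(1, o), u(2, o)))
print(utilitarian, nash, egalitarian)
```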
The question now is of course whether a negotiator has an incentive to misrepresent his utility function. First of all, recall that the agents' utility functions are bounded, i.e., ∀o ∈ O, 0 ≤ ui(o) ≤ 1. Thus, the agents have no incentive to overstate their utility regarding an outcome o: if o is the most preferred outcome to an agent i, then he already assigns the maximal utility to o. On the other hand, if o is not the most preferred outcome to i, then by overstating the utility he assigns to o, agent i runs the risk of having to settle on an agreement which gives him a lower payoff than he is supposed to receive. However, agents do have an incentive to understate their utility if the final settlement is based on the above solutions alone. Essentially, the mechanism to prevent an agent from understating his utility regarding particular outcomes is to guarantee a certain measure of fairness for the final settlement. That is, the agents lose the incentive to be dishonest, since any gains from exploiting the known solution rules would be offset by the fairness maintenance mechanism. First, we state an easy lemma.
LEMMA 1. When Simp contains a single attribute, the agents have an incentive to understate their utility functions regarding outcomes that are not attractive to them.
By way of illustration, consider the set Simp containing only one
attribute that could take values from the finite set {A, B, C, D}.
Assume that negotiator 1 assigns utilities of 0.4, 0.7, 0.9, and 1
to A, B, C, and D, respectively. Assume also that negotiator 2
assigns utilities of 1, 0.9, 0.7, and 0.4 to A, B, C, and D,
respectively. If agent 1 misrepresents his utility function to the mediator
by reporting utility 0 for all values A, B and C and utility 1 for
value D, then agent 2, who reports honestly to the mediator, will obtain his worst outcome D under any of the above solutions. Note that agent 1 does not need to know agent 2's utility function, nor does he need to know the strategy employed by agent 2. As long as he knows that the mediator is going to employ one of the above three solutions, the above misrepresentation is his dominant strategy for this game.
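The following sketch reproduces this manipulation numerically; the reporting strategy and the helper names are ours.

```python
def pick(reported, rule):
    """Mediator's choice over the values {A, B, C, D} given the reported utilities."""
    values = reported[1].keys()
    score = {
        "utilitarian": lambda v: reported[1][v] + reported[2][v],
        "nash":        lambda v: reported[1][v] * reported[2][v],
        "egalitarian": lambda v: min(reported[1][v], reported[2][v]),
    }[rule]
    return max(values, key=score)

true_u1 = {"A": 0.4, "B": 0.7, "C": 0.9, "D": 1.0}
true_u2 = {"A": 1.0, "B": 0.9, "C": 0.7, "D": 0.4}
lying_u1 = {"A": 0.0, "B": 0.0, "C": 0.0, "D": 1.0}   # agent 1 understates A, B and C

for rule in ("utilitarian", "nash", "egalitarian"):
    print(rule, pick({1: lying_u1, 2: true_u2}, rule))   # prints "D" in every case
```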
However, when the set Simp contains more than one attribute and none of the attributes strongly dominates the other attributes, the above problem diminishes by itself thanks to the integrative solution. We of course have to define clearly what it means for an attribute to strongly dominate the other attributes. Intuitively, if most of an agent's utility is concentrated on one of the attributes, then this attribute strongly dominates the other attributes. We again appeal to Assumption 1 on the additivity of utility functions to achieve a measure of fairness within this negotiation setting. Due to Assumption 1, we can characterise agent i's utility component over the set of attributes Simp by the following equation:
ui,1([vSimp]) = Σ_{j∈Simp} w^i_j · ui,j([vj])      (1)
where Σ_{j∈Simp} w^i_j = 1.
Then, an attribute ℓ ∈ Simp strongly dominates the rest of the attributes in Simp (for agent i) iff w^i_ℓ > Σ_{j∈Simp\{ℓ}} w^i_j. Attribute ℓ is said to be strongly dominant (for agent i) wrt. the set of simple attributes Simp.
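A direct translation of this condition into code (the weight values are hypothetical):

```python
def strongly_dominant(weights: dict) -> list:
    """Return the attributes whose weight exceeds the sum of all the other weights."""
    total = sum(weights.values())
    return [a for a, w in weights.items() if w > total - w]

print(strongly_dominant({"price": 0.7, "colour": 0.2, "delivery": 0.1}))  # ['price']
print(strongly_dominant({"price": 0.4, "colour": 0.3, "delivery": 0.3}))  # []
```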
The following theorem shows that if the set of attributes Simp
does not contain a strongly dominant attribute then the negotiators
have no incentive to be dishonest.
THEOREM 1. Given a negotiation framework, if for every agent the set of simple attributes does not contain a strongly dominant attribute, then truth-telling is an equilibrium strategy for the negotiators during the optimisation of simple attributes.
So far, we have been concentrating on the efficiency issue while
leaving the fairness issue aside. A fair framework not only supports a more satisfactory distribution of utility among the agents, but is also often a good measure to prevent misrepresentation of private information by the agents. Of the three solutions presented above, the utilitarian solution does not support fairness. On the other hand, Nash [16] proves that the Nash solution satisfies the above four axioms for cooperative bargaining games and is considered a fair solution. The egalitarian solution is another mechanism to achieve fairness, essentially by helping the worst off. The problem with these solutions, as discussed earlier, is that they are vulnerable to strategic behaviour when one of the attributes strongly dominates the rest of the attributes.
However, there is yet another solution that aims to guarantee fairness: the minimax solution. That is, the utility of the agent with maximum utility is minimised. It is obvious that the minimax solution produces inefficient outcomes. However, to get around this problem (given that the Pareto-optimal set can be tractably computed), we can apply this solution over the Pareto-optimal set only. Let POSet ⊆ ValSimp be the Pareto-optimal subset of the simple outcomes; the minimax solution is then defined to be the solution of the following optimisation problem:
arg min_{v∈POSet} max_{i∈N} ui(v)
While overall efficiency often suffers under a minimax solution, i.e., the sum of all agents' utilities is often lower than under other solutions, it can be shown that the minimax solution is less vulnerable to manipulation.
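A sketch of this two-stage rule (restrict to the Pareto-optimal simple outcomes, then minimise the maximum utility), using the same dominance test as in the earlier sketch; the utility vectors are again made up.

```python
def dominates(x, y):
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def minimax_over_pareto(outcomes):
    """outcomes: mapping from a simple-attribute assignment to its utility pair (u1, u2)."""
    pareto = {o: u for o, u in outcomes.items()
              if not any(dominates(u2, u) for o2, u2 in outcomes.items() if o2 != o)}
    return min(pareto, key=lambda o: max(pareto[o]))

outcomes = {"A": (0.4, 1.0), "B": (0.7, 0.9), "C": (0.9, 0.7), "D": (1.0, 0.4)}
print(minimax_over_pareto(outcomes))   # "B" (ties with "C": both have a maximum utility of 0.9)
```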
THEOREM 2. Given a negotiation framework, under the minimax solution, if the negotiators are uncertain about their opponents' preferences then truth-telling is an equilibrium strategy for the negotiators during the optimisation of simple attributes.
That is, even when there is only a single simple attribute, if an agent is uncertain whether the other agent's most preferred resolution is also his own most preferred resolution, then he should opt for truth-telling as the optimal strategy.
4.2 Optimisation on continuous attributes
When the attributes take values from infinite sets, we assume that
they are continuous. This is similar to the common practice in
operations research in which linear programming solutions/techniques
are applied to integer programming problems.
We denote the number of continuous attributes by k, i.e., Att = Simp ∪ ¯Simp and |¯Simp| = k. Then, the outcome space O can be represented as follows: O = (Π_{j∈Simp} Valj) × (Π_{l∈¯Simp} Vall), where Π_{l∈¯Simp} Vall ⊆ R^k is the continuous component of O. Let O^c denote the set Π_{l∈¯Simp} Vall. We will refer to O^c as the feasible set and assume that O^c is closed and convex. After carrying out the optimisation over the set of simple attributes, we are able to assign the optimal values to the simple attributes from Simp. Thus, we reduce the original problem to the problem of searching for optimal (and fair) outcomes within the feasible set O^c. Recall that, by Assumption 1, we can characterise agent i's utility function as follows:
ui([v*Simp, v¯Simp]) = C + w^i_2 · ui,2([v¯Simp]),
where C is the constant w^i_1 · ui,1([v*Simp]) and v*Simp denotes the optimal values of the simple attributes in Simp. Hence, without loss of generality (albeit with a blatant abuse of notation), we can take agent i's utility function to be ui : R^k → [0, 1]. Accordingly, we will also take the set of outcomes under consideration by the agents to be the feasible set O^c. We now state another assumption to be used in this section:
ASSUMPTION 2. The negotiators' utility functions can be described by continuously differentiable and concave functions ui : R^k → [0, 1], (i = 1, 2).
It should be emphasised that we do not assume that agents explicitly know their utility functions. For the method described below to work, we only assume that the agents know the relevant information, e.g., at any given point within the feasible set O^c, the gradient direction of their own utility function and some section of their respective indifference curves. Assuming that a tentative agreement (which is a point x ∈ R^k) is currently on the table, the process by which the agents jointly improve this agreement in order to reach a Pareto-optimal agreement can be described as follows. The mediator asks the negotiators to discreetly submit their respective gradient directions at x, i.e., ∇u1(x) and ∇u2(x).
Note that the goal of the process to be described here is to search for agreements that are more efficient than the tentative agreement currently on the table. That is, we are searching for points x' within the feasible set O^c such that moving to x' from the current tentative agreement x brings more gains to at least one of the agents while not hurting any of the agents. Due to the assumption made above, i.e., that the feasible set O^c is bounded, the conditions for an alternative x ∈ O^c to be efficient vary depending on the position of x. The following results are proved in [9]:
Let B(x) = 0 denote the equation of the boundary of O^c, defining x ∈ O^c iff B(x) ≥ 0. An alternative x* ∈ O^c is efficient iff either
A. x* is in the interior of O^c and the parties are in local strict opposition at x*, i.e.,
∇u1(x*) = −γ∇u2(x*)      (2)
where γ > 0; or
B. x* is on the boundary of O^c, and for some α, β ≥ 0:
α∇u1(x*) + β∇u2(x*) = ∇B(x*)      (3)
We are now interested in answering the following questions:
(i) What is the initial tentative agreement x0?
(ii) How to find the more efficient agreement xh+1, given the
current tentative agreement xh?
4.2.1 Determining a fair initial tentative agreement
It should be emphasised that the choice of the initial tentative agreement affects the fairness of the final agreement to be reached by the presented method. For instance, if the initial tentative agreement x0 is chosen to be the most preferred alternative of one of the agents, then it is also a Pareto-optimal outcome, making it impossible to find any joint improvement from x0. If x0 is then chosen to be the final settlement and turns out to be the worst alternative for the other agent, this outcome is a very unfair one. Thus, it is important that the choice of the initial tentative agreement be made sensibly.
Ehtamo et al [3] present several methods to choose the initial
tentative agreement (called reference point in their paper). However,
their goal is to approximate the Pareto-optimal set by
systematically choosing a set of reference points. Once an (approximate)
Pareto-optimal set is generated, it is left to the negotiators to decide which of the generated Pareto-optimal outcomes is to be chosen as the final settlement. That is, distributive negotiation will then be
required to settle the issue.
We, on the other hand, are interested in a fair initial tentative agreement which is not necessarily efficient. Improving a given tentative agreement to yield a Pareto-optimal agreement is considered in the next section. For each continuous attribute j ∈ ¯Simp, an agent i will be asked to discreetly submit three values (from the set Valj): the most preferred value, denoted by pvi,j, the least preferred value, denoted by wvi,j, and a value that gives i an approximately average payoff, denoted by avi,j. (Note that this is possible because the set Valj is bounded.) If pv1,j and pv2,j are sufficiently close, i.e., |pv1,j − pv2,j| < Δ for some pre-defined Δ > 0, then pv1,j and pv2,j are chosen to be the two core values, denoted by cv1 and cv2. Otherwise, between the two values pv1,j and av1,j, we eliminate the one that is closer to wv2,j; the remaining value is denoted by cv1. Similarly, we obtain cv2 from the two values pv2,j and av2,j. If cv1 = cv2 then cv1 is selected as the initial value for the attribute j as part of the initial tentative agreement. Otherwise, without loss of generality, we assume that cv1 < cv2. The mediator randomly selects p values mv1, . . . , mvp from the open interval (cv1, cv2), where p ≥ 1. The mediator then asks the agents to submit their valuations over the set of values {cv1, cv2, mv1, . . . , mvp}. The value for which the two agents' valuations are closest is selected as the initial value for the attribute j as part of the initial tentative agreement.
The above procedure guarantees that the agents do not gain by behaving strategically. By performing the above procedure on every continuous attribute j ∈ ¯Simp, we are able to identify the initial tentative agreement x0 such that x0 ∈ O^c. The next step is to compute a new tentative agreement from an existing tentative agreement so that the new one is more efficient than the existing one.
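A sketch of this selection procedure for a single continuous attribute is given below; the submitted values, Δ and p are hypothetical, and the agents' valuations are modelled as plain Python functions.

```python
import random

def initial_value(pv, wv, av, value_of, delta=1.0, p=3):
    """pv, wv, av: dicts agent -> most preferred / least preferred / average-payoff value.
    value_of[i](v) returns agent i's valuation of the candidate value v."""
    if abs(pv[1] - pv[2]) < delta:
        cv1, cv2 = pv[1], pv[2]
    else:
        # For each agent, drop whichever of (pv, av) is closer to the opponent's worst value.
        cv1 = av[1] if abs(pv[1] - wv[2]) < abs(av[1] - wv[2]) else pv[1]
        cv2 = av[2] if abs(pv[2] - wv[1]) < abs(av[2] - wv[1]) else pv[2]
    if cv1 == cv2:
        return cv1
    lo, hi = min(cv1, cv2), max(cv1, cv2)
    candidates = [cv1, cv2] + [random.uniform(lo, hi) for _ in range(p)]
    # Pick the candidate on which the two agents' valuations are closest.
    return min(candidates, key=lambda v: abs(value_of[1](v) - value_of[2](v)))

# Hypothetical data: the attribute is a price in [0, 100].
pv, wv, av = {1: 20.0, 2: 80.0}, {1: 100.0, 2: 0.0}, {1: 55.0, 2: 45.0}
value_of = {1: lambda v: 1 - v / 100.0, 2: lambda v: v / 100.0}
print(initial_value(pv, wv, av, value_of))
```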
4.2.2 Computing new tentative agreement
Our procedure is a combination of the method of jointly improving direction introduced by Ehtamo et al [4] and a method we propose below. Basically, the idea is to see how strong the opposition between the parties is. If the two parties are in (local) weak opposition at the current tentative agreement xh, i.e., their improving directions at xh are close to each other, then the compromise direction proposed by Ehtamo et al [4] is likely to point to a better agreement for both agents. However, if the two parties are in local strong opposition at the current point xh, then it is unclear whether the compromise direction would really not hurt one of the agents whilst bringing some benefit to the other.
We will first review the method proposed by Ehtamo et al [4] to compute the compromise direction for a group of negotiators at a given point x ∈ O^c. Ehtamo et al define a function T(x) that describes the mediator's choice of a compromise direction at x. For the case of two-party negotiations, the following bisecting function, denoted by T^BS, can be defined over the interior set of O^c. Note that the closed set O^c contains two disjoint subsets: O^c = O^c_0 ∪ O^c_B, where O^c_0 denotes the set of interior points of O^c and O^c_B denotes the boundary of O^c. The bisecting compromise is defined by a function T^BS : O^c_0 → R^2,
T^BS(x) = ∇u1(x)/||∇u1(x)|| + ∇u2(x)/||∇u2(x)||,   x ∈ O^c_0.      (4)
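Equation (4) is straightforward to compute once the gradients are known; a sketch, with the same hypothetical gradients as in the earlier sketch:

```python
import numpy as np

def bisecting_direction(grad1: np.ndarray, grad2: np.ndarray) -> np.ndarray:
    """Compromise direction T^BS(x): the sum of the two normalised gradients."""
    return grad1 / np.linalg.norm(grad1) + grad2 / np.linalg.norm(grad2)

g1, g2 = np.array([0.8, 0.5]), np.array([-0.5, 0.7])
d = bisecting_direction(g1, g2)
print(d, np.dot(d, g1) > 0 and np.dot(d, g2) > 0)   # here the direction improves both utilities locally
```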
Given the current tentative agreement xh (h ≥ 0), the mediator has to choose a point xh+1 along d = T(xh) so that all parties gain. Ehtamo et al then define a mechanism to generate a sequence of points and prove that, when the generated sequence is bounded and all generated points belong to the interior set O^c_0, the sequence converges to a weakly Pareto-optimal agreement [4, pp. 59-60] (Footnote 5).
As the above mechanism does not work at the boundary points of O^c, we will introduce a procedure that works everywhere in the alternative space O^c. Let x ∈ O^c and let θ(x) denote the angle between the gradients ∇u1(x) and ∇u2(x) at x. That is,
θ(x) = arccos( (∇u1(x)·∇u2(x)) / (||∇u1(x)|| · ||∇u2(x)||) ).
From Definition 2, it is obvious that the two parties are in local strict opposition (at x) iff θ(x) = π, they are in local strong opposition iff π ≥ θ(x) > π/2, and they are in local weak opposition iff π/2 ≥ θ(x) ≥ 0. Note also that the two vectors ∇u1(x) and ∇u2(x) define a hyperplane, denoted by h∇(x), in the k-dimensional space R^k. Furthermore, there are two indifference curves of agents 1 and 2 going through point x, denoted by IC1(x) and IC2(x), respectively. Let hT1(x) and hT2(x) denote the tangent hyperplanes to the indifference curves IC1(x) and IC2(x), respectively, at point x. The planes hT1(x) and hT2(x) intersect h∇(x) in the lines IS1(x) and IS2(x), respectively. Note that given a line L(x) going through the point x, there are two (unit) vectors from x along L(x) pointing in two opposite directions, denoted by L+(x) and L−(x).
We can now informally explain our solution to the problem of searching for joint gains. When it is not possible to obtain a compromise direction for joint improvements at a point x ∈ O^c, either because the compromise vector points to the space outside of the feasible set O^c or because the two parties are in local strong opposition at x, we will consider moving along the indifference curve of one party while trying to improve the utility of the other party. [Footnote 5: Let S be the set of alternatives; x* is weakly Pareto optimal if there is no x ∈ S such that ui(x) > ui(x*) for all agents i.] As
the mediator does not know the indifference curves of the parties,
he has to use the tangent hyperplanes to the indifference curves of
the parties at point x. Note that the tangent hyperplane to a curve
is a useful approximation of the curve in the immediate vicinity of
the point of tangency, x.
We now describe an iteration step to reach the next tentative agreement xh+1 from the current tentative agreement xh ∈ O^c. A vector v whose tail is xh is said to be bounded in O^c if ∃λ > 0 such that xh + λv ∈ O^c. To start, the mediator asks the negotiators for their respective gradients ∇u1(xh) and ∇u2(xh) at xh.
1. If xh is a Pareto-optimal outcome according to equation 2 or
equation 3, then the process is terminated.
2. If 1 ≥ ∇u1(xh)·∇u2(xh) > 0 and the vector T^BS(xh) is bounded in O^c, then the mediator chooses the compromise improving direction d = T^BS(xh) and applies the method described by Ehtamo et al [4] to generate the next tentative agreement xh+1.
3. Otherwise, among the four vectors IS^σ_i(xh), i = 1, 2 and σ = +/−, the mediator chooses the vector that (i) is bounded in O^c, and (ii) is closest to the gradient of the other agent, ∇uj(xh) (j ≠ i). Denote this vector by T^G(xh). That is, we will be searching for a point on the indifference curve of agent i, ICi(xh), while trying to improve the utility of agent j. Note that when xh is an interior point of O^c, the situation is symmetric for the two agents 1 and 2, and the mediator has the choice of either finding a point on IC1(xh) to improve the utility of agent 2, or finding a point on IC2(xh) to improve the utility of agent 1. To decide which choice to make, the mediator has to compute the distribution of gains throughout the whole process, to avoid giving more gains to one agent than to the other. Now, the point xh+1 to be generated lies somewhere on the intersection of ICi(xh) and the hyperplane defined by ∇ui(xh) and T^G(xh). This intersection is approximated by T^G(xh). Thus, the sought-after point xh+1 can be generated by first finding a point yh along the direction of T^G(xh) and then moving from yh in the direction of ∇ui(xh) until we intersect with ICi(xh). Mathematically, let ζ and ξ denote the vectors T^G(xh) and ∇ui(xh), respectively; xh+1 is the solution to the following optimisation problem:
max_{λ1,λ2 ∈ L} uj(xh + λ1ζ + λ2ξ)
s.t. xh + λ1ζ + λ2ξ ∈ O^c, and ui(xh + λ1ζ + λ2ξ) = ui(xh),
where L is a suitable interval of positive real numbers; e.g., L = {λ | λ > 0}, or L = {λ | a < λ ≤ b}, 0 ≤ a < b.
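A crude numerical sketch of this step, approximating the constrained maximisation by a grid search over (λ1, λ2) and relaxing the equality constraint on ui to a tolerance, is shown below. The utility functions, feasible set and step sizes are all hypothetical; a real implementation would use a proper constrained optimiser.

```python
import numpy as np

def next_agreement(x, u_i, u_j, zeta, xi, in_feasible, tol=0.01):
    """Grid-search approximation of: max u_j(x + l1*zeta + l2*xi)
    s.t. the point stays feasible and u_i stays (approximately) at its current level."""
    lams = np.linspace(0.0, 1.0, 101)
    best, best_val = x, u_j(x)
    for l1 in lams:
        for l2 in lams:
            y = x + l1 * zeta + l2 * xi
            if in_feasible(y) and abs(u_i(y) - u_i(x)) <= tol and u_j(y) > best_val:
                best, best_val = y, u_j(y)
    return best

# Hypothetical concave utilities on the unit square.
u1 = lambda x: 1 - (x[0] - 0.8) ** 2 - (x[1] - 0.2) ** 2
u2 = lambda x: 1 - (x[0] - 0.2) ** 2 - (x[1] - 0.8) ** 2
inside = lambda x: bool(np.all(x >= 0) and np.all(x <= 1))
x_h = np.array([0.5, 0.4])
grad1 = np.array([-2 * (x_h[0] - 0.8), -2 * (x_h[1] - 0.2)])        # xi: agent 1's gradient at x_h
zeta = np.array([-grad1[1], grad1[0]]) / np.linalg.norm(grad1)      # zeta: tangent to agent 1's indifference curve
print(next_agreement(x_h, u1, u2, zeta, grad1, inside))
```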
Given an initial tentative agreement x0, the method described
above allows a sequence of tentative agreements x1, x2, . . . to be
iteratively generated. The iteration stops whenever a weakly Pareto
optimal agreement is reached.
THEOREM 3. If the sequence of agreements generated by the above method is bounded, then the method converges to a point x* ∈ O^c that is weakly Pareto optimal.
5. CONCLUSION AND FUTURE WORK
In this paper we have established a framework for negotiation
that is based on MCDM theory for representing the agents' objectives and utilities. The focus of the paper is on integrative
negotiation in which agents aim to maximise joint gains, or create value.
We have introduced a mediator into the negotiation in order to
allow negotiators to disclose information about their utilities,
without providing this information to their opponents. Furthermore, the
mediator also works toward the goal of achieving fairness of the
negotiation outcome.
That is, the approach that we describe aims for both efficiency, in
the sense that it produces Pareto optimal outcomes (i.e. no aspect
can be improved for one of the parties without worsening the
outcome for another party), and also for fairness, which chooses
optimal solutions which distribute gains amongst the agents in some
appropriate manner. We have developed a two-step process for addressing the NP-hard problem of finding a solution for a set of integrative attributes which is within the Pareto-optimal set for those attributes. For simple attributes (i.e. those which have a finite set of values) we use known optimisation techniques to find a Pareto-optimal solution. In order to discourage agents from misrepresenting their utilities to gain an advantage, we look for solutions that are least vulnerable to manipulation. We have shown that as long as none of the simple attributes strongly dominates the others, truth telling is an equilibrium strategy for the negotiators during the stage of optimising simple attributes. For non-simple attributes we propose a mechanism that provides stepwise improvements to move the proposed solution in the direction of a Pareto-optimal solution.
The approach presented in this paper is similar to the ideas
behind negotiation analysis [18]. Ehtamo et al [4] present an approach to searching for joint gains in multi-party negotiations. The relation of their approach to ours is discussed in the
preceding section. Lai et al [12] provide an alternative approach to
integrative negotiation. While their approach was clearly described
for the case of two-issue negotiations, the generalisation to
negotiations with more than two issues is not entirely clear.
Zhang et al [22] discuss the use of integrative negotiation in agent organisations. They assume that agents are honest. Their main result is an experiment showing that in some situations agents' cooperativeness may not bring the most benefit to the organisation as a whole, though they give no explanation for this. Jonker et al [7] consider
an approach to multi-attribute negotiation without the use of a
mediator. Thus, their approach can be considered a complement of
ours. Their experimental results show that agents can reach
Pareto-optimal outcomes using their approach.
The details of the approach have currently been shown only for
bilateral negotiation, and while we believe they are generalisable to
multiple negotiators, this work remains to be done. There is also
future work to be done in more fully characterising the outcomes
of the determination of values for the non-simple attributes. In
order to provide a complete framework we are also working on the
distributive phase using the mediator.
Acknowledgement
The authors acknowledge financial support from an ARC Discovery Grant
(2006-2009, grant DP0663147) and DEST IAP grant (2004-2006,
grant CG040014). The authors would like to thank Lawrence
Cavedon and the RMIT Agents research group for their helpful
comments and suggestions.
6. REFERENCES
[1] F. Alemi, P. Fos, and W. Lacorte. A demonstration of
methods for studying negotiations between physicians and
health care managers. Decision Science, 21:633-641, 1990.
[2] M. Ehrgott. Multicriteria Optimization. Springer-Verlag,
Berlin, 2000.
[3] H. Ehtamo, R. P. Hamalainen, P. Heiskanen, J. Teich,
M. Verkama, and S. Zionts. Generating pareto solutions in a
two-party setting: Constraint proposal methods.
Management Science, 45(12):1697-1709, 1999.
[4] H. Ehtamo, E. Kettunen, and R. P. Hamalainen. Searching for
joint gains in multi-party negotiations. European Journal of
Operational Research, 130:54-69, 2001.
[5] P. Faratin. Automated Service Negotiation Between
Autonomous Computational Agents. PhD thesis, University
of London, 2000.
[6] A. Foroughi. Minimizing negotiation process losses with
computerized negotiation support systems. The Journal of
Applied Business Research, 14(4):15-26, 1998.
[7] C. M. Jonker, V. Robu, and J. Treur. An agent architecture
for multi-attribute negotiation using incomplete preference
information. J. Autonomous Agents and Multi-Agent
Systems, (to appear).
[8] R. L. Keeney and H. Raiffa. Decisions with Multiple
Objectives: Preferences and Value Trade-Offs. John Wiley
and Sons, Inc., New York, 1976.
[9] G. Kersten and S. Noronha. Rational agents, contract curves,
and non-efficient compromises. IEEE Systems, Man, and
Cybernetics, 28(3):326-338, 1998.
[10] M. Klein, P. Faratin, H. Sayama, and Y. Bar-Yam. Protocols
for negotiating complex contracts. IEEE Intelligent Systems,
18(6):32-38, 2003.
[11] S. Kraus, J. Wilkenfeld, and G. Zlotkin. Multiagent
negotiation under time constraints. Artificial Intelligence
Journal, 75(2):297-345, 1995.
[12] G. Lai, C. Li, and K. Sycara. Efficient multi-attribute
negotiation with incomplete information. Group Decision
and Negotiation, 15:511-528, 2006.
[13] D. Lax and J. Sebenius. The manager as negotiator: The
negotiator"s dilemma: Creating and claiming value, 2nd ed.
In S. Goldberg, F. Sander & N. Rogers, editors, Dispute
Resolution, 2nd ed., pages 49-62. Little Brown & Co., 1992.
[14] M. Lomuscio and N. Jennings. A classification scheme for
negotiation in electronic commerce. In Agent-Mediated
Electronic Commerce: A European Agentlink Perspective.
Springer-Verlag, 2001.
[15] R. Maes and A. Moukas. Agents that buy and sell.
Communications of the ACM, 42(3):81-91, 1999.
[16] J. Nash. Two-person cooperative games. Econometrica,
21(1):128-140, April 1953.
[17] H. Raiffa. The Art and Science of Negotiation. Harvard
University Press, Cambridge, USA, 1982.
[18] H. Raiffa, J. Richardson, and D. Metcalfe. Negotiation
Analysis: The Science and Art of Collaborative Decision
Making. Belknap Press, Cambridge, MA, 2002.
[19] T. Sandholm. Agents in electronic commerce: Component
technologies for automated negotiation and coalition
formation. JAAMAS, 3(1):73-96, 2000.
[20] J. Sebenius. Negotiation analysis: A characterization and
review. Management Science, 38(1):18-38, 1992.
[21] L. Weingart, E. Hyder, and M. Pietrula. Knowledge matters:
The effect of tactical descriptions on negotiation behavior
and outcome. Tech. Report, CMU, 1995.
[22] X. Zhang, V. R. Lesser, and T. Wagner. Integrative
negotiation among agents situated in organizations. IEEE
Trans. on Systems, Man, and Cybernetics, Part C,
36(1):19-30, 2006.
| concession;mediator;uncertainty;multi-criterion decision make;automate negotiation;integrative negotiation;inefficient compromise;dilemma;deadlock situation;creating value;claiming value;mcdm;incomplete information;negotiation |
train_I-56 | Unifying Distributed Constraint Algorithms in a BDI Negotiation Framework | This paper presents a novel, unified distributed constraint satisfaction framework based on automated negotiation. The Distributed Constraint Satisfaction Problem (DCSP) is one that entails several agents to search for an agreement, which is a consistent combination of actions that satisfies their mutual constraints in a shared environment. By anchoring the DCSP search on automated negotiation, we show that several well-known DCSP algorithms are actually mechanisms that can reach agreements through a common Belief-Desire-Intention (BDI) protocol, but using different strategies. A major motivation for this BDI framework is that it not only provides a conceptually clearer understanding of existing DCSP algorithms from an agent model perspective, but also opens up the opportunities to extend and develop new strategies for DCSP. To this end, a new strategy called Unsolicited Mutual Advice (UMA) is proposed. Performance evaluation shows that the UMA strategy can outperform some existing mechanisms in terms of computational cycles. | 1. INTRODUCTION
At the core of many emerging distributed applications is the
distributed constraint satisfaction problem (DCSP) - one which
involves finding a consistent combination of actions (abstracted as
domain values) to satisfy the constraints among multiple agents
in a shared environment. Important application examples include
distributed resource allocation [1] and distributed scheduling [2].
Many important algorithms, such as distributed breakout (DBO)
[3], asynchronous backtracking (ABT) [4], asynchronous partial
overlay (APO) [5] and asynchronous weak-commitment (AWC)
[4], have been developed to address the DCSP and provide the
agent solution basis for its applications. Broadly speaking, these
algorithms are based on two different approaches, either
extending from classical backtracking algorithms [6] or introducing
mediation among the agents.
While there has been no lack of efforts in this promising
research field, especially in dealing with outstanding issues such as
resource restrictions (e.g., limits on time and communication) [7]
and privacy requirements [8], there is unfortunately no
conceptually clear treatment to prise open the model-theoretic workings of
the various agent algorithms that have been developed. As a
result, for instance, a deeper intellectual understanding of why one algorithm is better than another, beyond computational issues, is not possible.
In this paper, we present a novel, unified distributed constraint
satisfaction framework based on automated negotiation [9].
Negotiation is viewed as a process of several agents searching for a
solution called an agreement. The search can be realized via a
negotiation mechanism (or algorithm) by which the agents follow
a high level protocol prescribing the rules of interactions, using
a set of strategies devised to select their own preferences at each
negotiation step.
Anchoring the DCSP search on automated negotiation, we
show in this paper that several well-known DCSP algorithms
[3] are actually mechanisms that share the same
Belief-Desire-Intention (BDI) interaction protocol to reach agreements, but use different action or value selection strategies. The proposed
framework provides not only a clearer understanding of existing
DCSP algorithms from a unified BDI agent perspective, but also
opens up the opportunities to extend and develop new strategies
for DCSP. To this end, a new strategy called Unsolicited Mutual
Advice (UMA) is proposed. Our performance evaluation shows
that UMA can outperform ABT and AWC in terms of the average
number of computational cycles for both the sparse and critical
coloring problems [6].
The rest of this paper is organized as follows. In Section 2,
we provide a formal overview of DCSP. Section 3 presents a BDI
negotiation model by which a DCSP agent reasons. Section 4
presents the existing algorithms ABT, AWC and DBO as
different strategies formalized on a common protocol. A new strategy
called Unsolicited Mutual Advice is proposed in Section 5; our
empirical results and discussion attempt to highlight the merits
of the new strategy over existing ones. Section 6 concludes the
paper and points to some future work.
2. DCSP: PROBLEM FORMALIZATION
The DCSP [4] considers the following environment.
• There are n agents with k variables x0, x1, · · · , xk−1, n ≤
k, which have values in domains D1, D2, · · · , Dk,
respectively. We define a partial function B over the product range {0, 1, . . . , (n−1)} × {0, 1, . . . , (k−1)} such that the fact that variable xj belongs to agent i is denoted by B(i, j)!. The exclamation mark '!' means 'is defined'.
• There are m constraints c0, c1, · · · cm−1 to be conjunctively
satisfied. In a similar fashion as defined for B(i, j), we use
E(l, j)!, (0 ≤ l < m, 0 ≤ j < k), to denote that xj is
relevant to the constraint cl.
The DCSP may be formally stated as follows.
Problem Statement: ∀i, j (0 ≤ i < n)(0 ≤ j < k) where
B(i, j)!, find the assignment xj = dj ∈ Dj such that ∀l (0 ≤ l <
m) where E(l, j)!, cl is satisfied.
A constraint may consist of different variables belonging to
different agents. An agent cannot change or modify the assignment values of other agents' variables. Therefore, in
cooperatively searching for a DCSP solution, the agents would need to
communicate with one another, and adjust and re-adjust their
own variable assignments in the process.
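For illustration, a minimal encoding of a DCSP instance, here a three-agent graph-colouring problem with one variable per agent and binary not-equal constraints, might look as follows (the data structures and names are ours, not taken from the paper):

```python
# Agents 0..2 each own one variable; domains are colour sets.
variables = {0: "x0", 1: "x1", 2: "x2"}            # B(i, j): agent i owns variable x_j
domains = {"x0": {"red", "blue"}, "x1": {"red", "blue"}, "x2": {"red", "blue", "green"}}
constraints = [("x0", "x1"), ("x1", "x2"), ("x0", "x2")]   # c_l: the two endpoints must differ

def satisfied(assignment):
    """True iff every constraint c_l is satisfied by the (possibly partial) assignment."""
    return all(assignment.get(a) is None or assignment.get(b) is None
               or assignment[a] != assignment[b] for a, b in constraints)

print(satisfied({"x0": "red", "x1": "blue", "x2": "green"}))   # True
print(satisfied({"x0": "red", "x1": "red", "x2": "green"}))    # False
```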
2.1 DCSP Agent Model
In general, all DCSP agents must cooperatively interact, and
essentially perform the assignment and reassignment of domain
values to variables to resolve all constraint violations. If the
agents succeed in their resolution, a solution is found.
In order to engage in cooperative behavior, a DCSP agent needs
five fundamental parameters, namely, (i) a variable [4] or a
variable set [10], (ii) domains, (iii) priority, (iv) a neighbor list and
(v) a constraint list.
Each variable assumes a range of values called a domain. A
domain value, which usually abstracts an action, is a possible
option that an agent may take. Each agent has an assigned priority.
These priority values help decide the order in which they revise
or modify their variable assignments. An agent's priority may be
fixed (static) or changing (dynamic) when searching for a
solution. If an agent has more than one variable, each variable can
be assigned a different priority, to help determine which variable
assignment the agent should modify first.
An agent which shares the same constraint with another agent
is called the latter's neighbor. Each agent needs to refer to its list
of neighbors during the search process. This list may also be kept
unchanged or updated at runtime. Similarly, each agent maintains a constraint list. The agent needs to ensure that there is no violation of the constraints in this list. Constraints can be added to or removed from an agent's constraint list at runtime.
As with an agent, a constraint can also be associated with a
priority value. Constraints with a high priority are said to be
more important than constraints with a lower priority. To
distinguish it from the priority of an agent, the priority of a constraint
is called its weight.
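The five parameters above (variable, domain, priority, neighbor list and constraint list), together with the constraint weights just mentioned, can be collected in a simple record. The encoding below is a sketch of our own; the field names carry no special semantics.

from dataclasses import dataclass, field

@dataclass
class DCSPAgent:
    agent_id: int
    variable: int                    # index of the variable this agent owns
    domain: list                     # candidate values for that variable
    priority: int                    # static, or updated at runtime
    neighbours: set = field(default_factory=set)     # agents sharing a constraint
    constraints: dict = field(default_factory=dict)  # constraint id -> weight
    value: object = None             # current assignment

agent = DCSPAgent(agent_id=0, variable=0, domain=[1, 2], priority=0,
                  neighbours={1}, constraints={0: 1})
agent.value = agent.domain[0]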
3. THE BDI NEGOTIATION MODEL
The BDI model originates with the work of M. Bratman [11].
According to [12, Ch.1], the BDI architecture is based on a
philosophical model of human practical reasoning, and draws out the
process of reasoning by which an agent decides which actions to
perform at consecutive moments when pursuing certain goals.
Grounding the scope to the DCSP framework, the common goal
of all agents is finding a combination of domain values to satisfy a
set of predefined constraints. In automated negotiation [9], such
a solution is called an agreement among the agents. Within this
scope, we found that we were able to unearth the generic behavior
of a DCSP agent and formulate it in a negotiation protocol,
prescribed using the powerful concepts of BDI. Thus, our proposed
negotiation model can be said to combine the BDI concepts with
automated negotiation in a multiagent framework, allowing us
to conceptually separate DCSP mechanisms into a common BDI
interaction protocol and the adopted strategies.
3.1 The generic protocol
Figure 1 shows the basic reasoning steps in an arbitrary round
of negotiation that constitute the new protocol. The solid line
indicates the common component or transition which always
exists regardless of the strategy used. The dotted line indicates the
component or transition which may or may not appear depending on the adopted strategy.
Figure 1: The BDI interaction protocol
Two types of messages are exchanged through this protocol,
namely, the info message and the negotiation message.
An info message perceived is a message sent by another agent.
The message will contain the current selected values and priorities
of the variables of that sending agent. The main purpose of this
message is to update the agent about the current environment.
An info message is sent out at the end of one negotiation round (also called a negotiation cycle), and received at the beginning of the next round.
A negotiation message is a message which may be sent within
a round. This message is for mediation purposes. The agent may
put different contents into this type of message as long as it is
agreed among the group. The format of the negotiation message
and when it is to be sent out are subject to the strategy. A
negotiation message can be sent out at the end of one reasoning
step and received at the beginning of the next step.
Mediation is a step of the protocol that depends on whether the
agent"s interaction with others is synchronous or asynchronous.
In synchronous mechanism, mediation is required in every
negotiation round. In an asynchronous one, mediation is needed only in
a negotiation round when the agent receives a negotiation
message. A more in-depth view of this mediation step is provided
later in this section.
The BDI protocol prescribes the skeletal structure for DCSP
negotiation. We will show in Section 4 that several well-known
DCSP mechanisms all inherit this generic model.
The details of the six main reasoning steps for the protocol
(see Figure 1) are described as follows for a DCSP agent. For a
conceptually clearer description, we assume that there is only one
variable per agent.
• Percept. In this step, the agent receives info messages
from its neighbors in the environment, and using its Percept
function, returns an image P. This image contains the
current values assigned to the variables of all agents in its
neighbor list. The image P will drive the agent's actions
in subsequent steps. The agent also updates its constraint
list C using some criteria of the adopted strategy.
• Belief. Using the image P and constraint list C, the agent
will check if there is any violated constraint. If there is
no violation, the agent will believe it is choosing a correct
option and therefore will take no action. The agent will
do nothing if it is in a local stable state, i.e., a snapshot of the variable assignments of the agent and all its neighbors by which they satisfy their shared constraints. When all
agents are in their local stable states, the whole
environment is said to be in a global stable state and an
agreeThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 525
ment is found. In case the agent finds its value in conflict
with some of its neighbors', i.e., the combination of values
assigned to the variables leads to a constraint violation,
the agent will first try to reassign its own variable using a
specific strategy. If it finds a suitable option which meets
some criteria of the adopted strategy, the agent will believe
it should change to the new option. However it does not
always happen that an agent can successfully find such an
option. If no option can be found, the agent will believe it
has no option, and therefore will request its neighbors to
reconsider their variable assignments.
To summarize, there are three types of beliefs that a DCSP
agent can form: (i) it can change its variable assignment to
improve the current situation, (ii) it cannot change its
variable assignment and some constraints violations cannot be
resolved and (iii) it need not change its variable assignment
as all the constraints are satisfied.
Once the beliefs are formed, the agent will determine its
desires, which are the options that attempt to resolve the
current constraint violations.
• Desire. If the agent takes Belief (i), it will generate a list of
its own suitable domain values as its desire set. If the agent
takes Belief (ii), it cannot ascertain its desire set, but will
generate a sublist of agents from its neighbor list, whom it
will ask to reconsider their variable assignments. How this
sublist is created depends on the strategy devised for the
agent. In this situation, the agent will use a virtual desire
set that it determines based on its adopted strategy. If the
agent takes Belief (iii), it will have no desire to revise its
domain value, and hence no intention.
• Intention. The agent will select a value from its desire
set as its intention. An intention is the best desired
option that the agent assigns to its variable. The criteria for
selecting a desire as the agent's intention depend on the
strategy used. Once the intention is formed, the agent may
either proceed to the execution step, or undergo mediation.
Again, the decision to do so is determined by some criteria
of the adopted strategy.
• Mediation. This is an important function of the agent.
This is because, if the agent executes its intention without
performing intention mediation with its neighbors, the constraint
violation between the agents may not be resolved. For example, suppose two agents have variables x1 and x2,
associated with the same domain {1, 2}, and their shared
constraint is (x1 + x2 = 3). Then if both the variables are
initialized with value 1, they will both concurrently switch
between the values 2 and 1 in the absence of mediation
between them.
There are two types of mediation: local mediation and
group mediation. In the former, the agents exchange their
intentions. When an agent receives another's intention
which conflicts with its own, the agent must mediate
between the intentions, by either changing its own intention
or informing the other agent to change its intention. In the
latter, there is an agent which acts as a group mediator.
This mediator will collect the intentions from the group (the union of the agent and its neighbors) and determine which
intention is to be executed. The result of this mediation is
passed back to the agents in the group. Following
mediation, the agent may proceed to the next reasoning step to
execute its intention or begin a new negotiation round.
• Execution. This is the last step of a negotiation round.
The agent will execute by updating its variable assignment
if the intention obtained at this step is its own. Following
execution, the agent will inform its neighbors about its new
variable assignment and updated priority. To do so, the
agent will send out an info message.
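The six reasoning steps above can be read as a template: the protocol fixes the order of the steps, while a strategy supplies their content. The sketch below is our own rendering of one negotiation round under that separation; it assumes a strategy object exposing one method per step (an interface for such an object is sketched at the end of Section 3.2), and all names are illustrative.

def negotiation_round(agent, strategy, inbox):
    # Percept: build the image P from the incoming info messages.
    P = strategy.percept(agent, inbox)
    # Belief: classify the local situation (can improve / stuck / satisfied).
    belief = strategy.belief(agent, P)
    # Desire: candidate values (or requests to neighbours) given the belief.
    desires = strategy.desire(agent, belief, P)
    # Intention: pick one desired option according to the strategy's criteria.
    intention = strategy.intention(agent, desires)
    # Mediation: always for synchronous strategies, otherwise only when a
    # negotiation message has been received in this round.
    if strategy.needs_mediation(agent, intention):
        intention = strategy.mediate(agent, intention)
    # Execution: commit the surviving intention and broadcast an info message.
    if intention is not None:
        agent.value = intention
    return strategy.outgoing_info_message(agent)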
3.2 The strategy
A strategy plays an important role in the negotiation process.
Within the protocol, it will often determine the efficiency of the
search process in terms of computational cycles and message communication costs.
Figure 2: BDI protocol with Asynchronous Backtracking strategy
The design space when devising a strategy is influenced by the
following dimensions: (i) asynchronous or synchronous, (ii)
dynamic or static priority, (iii) dynamic or static constraint weight,
(iv) number of negotiation messages to be communicated, (v) the
negotiation message format and (vi) the completeness property.
In other words, these dimensions provide technical considerations
for a strategy design.
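One way to make these design dimensions concrete is to capture the per-step hooks used by the round skeleton of Section 3.1 as an abstract interface, with the synchronous/asynchronous, priority and weight dimensions exposed as flags. This is a sketch of our own, not a structure defined by the original algorithms.

from abc import ABC, abstractmethod

class Strategy(ABC):
    # Interface assumed by the round skeleton sketched in Section 3.1.
    synchronous: bool = False        # dimension (i)
    dynamic_priority: bool = False   # dimension (ii)
    dynamic_weights: bool = False    # dimension (iii)

    @abstractmethod
    def percept(self, agent, inbox): ...
    @abstractmethod
    def belief(self, agent, P): ...
    @abstractmethod
    def desire(self, agent, belief, P): ...
    @abstractmethod
    def intention(self, agent, desires): ...

    def needs_mediation(self, agent, intention):
        return self.synchronous      # asynchronous strategies override this
    def mediate(self, agent, intention):
        return intention
    def outgoing_info_message(self, agent):
        return {"sender": agent.agent_id, "value": agent.value,
                "priority": agent.priority}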
4. DCSP ALGORITHMS: BDI PROTOCOL
+ STRATEGIES
In this section, we apply the proposed BDI negotiation model
presented in Section 3 to expose the BDI protocol and the
different strategies used for three well-known algorithms, ABT, AWC
and DBO. All these algorithms assume that there is only one
variable per agent. Under our framework, we call the strategies
applied the ABT, AWC and DBO strategies, respectively.
To describe each strategy formally, the following mathematical
notations are used:
• n is the number of agents, m is the number of constraints;
• xi denotes the variable held by agent i, (0 ≤ i < n);
• Di denotes the domain of variable xi; Fi denotes the
neighbor list of agent i; Ci denotes its constraint list;
• pi denotes the priority of agent i; and Pi = {(xj = vj, pj =
k) | agent j ∈ Fi, vj ∈ Dj is the current value assigned
to xj and the priority value k is a positive integer } is the
perception of agent i;
• wl denotes the weight of constraint l, (0 ≤ l < m);
• Si(v) is the total weight of the violated constraints in Ci
when its variable has the value v ∈ Di.
4.1 Asynchronous Backtracking
Figure 2 presents the BDI negotiation model incorporating the
Asynchronous Backtracking (ABT) strategy. As mentioned in Section 3, since ABT is an asynchronous mechanism, the mediation step is needed only in a negotiation round in which an agent receives a negotiation message.
For agent i, beginning initially with (wl = 1, (0 ≤ l < m);
pi = i, (0 ≤ i < n)) and Fi contains all the agents who share the
constraints with agent i, its BDI-driven ABT strategy is described
as follows.
Step 1 - Percept: Update Pi upon receiving the info
messages from the neighbors (in Fi). Update Ci to be the list of
constraints that only involve agents in Fi with equal or higher priority than this agent.
Step 2 - Belief: The belief function GB (Pi,Ci) will return a
value bi ∈ {0, 1, 2}, decided as follows:
• bi = 0 when agent i can find an optimal option, i.e., if (Si(vi) ≠ 0 or vi is in the bad values list) and (∃a ∈ Di)(Si(a) = 0) and a is not in a list of domain values called the bad values list. Initially this list is empty, and it will be cleared when a neighbor of higher priority changes its variable assignment.
• bi = 1 when it cannot find an optimal option, i.e., if (∀a ∈ Di)(Si(a) ≠ 0 or a is in the bad values list).
• bi = 2 when its current variable assignment is an optimal option, i.e., if Si(vi) = 0 and vi is not in the bad values list.
Step 3 - Desire: The desire function GD (bi) will return a
desire set denoted by DS, decided as follows:
• If bi = 0, then DS = {a | a ≠ vi, Si(a) = 0, and a is not in the bad values list}.
• If bi = 1, then DS = ∅, the agent also finds agent k which
is determined by {k | pk = min(pj) with agent j ∈ Fi and
pk > pi }.
• If bi = 2, then DS = ∅.
Step 4 - Intention: The intention function GI (DS) will
return an intention, decided as follows:
• If DS ≠ ∅, then select an arbitrary value (say, vi) from DS as the intention.
• If DS = ∅, then assign nil as the intention (to denote its
lack thereof).
Step 5 - Execution:
• If agent i has a domain value as its intention, the agent will
update its variable assignment with this value.
• If bi = 1, agent i will send a negotiation message to agent
k, then remove k from Fi and begin its next negotiation
round. The negotiation message will contain the list of
variable assignments of those agents in its neighbor list Fi
that have a higher priority than agent i in the current image
Pi.
Mediation: When agent i receives a negotiation message,
several sub-steps are carried out, as follows:
• If the list of agents associated with the negotiation message
contains agents which are not in Fi, it will add these agents
to Fi, and request these agents to add itself to their
neighbor lists. The request is considered as a type of negotiation
message.
• Agent i will first check if the sender agent is updated with
its current value vi. If so, the agent will add vi to its bad values list; otherwise, it will send its current value to the
sender agent.
Following this step, agent i proceeds to the next negotiation
round.
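A condensed sketch of the ABT belief/desire computation described above is given below. It assumes an agent record as in Section 2.1, a predicate check(cid, assignment) that reports whether constraint cid is satisfied, and a map weights restricted to the constraints in Ci; these helpers and their signatures are assumptions of ours.

def S(agent, value, view, check, weights):
    # Total weight of the constraints in Ci violated when the agent takes
    # `value`, given `view` = current values of its (higher-priority) neighbours.
    assignment = dict(view)
    assignment[agent.variable] = value
    return sum(w for (cid, w) in weights.items() if not check(cid, assignment))

def abt_choose(agent, view, check, weights, bad_values):
    # Return (belief, candidate values) as in Steps 2 and 3 of the ABT strategy.
    current_bad = (S(agent, agent.value, view, check, weights) != 0
                   or agent.value in bad_values)
    options = [a for a in agent.domain
               if S(agent, a, view, check, weights) == 0 and a not in bad_values]
    if current_bad and options:
        return 0, options   # b_i = 0: switch to any conflict-free value
    if not options:
        return 1, []        # b_i = 1: backtrack (send a negotiation message)
    return 2, []            # b_i = 2: keep the current value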
4.2 Asynchronous Weak Commitment Search
Figure 3 presents the BDI negotiation model incorporating the
Asynchronous Weak Commitment (AWC) strategy. The model is similar to the one incorporating the ABT strategy (see Figure 2).
This is not surprising; AWC and ABT are found to be
strategically similar, differing only in the details of some reasoning steps.
The distinguishing point of AWC is that when the agent cannot
find a suitable variable assignment, it will change its priority to
the highest among its group members ({i} ∪ Fi).
For agent i, beginning initially with (wl = 1, (0 ≤ l < m);
pi = i, (0 ≤ i < n)) and Fi contains all the agents who share
the constraints with agent i, its BDI-driven AWC strategy is
described as follows.
Step 1 - Percept: This step is identical to the Percept step
of ABT.
Figure 3: BDI protocol with Asynchronous Weak-Commitment strategy
Step 2 - Belief: The belief function GB(Pi, Ci) will return a value bi ∈ {0, 1, 2}, decided as follows:
• bi = 0 when the agent can find an optimal option, i.e., if (Si(vi) ≠ 0, or the assignment xi = vi together with the current variable assignments of the higher-priority neighbors in Fi forms a nogood [4] stored in a list called the nogood list) and ∃a ∈ Di, Si(a) = 0 (initially the nogood list is empty).
• bi = 1 when the agent cannot find any optimal option, i.e., if ∀a ∈ Di, Si(a) ≠ 0.
• bi = 2 when the current assignment is an optimal option, i.e., if Si(vi) = 0 and the current state is not a nogood in the nogood list.
Step 3 - Desire: The desire function GD (bi) will return a
desire set DS, decided as follows:
• If bi = 0, then DS = {a | a ≠ vi, Si(a) = 0, and the number of constraint violations with lower-priority agents is minimized}.
• If bi = 1, then DS = {a | a ∈ Di and the number of
violations of all relevant constraints is minimized }.
• If bi = 2, then DS = ∅.
Following, if bi = 1, agent i will find a list Ki of higher priority
neighbors, defined by Ki = {k | agent k ∈ Fi and pk > pi}.
Step 4 - Intention: This step is similar to the Intention step
of ABT. However, for this strategy, the negotiation message will
contain the variable assignments (of the current image Pi) for
all the agents in Ki. This list of assignments is considered as
a nogood. If the same negotiation message had been sent out
before, agent i will have nil intention. Otherwise, the agent will
send the message and save the nogood in the nogood list.
Step 5 - Execution:
• If agent i has a domain value as its intention, the agent will
update its variable assignment with this value.
• If bi = 1, it will send the negotiation message to its
neighbors in Ki, and set pi = max{pj} + 1, with agent j ∈ Fi.
Mediation: This step is identical to the Mediation step of
ABT, except that agent i will now add the nogood contained in
the negotiation message received to its own nogood list.
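The following is a condensed, simplified rendering of one AWC revision: keep a consistent current value; otherwise switch to a consistent value if one exists; otherwise record a nogood over the higher-priority view, escalate the agent's priority and fall back to a minimum-conflict value. The tie-breaking against lower-priority agents and the nogood test inside the belief step are deliberately omitted, and the helper names are ours.

def awc_revise(agent, higher_view, neighbour_priorities, count_violations, nogoods):
    # count_violations(value) counts the constraints violated for a candidate value;
    # higher_view maps the variables of higher-priority neighbours to their values.
    if count_violations(agent.value) == 0:
        return None                                   # b_i = 2: keep the current value
    good = [a for a in agent.domain if count_violations(a) == 0]
    if good:
        agent.value = good[0]                         # b_i = 0 (tie-breaking simplified)
        return None
    # b_i = 1: record the higher-priority view as a nogood, escalate priority,
    # and adopt a value that minimises the number of violated constraints.
    nogood = frozenset(higher_view.items())
    if nogood in nogoods:
        return None                                   # already sent: nil intention
    nogoods.add(nogood)
    agent.priority = max(neighbour_priorities.values(), default=agent.priority) + 1
    agent.value = min(agent.domain, key=count_violations)
    return nogood                                     # to be sent as a negotiation message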
4.3 Distributed Breakout
Figure 4 presents the BDI negotiation model incorporating the
Distributed Breakout (DBO) strategy. Essentially, by this
synchronous strategy, each agent will search iteratively for
improvement by reducing the total weight of the violated constraints.
The iteration will continue until no agent can improve further,
at which time, if some constraints remain violated, the weights of these constraints will be increased by 1 to help 'breakout' from a local minimum.
Figure 4: BDI protocol with Distributed Breakout strategy
For agent i, beginning initially with (wl = 1, (0 ≤ l < m),
pi = i, (0 ≤ i < n)) and Fi contains all the agents who share the
constraints with agent i, its BDI-driven DBO strategy is described
as follows.
Step 1 - Percept: Update Pi upon receiving the info
messages from the neighbors (in Fi). Update Ci to be the list of its
relevant constraints.
Step 2 - Belief: The belief function GB (Pi,Ci) will return a
value bi ∈ {0, 1, 2}, decided as follows:
• bi = 0 when agent i can find an option to reduce the number of violations of the constraints in Ci, i.e., if ∃a ∈ Di, Si(a) <
Si(vi).
• bi = 1 when it cannot find any option to improve the situation, i.e., if ∀a ∈ Di, a ≠ vi, Si(a) ≥ Si(vi).
• bi = 2 when its current assignment is an optimal option,
i.e., if Si(vi) = 0.
Step 3 - Desire: The desire function GD (bi) will return a
desire set DS, decided as follows:
• If bi = 0, then DS = {a | a ≠ vi, Si(a) < Si(vi) and (Si(vi) − Si(a)) is maximized}. (max{Si(vi) − Si(a)} will be referenced by hi^max in subsequent steps, and it defines the maximal reduction in constraint violations.)
• Otherwise, DS = ∅.
Step 4 - Intention: The intention function GI (DS) will
return an intention, decided as follows:
• If DS ≠ ∅, then select an arbitrary value (say, vi) from DS as the intention.
• If DS = ∅, then assign nil as the intention.
Following, agent i will send its intention to all its neighbors.
In return, it will receive intentions from these agents before
proceeding to the Mediation step.
Mediation: Agent i receives all the intentions from its
neighbors. If it finds that the intention received from a neighbor agent j is associated with hj^max > hi^max, the agent will automatically cancel its current intention.
Step 5 - Execution:
• If agent i did not cancel its intention, it will update its
variable assignment with the intended value.
• If all the intentions received, as well as its own, are nil intentions, the agent will increase the weight of each currently violated constraint by 1.
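A compact sketch of one DBO step for a single agent follows. weighted_violation(value) is an assumed helper returning the total weight of violated constraints under a candidate value, and the exchange of improvements with the neighbours is abstracted into an input list; both are our own simplifications.

def dbo_step(agent, weighted_violation, violated_constraints, weights,
             neighbour_improvements):
    current = weighted_violation(agent.value)
    best = min(agent.domain, key=weighted_violation)
    improvement = current - weighted_violation(best)
    if improvement > 0 and improvement >= max(neighbour_improvements, default=0):
        # no neighbour reported a strictly larger improvement, so execute
        agent.value = best
    elif improvement == 0 and max(neighbour_improvements, default=0) == 0:
        # quasi-local minimum: 'breakout' by increasing the weight of every
        # currently violated constraint by 1
        for cid in violated_constraints:
            weights[cid] += 1
    return improvement   # broadcast to the neighbours for the next round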
5. THE UMA STRATEGY
Figure 5: BDI protocol with Unsolicited Mutual Advice strategy
Figure 5 presents the BDI negotiation model incorporating the Unsolicited Mutual Advice (UMA) strategy.
Unlike when using the strategies of the previous section, a
DCSP agent using UMA will not only send out a negotiation
message when concluding its Intention step, but also when
concluding its Desire step. The negotiation message that it sends out
to conclude the Desire step constitutes an unsolicited advice for
all its neighbors. In turn, the agent will wait to receive unsolicited
advices from all its neighbors, before proceeding on to determine
its intention.
For agent i, beginning initially with (wl = 1, (0 ≤ l < m),
pi = i, (0 ≤ i < n)) and Fi contains all the agents who share
the constraints with agent i, its BDI-driven UMA strategy is
described as follows.
Step 1 - Percept: Update Pi upon receiving the info
messages from the neighbors (in Fi). Update Ci to be the list of
constraints relevant to agent i.
Step 2 - Belief: The belief function GB (Pi,Ci) will return a
value bi ∈ {0, 1, 2}, decided as follows:
• bi = 0 when agent i can find an option to reduce the number of violations of the constraints in Ci, i.e., if ∃a ∈ Di, Si(a) <
Si(vi) and the assignment xi = a and the current variable
assignments of its neighbors do not form a local state stored
in a list called bad states list (initially this list is empty).
• bi = 1 when it cannot find a value a such that a ∈ Di, Si(a) < Si(vi), and the assignment xi = a together with the current variable assignments of its neighbors does not form a local state stored in the bad states list.
• bi = 2 when its current assignment is an optimal option,
i.e., if Si(vi) = 0.
Step 3 - Desire: The desire function GD (bi) will return a
desire set DS, decided as follows:
• If bi = 0, then DS = {a | a ≠ vi, Si(a) < Si(vi), (Si(vi) − Si(a)) is maximized, and the assignment xi = a together with the current variable assignments of agent i's neighbors does not form a state in the bad states list}. In this case, DS is called a set of voluntary desires. max{Si(vi) − Si(a)} will be referenced by hi^max in subsequent steps, and it defines the maximal reduction in constraint violations; it is also referred to as an improvement.
• If bi = 1, then DS = {a | a ≠ vi, Si(a) is minimized, and the assignment xi = a together with the current variable assignments of agent i's neighbors does not form a state in the bad states list}. In this case, DS is called a set of reluctant desires.
• If bi = 2, then DS = ∅.
Following, if bi = 0, agent i will send a negotiation message
containing hi^max to all its neighbors. This message is called a
voluntary advice. If bi = 1, agent i will send a negotiation message
called change advice to the neighbors in Fi who share the violated
constraints with agent i.
Agent i receives advices from all its neighbors and stores them
in a list called A, before proceeding to the next step.
Step 4 - Intention: The intention function GI (DS, A) will
return an intention, decided as follows:
• If there is a voluntary advice from an agent j which is
associated with hj^max > hi^max, assign nil as the intention.
• If DS ≠ ∅, DS is a set of voluntary desires, and hi^max is the biggest improvement among those associated with the
voluntary advices received, select an arbitrary value (say,
vi) from DS as the intention. This intention is called a
voluntary intention.
• If DS ≠ ∅, DS is a set of reluctant desires, and agent i receives some change advices, select an arbitrary value (say, vi) from DS as the intention. This intention is called a reluctant intention.
• If DS = ∅, then assign nil as the intention.
Following, if the improvement hi^max is the biggest improvement
and equal to some improvements associated with the received
voluntary advices, agent i will send its computed intention to all
its neighbors. If agent i has a reluctant intention, it will also
send this intention to all its neighbors. In both cases, agent i
will attach the number of received change advices in the current
negotiation round with its intention. In return, agent i will receive
the intentions from its neighbors before proceeding to Mediation
step.
Mediation: If agent i does not send out its intention before this step, i.e., the agent has either a nil intention or a voluntary intention with the biggest improvement, it will proceed to the next step.
Otherwise, agent i will select the best intention among all the
intentions received, including its own (if any). The criteria to
select the best intention are listed, applied in descending order of
importance as follows.
• A voluntary intention is preferred over a reluctant intention.
• A voluntary intention (if any) with biggest improvement is
selected.
• If there is no voluntary intention, the reluctant intention
with the lowest number of constraint violations is selected.
• The intention from an agent who has received a higher
number of change advices in the current negotiation round is
selected.
• Intention from an agent with highest priority is selected.
If the selected intention is not agent i's intention, it will cancel
its intention.
Step 5 - Execution: If agent i does not cancel its intention,
it will update its variable assignment with the intended value.
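The mediation criteria listed above amount to a lexicographic comparison. The sketch below applies them in the stated order of importance, using a dictionary encoding of an intention that is entirely our own.

def uma_select_best(intentions):
    # Each intention is a dict with keys: 'voluntary' (bool), 'improvement',
    # 'violations', 'change_advices' and 'priority'.
    def key(it):
        return (
            it["voluntary"],                                   # voluntary preferred
            it["improvement"] if it["voluntary"] else 0,       # biggest improvement
            -it["violations"] if not it["voluntary"] else 0,   # fewest violations
            it["change_advices"],                              # more change advices
            it["priority"],                                    # highest priority
        )
    return max(intentions, key=key)

best = uma_select_best([
    {"voluntary": False, "improvement": 0, "violations": 1,
     "change_advices": 2, "priority": 5},
    {"voluntary": True, "improvement": 1, "violations": 0,
     "change_advices": 1, "priority": 2},
])
print(best["priority"])   # 2: the voluntary intention wins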
Termination Condition: Since each agent does not have
full information about the global state, it may not know when it
has reached a solution, i.e., when all the agents are in a global
stable state. Hence an observer is needed that will keep track
of the negotiation messages communicated in the environment.
Following a certain period of time when there is no more message
communication (and this happens when all the agents have no
more intention to update their variable assignments), the observer
will inform the agents in the environment that a solution has been
found.
Figure 6: Example problem
5.1 An Example
To illustrate how UMA works, consider a 2-color graph problem
[6] as shown in Figure 6. In this example, each agent has a color
variable representing a node. There are 10 color variables sharing
the same domain {Black, White}.
The following records the outcome of each step in every
negotiation round executed.
Round 1:
Step 1 - Percept: Each agent obtains the current color
assignments of those nodes (agents) adjacent to it, i.e., its
neighbors'.
Step 2 - Belief: Agents which have positive improvements are
agent 1 (this agent believes it should change its color to
White), agent 2 (this agent believes it should change its color to
White), agent 7 (this agent believes it should change its
color to Black) and agent 10 (this agent believes it should
change its value to Black). In this negotiation round, the
improvements achieved by these agents are 1. Agents which
do not have any improvements are agents 4, 5 and 8. Agents
3, 6 and 9 need not change as all their relevant constraints
are satisfied.
Step 3 - Desire: Agents 1, 2, 7 and 10 have the voluntary desire
(White color for agents 1, 2 and Black color for agents 7,
10). These agents will send the voluntary advices to all
their neighbors. Meanwhile, agents 4, 5 and 8 have the
reluctant desires (White color for agent 4 and Black color
for agents 5, 8). Agent 4 will send a change advice to
agent 2 as agent 2 is sharing the violated constraint with
it. Similarly, agents 5 and 8 will send change advices to
agents 7 and 10 respectively. Agents 3, 6 and 9 do not have
any desire to update their color assignments.
Step 4 - Intention: Agents 2, 7 and 10 receive the change
advices from agents 4, 5 and 8, respectively. They form their
voluntary intentions. Agents 4, 5 and 8 receive the
voluntary advices from agents 2, 7 and 10, hence they will not
have any intention. Agents 3, 6 and 9 do not have any
intention. Following, the intention from the agents will be
sent to all their neighbors.
Mediation: Agent 1 finds that the intention from agent 2 is
better than its intention. This is because, although both
agents have voluntary intentions with improvement of 1,
agent 2 has received one change advice from agent 4 while
agent 1 has not received any. Hence agent 1 cancels its
intention. Agent 2 will keep its intention.
Agents 7 and 10 keep their intentions since none of their
neighbors has an intention.
The rest of the agents do nothing in this step as they do
not have any intention.
Step 5 - Execution: Agent 2 changes its color to White. Agents
7 and 10 change their colors to Black.
The new state after round 1 is shown in Figure 7.
Figure 7: The graph after round 1
Round 2:
Step 1 - Percept: The agents obtain the current color assignments of their neighbors.
Step 2 - Belief: Agent 3 is the only agent who has a positive improvement, which is 1. It believes it should change its
color to Black. Agent 2 does not have any positive
improvement. The rest of the agents need not make any change as
all their relevant constraints are satisfied. They will have
no desire, and hence no intention.
Step 3 - Desire: Agent 3 desires to change its color to Black
voluntarily, hence it sends out a voluntary advice to its
neighbor, i.e., agent 2. Agent 2 does not have any value for
its reluctant desire set as the only option, Black color, will
bring agent 2 and its neighbors to the previous state which
is known to be a bad state. Since agent 2 is sharing the
constraint violation with agent 3, it sends a change advice
to agent 3.
Step 4 - Intention: Agent 3 will have a voluntary intention
while agent 2 will not have any intention as it receives the
voluntary advice from agent 3.
Mediation: Agent 3 will keep its intention as its only neighbor,
agent 2, does not have any intention.
Step 5 - Execution: Agent 3 changes its color to Black.
The new state after round 2 is shown in Figure 8.
Round 3: In this round, every agent finds that it has no
desire and hence no intention to revise its variable assignment.
Following, with no more negotiation message communication in
the environment, the observer will inform all the agents that a
solution has been found.
Figure 8: The solution obtained
5.2 Performance Evaluation
To facilitate credible comparisons with existing strategies, we
measured the execution time in terms of computational cycles
as defined in [4], and built a simulator that could reproduce the
published results for ABT and AWC. The definition of a
computational cycle is as follows.
• In one cycle, each agent receives all the incoming messages,
performs local computation and sends out a reply.
• A message which is sent at time t will be received at time
t + 1. The network delay is neglected.
• Each agent has its own clock. The initial clock's value is 0. Agents attach their clock value as a time-stamp to each outgoing message and use the time-stamps in incoming messages to update their own clock's value (see the sketch below).
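One plausible bookkeeping consistent with this definition is sketched below; it is our own reading of the cycle counting, not code taken from [4].

def advance_clock(own_clock, incoming_timestamps):
    # After reading all incoming messages (each stamped with the sender's
    # clock), the local computation and the reply are counted one cycle
    # later than the latest information received.
    return max([own_clock] + list(incoming_timestamps)) + 1

# e.g. an agent whose clock reads 4 and which receives messages stamped
# 3 and 6 replies with time-stamp 7
print(advance_clock(4, [3, 6]))   # 7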
Four benchmark problems [6] were considered, namely, n-queens
and node coloring for sparse, dense and critical graphs. For each
problem, a finite number of test cases were generated for
various problem sizes n. The maximum execution time was set to
10000 cycles for node coloring for critical graphs and 1000 cycles for other problems. The simulator program was terminated after this period and the algorithm was considered to fail a test case if it did not find a solution by then. In such a case, the execution time for the test was counted as 1000 cycles.
Figure 9: Relationship between execution time and problem size (cycles vs. number of queens; series: ABT, AWC, UMA)
5.2.1 Evaluation with n-queens problem
The n-queens problem is a traditional problem of constraint
satisfaction. Ten test cases were generated for each problem size n ∈ {10, 50, 100}.
Figure 9 shows the execution time for different problem sizes
when ABT, AWC and UMA were run.
5.2.2 Evaluation with graph coloring problem
The graph coloring problem can be characterized by three parameters: the number of colors k, the number of nodes/agents n, and the number of links m. Based on the ratio m/n, the
problem can be classified into three types [3]: (i) sparse (with
m/n = 2), (ii) critical (with m/n = 2.7 or 4.7) and (iii) dense
(with m/n = (n − 1)/4). For this problem, we did not include
ABT in our empirical results as its failure rate was found to be
very high. This poor performance of ABT was expected since
the graph coloring problem is more difficult than the n-queens
problem, on which ABT already did not perform well (see Figure
9).
The sparse and dense (coloring) problem types are relatively
easy while the critical type is difficult to solve. In the
experiments, we fix k = 3. 10 test cases were created using the method
described in [13] for each value of n ∈ {60, 90, 120}, for each
problem type.
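For readers wishing to set up a comparable experiment, the sketch below generates connected random graphs with a prescribed m/n ratio for a k-colouring DCSP. It is a plain random generator of our own and is not the repair-based generation method of Minton et al. [13] used for the results reported here.

import random

def random_colouring_instance(n, ratio, k=3, seed=0):
    rng = random.Random(seed)
    m = int(round(ratio * n))
    edges = set()
    # build a random spanning tree first, so the constraint graph is connected
    nodes = list(range(n))
    rng.shuffle(nodes)
    for a, b in zip(nodes, nodes[1:]):
        edges.add((min(a, b), max(a, b)))
    # then add random links until the requested number of constraints is reached
    while len(edges) < m:
        a, b = rng.sample(range(n), 2)
        edges.add((min(a, b), max(a, b)))
    domains = {v: list(range(k)) for v in range(n)}
    return domains, sorted(edges)

domains, edges = random_colouring_instance(n=60, ratio=2.0)
print(len(edges))   # 120 links, i.e. m/n = 2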
The simulation results for each type of problem are shown in
Figures 10 - 12.
Figure 10: Comparison between AWC and UMA (sparse graph coloring)
5.3 Discussion
5.3.1 Comparison with ABT and AWC
Figure 11: Comparison between AWC and UMA (critical graph coloring)
Figure 12: Comparison between AWC and UMA (dense graph coloring)
Figure 10 shows that the average performance of UMA is slightly
better than AWC for the sparse problem. UMA outperforms
AWC in solving the critical problem as shown in Figure 11. It
was observed that the latter strategy failed in some test cases.
However, as seen in Figure 12, both the strategies are very
efficient when solving the dense problem, with AWC showing slightly
better performance.
The performance of UMA, in the worst (time complexity) case,
is similar to that of all evaluated strategies. The worst case
occurs when all the possible global states of the search are reached.
Since only a few agents have the right to change their variable
assignments in a negotiation round, the number of redundant
computational cycles and info messages is reduced. As we observe
from the backtracking in ABT and AWC, the difference in the
ordering of incoming messages can result in a different number of
computational cycles to be executed by the agents.
5.3.2 Comparison with DBO
The computational performance of UMA is arguably better
than DBO for the following reasons:
• UMA can guarantee that there will be a variable
reassignment following every negotiation round whereas DBO
cannot.
• UMA introduces one more communication round trip (that
of sending a message and awaiting a reply) than DBO,
which occurs due to the need to communicate unsolicited
advices. Although this increases the communication cost
per negotiation round, we observed from our simulations
that the overall communication cost incurred by UMA is
lower due to the significantly lower number of negotiation
rounds.
• Using UMA, in the worst case, an agent will only take 2 or 3
communication round trips per negotiation round, following
which the agent or its neighbor will do a variable
assignment update. Using DBO, this number of round trips is
uncertain as each agent might have to increase the weights
of the violated constraints until an agent has a positive
improvement; this could result in an infinite loop [3].
6. CONCLUSION
Applying automated negotiation to DCSP, this paper has
proposed a protocol that prescribes the generic reasoning of a DCSP
agent in a BDI architecture. Our work shows that several
well-known DCSP algorithms, namely ABT, AWC and DBO, can be
described as mechanisms sharing the same proposed protocol, and
only differ in the strategies employed for the reasoning steps per
negotiation round as governed by the protocol. Importantly, this
means that it might furnish a unified framework for DCSP that
not only provides a clearer BDI agent-theoretic view of existing
DCSP approaches, but also opens up the opportunities to enhance
or develop new strategies. Towards the latter, we have proposed
and formulated a new strategy - the UMA strategy. Empirical
results and our discussion suggest that UMA is superior to ABT,
AWC and DBO in some specific aspects.
It was observed from our simulations that UMA possesses the
completeness property. Future work will attempt to formally
establish this property, as well as formalize other existing DCSP
algorithms as BDI negotiation mechanisms, including the recent
endeavor that employs a group mediator [5]. The idea of DCSP
agents using different strategies in the same environment will also
be investigated.
7. REFERENCES
[1] P. J. Modi, H. Jung, M. Tambe, W.-M. Shen, and
S. Kulkarni, Dynamic distributed resource allocation: A
distributed constraint satisfaction approach, in Lecture
Notes in Computer Science, 2001, p. 264.
[2] H. Schlenker and U. Geske, Simulating large railway
networks using distributed constraint satisfaction, in 2nd
IEEE International Conference on Industrial Informatics
(INDIN-04), 2004, pp. 441- 446.
[3] M. Yokoo, Distributed Constraint Satisfaction :
Foundations of Cooperation in Multi-Agent Systems.
Springer Verlag, 2000. Springer Series on Agent Technology.
[4] M. Yokoo, E. H. Durfee, T. Ishida, and K. Kuwabara, The
distributed constraint satisfaction problem : Formalization
and algorithms, IEEE Transactions on Knowledge and
Data Engineering, vol. 10, no. 5, pp. 673-685,
September/October 1998.
[5] R. Mailler and V. Lesser, Using cooperative mediation to
solve distributed constraint satisfaction problems, in
Proceedings of the Third International Joint Conference on
Autonomous Agents and Multiagent Systems
(AAMAS-04), 2004, pp. 446-453.
[6] E. Tsang, Foundation of Constraint Satisfaction.
Academic Press, 1993.
[7] R. Mailler, R. Vincent, V. Lesser, T. Middlekoop, and
J. Shen, Soft Real-Time, Cooperative Negotiation for
Distributed Resource Allocation, AAAI Fall Symposium
on Negotiation Methods for Autonomous Cooperative
Systems, November 2001.
[8] M. Yokoo, K. Suzuki, and K. Hirayama, Secure
distributed constraint satisfaction: Reaching agreement
without revealing private information, Artificial
Intelligence, vol. 161, no. 1-2, pp. 229-246, 2005.
[9] J. S. Rosenschein and G. Zlotkin, Rules of Encounter.
The MIT Press, 1994.
[10] M. Yokoo and K. Hirayama, Distributed constraint
satisfaction algorithm for complex local problems, in
Proceedings of the Third International Conference on
Multiagent Systems (ICMAS-98), 1998, pp. 372-379.
[11] M. E. Bratman, Intentions, Plans and Practical Reason.
Harvard University Press, Cambridge, M.A, 1987.
[12] G. Weiss, Ed., Multiagent System : A Modern Approach to
Distributed Artificial Intelligence. The MIT Press,
London, U.K, 1999.
[13] S. Minton, M. D. Johnson, A. B. Philips, and P. Laird,
Minimizing conflicts: A heuristic repair method for
constraint satisfaction and scheduling problems, Artificial
Intelligence, vol. 58, no. 1-3, pp. 161-205, 1992.
| algorithm;agent negotiation;constraint;resource restriction;privacy requirement;shared environment;bdi;uma;mediation;backtracking;negotiation;dcsp;belief-desireintention model;distribute constraint satisfaction problem |
train_I-57 | Rumours and Reputation: Evaluating Multi-Dimensional Trust within a Decentralised Reputation System | In this paper we develop a novel probabilistic model of computational trust that explicitly deals with correlated multi-dimensional contracts. Our starting point is to consider an agent attempting to estimate the utility of a contract, and we show that this leads to a model of computational trust whereby an agent must determine a vector of estimates that represent the probability that any dimension of the contract will be successfully fulfilled, and a covariance matrix that describes the uncertainty and correlations in these probabilities. We present a formalism based on the Dirichlet distribution that allows an agent to calculate these probabilities and correlations from their direct experience of contract outcomes, and we show that this leads to superior estimates compared to an alternative approach using multiple independent beta distributions. We then show how agents may use the sufficient statistics of this Dirichlet distribution to communicate and fuse reputation within a decentralised reputation system. Finally, we present a novel solution to the problem of rumour propagation within such systems. This solution uses the notion of private and shared information, and provides estimates consistent with a centralised reputation system, whilst maintaining the anonymity of the agents, and avoiding bias and overconfidence. | 1. INTRODUCTION
The role of computational models of trust within multi-agent
systems in particular, and open distributed systems in general, has
recently generated a great deal of research interest. In such systems,
agents must typically choose between interaction partners, and in
this context trust can be viewed to provide a means for agents to
represent and estimate the reliability with which these interaction
partners will fulfill their commitments. To date, however, much of
the work within this area has used domain specific or ad-hoc trust
metrics, and has focused on providing heuristics to evaluate and
update these metrics using direct experience and reputation reports
from other agents (see [8] for a review).
Recent work has attempted to place the notion of computational
trust within the framework of probability theory [6, 11]. This
approach allows many of the desiderata of computational trust models
to be addressed through principled means. In particular: (i) it
allows agents to update their estimates of the trustworthiness of a
supplier as they acquire direct experience, (ii) it provides a
natural framework for agents to express their uncertainty in this trustworthiness, and (iii) it allows agents to exchange, combine and filter
reputation reports received from other agents.
Whilst this approach is attractive, it is somewhat limited in that it
has so far only considered single dimensional outcomes (i.e. whether
the contract has succeeded or failed in its entirety). However, in
many real world settings the success or failure of an interaction may
be decomposed into several dimensions [7]. This presents the
challenge of combining these multiple dimensions into a single metric
over which a decision can be made. Furthermore, these dimensions
will typically also exhibit correlations. For example, a contract
within a supply chain may specify criteria for timeliness, quality
and quantity. A supplier who is suffering delays may attempt a
trade-off between these dimensions by supplying the full amount
late, or supplying as much as possible (but less than the quantity
specified within the contract) on time. Thus, correlations will
naturally arise between these dimensions, and hence, between the
probabilities that describe the successful fulfillment of each contract
dimension. To date, however, no such principled framework exists
to describe these multi-dimensional contracts, nor the correlations
between these dimensions (although some ad-hoc models do exist
- see section 2 for more details).
To rectify this shortcoming, in this paper we develop a
probabilistic model of computational trust that explicitly deals with
correlated multi-dimensional contracts. The starting point for our work
is to consider how an agent can estimate the utility that it will derive
from interacting with a supplier. Here we use standard approaches
from the literature of data fusion (since this is a well developed
field where the notion of multi-dimensional correlated estimates is
well established1) to show that this naturally leads to a trust model where the agent must estimate probabilities and correlations over multiple dimensions. (Footnote 1: In this context, the multiple dimensions typically represent the physical coordinates of a target being tracked, and correlations arise through the operation and orientation of sensors.) Building upon this, we then devise a novel
trust model that addresses the three desiderata discussed above. In
more detail, in this paper we extend the state of the art in four key
ways:
1. We devise a novel multi-dimensional probabilistic trust model
that enables an agent to estimate the expected utility of a
contract, by estimating (i) the probability that each contract
dimension will be successfully fulfilled, and (ii) the
correlations between these estimates.
2. We present an exact probabilistic model based upon the
Dirichlet distribution that allows agents to use their direct
experience of contract outcomes to calculate the probabilities and
correlations described above. We then benchmark this
solution and show that it leads to good estimates.
3. We show that agents can use the sufficient statistics of this
Dirichlet distribution in order to exchange reputation reports
with one another. The sufficient statistics represent
aggregations of their direct experience, and thus, express contract
outcomes in a compact format with no loss of information.
4. We show that, while being efficient, the aggregation of
contract outcomes can lead to double counting, and rumour
propagation, in decentralised reputation systems. Thus, we present
a novel solution based upon the idea of private and shared
information. We show that it yields estimates consistent with a
centralised reputation system, whilst maintaining the anonymity
of the agents, and avoiding overconfidence.
The remainder of this paper is organised as follows: in section 2 we
review related work. In section 3 we present our notation for a
single dimensional contract, before introducing our multi-dimensional
trust model in section 4. In sections 5 and 6 we discuss
communicating reputation, and present our solution to rumour propagation
in decentralised reputation systems. We conclude in section 7.
2. RELATED WORK
The need for a multi-dimensional trust model has been recognised
by a number of researchers. Sabater and Sierra present a model of
reputation, in which agents form contracts based on multiple
variables (such as delivery date and quality), and define impressions as
subjective evaluations of the outcome of these contracts. They
provide heuristic approaches to combining these impressions to form
a measure they call subjective reputation.
Likewise, Griffiths decomposes overall trust into a number of
different dimensions such as success, cost, timeliness and
quality [4]. In his case, each dimension is scored as a real number
that represents a comparative value with no strong semantic
meaning. He develops an heuristic rule to update these values based
on the direct experiences of the individual agent, and an heuristic
function that takes the individual trust dimensions and generates a
single scalar that is then used to select between suppliers. Whilst,
he comments that the trust values could have some associated
confidence level, heuristics for updating these levels are not presented.
Gujral et al. take a similar approach and present a trust model
over multiple domain specific dimensions [5]. They define
multidimensional goal requirements, and evaluate an expected payoff
based on a supplier"s estimated behaviour. These estimates are,
however, simple aggregations over the direct experience of several
agents, and there is no measure of the uncertainty. Nevertheless,
they show that agents who select suppliers based on these multiple
dimensions outperform those who consider just a single one.
By contrast, a number of researchers have presented more
principled computational trust models based on probability theory, albeit
limited to a single dimension. Jøsang and Ismail describe the Beta
Reputation System whereby the reputation of an agent is compiled
from the positive and negative reports from other agents who have
interacted with it [6]. The beta distribution represents a natural
choice for representing these binary outcomes, and it provides a
principled means of representing uncertainty. Moreover, they
provide a number of extensions to this initial model including an
approach to exchanging reputation reports using the sufficient
statistics of the beta distribution, methods to discount the opinions of
agents who themselves have low reputation ratings, and techniques
to deal with reputations that may change over time.
Likewise, Teacy et al. use the beta distribution to describe an
agent"s belief in the probability that another agent will
successfully fulfill its commitments [11]. They present a formalism using
a beta distribution that allows the agent to estimate this probability
based upon its direct experience, and again they use the sufficient
statistics of this distribution to communicate this estimate to other
agents. They provide a number of extensions to this initial model,
and, in particular, they consider that agents may not always
truthfully report their trust estimates. Thus, they present a principled
approach to detecting and removing inconsistent reports.
Our work builds upon these more principled approaches.
However, the starting point of our approach is to consider an agent that
is attempting to estimate the expected utility of a contract. We show
that estimating this expected utility requires that an agent must
estimate the probability with which the supplier will fulfill its contract.
In the single-dimensional case, this naturally leads to a trust model
using the beta distribution (as per Jøsang and Ismail and Teacy et
al.). However, we then go on to extend this analysis to multiple
dimensions, where we use the natural extension of the beta
distribution, namely the Dirichlet distribution, to represent the agent's
belief over multiple dimensions.
3. SINGLE-DIMENSIONAL TRUST
Before presenting our multi-dimensional trust model, we first
introduce the notation and formalism that we will use by describing the
more familiar single dimensional case. We consider an agent who
must decide whether to engage in a future contract with a supplier.
This contract will lead to some outcome, o, and we consider that
o = 1 if the contract is successfully fulfilled, and o = 0 if not2.
In order for the agent to make a rational decision, it should
consider the utility that it will derive from this contract. We assume
that in the case that the contract is successfully fulfilled, the agent
derives a utility u(o = 1), otherwise it receives no utility3. Now,
given that the agent is uncertain of the reliability with which the
supplier will fulfill the contract, it should consider the expected
utility that it will derive, E[U], and this is given by:
E[U] = p(o = 1)u(o = 1) (1)
where p(o = 1) is the probability that the supplier will successfully
fulfill the contract. However, whilst u(o = 1) is known by the
agent, p(o = 1) is not. The best the agent can do is to determine
a distribution over possible values of p(o = 1) given its direct
experience of previous contract outcomes. Given that it has been
able to do so, it can then determine an estimate of the expected
utility4 of the contract, E[E[U]], and a measure of its uncertainty
in this expected utility, Var(E[U]). This uncertainty is important
since a risk averse agent may make a decision regarding a contract,
not only on its estimate of the expected utility of the contract, but also on the probability that the expected utility will exceed some minimum amount. (Footnote 2: Note that we only consider binary contract outcomes, although extending this to partial outcomes is part of our future work. Footnote 3: Clearly this can be extended to the case where some utility is derived from an unsuccessful outcome. Footnote 4: Note that this is often called the expected expected utility, and this is the notation that we adopt here [2].) These two properties are given by:
E[E[U]] = ˆp(o = 1)u(o = 1) (2)
Var(E[U]) = Var(p(o = 1))u(o = 1)^2 (3)
where ˆp(o = 1) and Var(p(o = 1)) are the estimate and
uncertainty of the probability that a contract will be successfully
fulfilled, and are calculated from the distribution over possible values
of p(o = 1) that the agent determines from its direct experience.
The utility-based approach that we present here provides an
attractive motivation for this model of Teacy et al. [11].
Now, in the case of binary contract outcomes, the beta
distribution is the natural choice to represent the distribution over possible
values of p(o = 1) since within Bayesian statistics this is well known
to be the conjugate prior for binomial observations [3]. By adopting
the beta distribution, we can calculate ˆp(o = 1) and Var(p(o = 1))
using standard results, and thus, if an agent observed N previous
contracts of which n were successfully fulfilled, then:
ˆp(o = 1) = (n + 1) / (N + 2)
and:
Var(p(o = 1)) = (n + 1)(N − n + 1) / ((N + 2)^2 (N + 3))
Note that as expected, the greater the number of contracts the agent
observes, the smaller the variance term Var(p(o = 1)), and, thus,
the less the uncertainty regarding the probability that a contract will
be successfully fulfilled, ˆp(o = 1).
4. MULTI-DIMENSIONAL TRUST
We now extend the description above, to consider contracts
between suppliers and agents that are represented by multiple
dimensions, and hence the success or failure of a contract can be
decomposed into the success or failure of each separate
dimension. Consider again the example of the supply chain that
specifies the timeliness, quantity, and quality of the goods that are to
be delivered. Thus, within our trust model oa = 1 now
indicates a successful outcome over dimension a of the contract and
oa = 0 indicates an unsuccessful one. A contract outcome, X,
is now composed of a vector of individual contract part outcomes
(e.g. X = {oa = 1, ob = 0, oc = 0, . . .}).
Given a multi-dimensional contract whose outcome is described
by the vector X, we again consider that in order for an agent to
make a rational decision, it should consider the utility that it will
derive from this contract. To this end, we can make the general
statement that the expected utility of a contract is given by:
E[U] = p(X)U(X)^T (4)
where p(X) is a joint probability distribution over all possible
contract outcomes:
p(X) = [p(oa = 1, ob = 0, oc = 0, . . .), p(oa = 1, ob = 1, oc = 0, . . .), p(oa = 0, ob = 1, oc = 0, . . .), . . .] (5)
and U(X) is the utility derived from these possible outcomes:
U(X) = [u(oa = 1, ob = 0, oc = 0, . . .), u(oa = 1, ob = 1, oc = 0, . . .), u(oa = 0, ob = 1, oc = 0, . . .), . . .] (6)
As before, whilst U(X) is known to the agent, the probability
distribution p(X) is not. Rather, given the agent"s direct experience
of the supplier, the agent can determine a distribution over possible
values for p(X). In the single dimensional case, a beta distribution
was the natural choice over possible values of p(o = 1). In the
multi-dimensional case, where p(X) itself is a vector of
probabilities, the corresponding natural choice is the Dirichlet distribution,
since this is a conjugate prior for multinomial proportions [3].
Given this distribution, the agent is then able to calculate an
estimate of the expected utility of a contract. As before, this estimate
is itself represented by an expected value given by:
E[E[U]] = ˆp(X)U(X)^T (7)
and a variance, describing the uncertainty in this expected utility:
Var(E[U]) = U(X)Cov(p(X))U(X)^T (8)
where:
Cov(p(X)) ≡ E[(p(X) − ˆp(X))(p(X) − ˆp(X))^T] (9)
Thus, whilst the single dimensional case naturally leads to a trust
model in which the agents attempt to derive an estimate of
probability that a contract will be successfully fulfilled, ˆp(o = 1), along
with a scalar variance that describes the uncertainty in this
probability, Var(p(o = 1)), in this case, the agents must derive an
estimate of a vector of probabilities, ˆp(X), along with a covariance
matrix, Cov(p(X)), that represents the uncertainty in p(X) given
the observed contractual outcomes. At this point, it is interesting
to note that the estimate in the single dimensional case, ˆp(o = 1),
has a clear semantic meaning in relation to trust; it is the agent"s
belief in the probability of a supplier successfully fulfilling a
contract. However, in the multi-dimensional case the agent must
determine ˆp(X), and since this describes the probability of all possible
contract outcomes, including those that are completely un-fulfilled,
this direct semantic interpretation is not present. In the next
section, we describe the exemplar utility function that we shall use in
the remainder of this paper.
4.1 Exemplar Utility Function
The approach described so far is completely general, in that it
applies to any utility function of the form described above, and also
applies to the estimation of any joint probability distribution. In
the remainder of this paper, for illustrative purposes, we shall limit
the discussion to the simplest possible utility function that exhibits
a dependence upon the correlations between the contract
dimensions. That is, we consider the case that expected utility is
dependent only on the marginal probabilities of each contract dimension
being successfully fulfilled, rather than the full joint probabilities:
U(X) = \begin{pmatrix} u(o_a = 1) \\ u(o_b = 1) \\ u(o_c = 1) \\ \vdots \end{pmatrix} \qquad (10)
Thus, ˆp(X) is a vector estimate of the probability of each contract
dimension being successfully fulfilled, and maintains the clear
semantic interpretation seen in the single dimensional case:
\hat{\mathbf{p}}(X) = \begin{pmatrix} \hat{p}(o_a = 1) \\ \hat{p}(o_b = 1) \\ \hat{p}(o_c = 1) \\ \vdots \end{pmatrix} \qquad (11)
The correlations between the contract dimensions affect the uncertainty in the expected utility. To see this, note that the covariance matrix that describes this uncertainty, Cov(p(X)), is now given by:
\mathrm{Cov}(\mathbf{p}(X)) = \begin{pmatrix} V_a & C_{ab} & C_{ac} & \ldots \\ C_{ab} & V_b & C_{bc} & \ldots \\ C_{ac} & C_{bc} & V_c & \ldots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \qquad (12)
In this matrix, the diagonal terms, Va, Vb and Vc, represent the
uncertainties in p(oa = 1), p(ob = 1) and p(oc = 1) within
p(X). The off-diagonal terms, Cab, Cac and Cbc, represent the
correlations between these probabilities. In the next section, we use
the Dirichlet distribution to calculate both ˆp(X) and Cov(p(X))
from an agent's direct experience of previous contract outcomes.
We first illustrate why this is necessary by considering an
alternative approach to modelling multi-dimensional contracts whereby
an agent naïvely assumes that the dimensions are independent, and
thus, it models each individually by separate beta distributions (as
in the single dimensional case we presented in section 3). This
is actually equivalent to setting the off-diagonal terms within the
covariance matrix, Cov(p(X)), to zero. However, doing so can
lead an agent to assume that its estimate of the expected utility of
the contract is more accurate than it actually is. To illustrate this,
consider a specific scenario with the following values: u(oa =
1) = u(ob = 1) = 1 and Va = Vb = 0.2. In this case,
Var(E[U]) = 0.4(1 + Cab), and thus, if the correlation Cab is
ignored then the variance in the expected utility is 0.4. However, if
the contract outcomes are completely correlated then Cab = 1 and
Var(E[U]) is actually 0.8. Thus, in order to have an accurate
estimate of the variance of the expected contract utility, and to make
a rational decision, it is essential that the agent is able to
represent and calculate these correlation terms. In the next section, we
describe how an agent may do so using the Dirichlet distribution.
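As a quick numerical check of this argument (our own illustration, not code from the paper), the snippet below evaluates Var(E[U]) = U(X) Cov(p(X)) U(X)^T for the scenario above, with and without the off-diagonal correlation term:

```python
import numpy as np

# Scenario from the text: u(o_a=1) = u(o_b=1) = 1, V_a = V_b = 0.2.
u = np.array([1.0, 1.0])
Va = Vb = 0.2

for rho in [0.0, 0.5, 1.0]:          # correlation between the two dimensions
    Cab = rho * np.sqrt(Va * Vb)     # corresponding covariance term
    cov_full = np.array([[Va, Cab], [Cab, Vb]])
    cov_diag = np.diag([Va, Vb])     # naive model: off-diagonals set to zero

    var_full = u @ cov_full @ u      # Var(E[U]) = U(X) Cov(p(X)) U(X)^T
    var_diag = u @ cov_diag @ u
    print(f"rho={rho:.1f}: with correlations {var_full:.2f}, ignored {var_diag:.2f}")
# rho = 1.0 gives 0.80 versus 0.40, the figures quoted in the text.
```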
4.2 The Dirichlet Distribution
In this section, we describe how the agent may use its direct
experience of previous contracts, and the standard results of the Dirichlet
distribution, to determine an estimate of the probability that each
contract dimension will be successfully fulfilled, ˆp(X), and a
measure of the uncertainties in these probabilities that expresses the
correlations between the contract dimensions, Cov(p(X)).
We first consider the calculation of ˆp(X) and the diagonal terms
of the covariance matrix Cov(p(X)). As described above, the
derivation of these results is identical to the case of the single
dimensional beta distribution, where out of N contract outcomes,
n are successfully fulfilled. In the multi-dimensional case,
however, we have a vector {na, nb, nc, . . .} that represents the number
of outcomes for which each of the individual contract dimensions
were successfully fulfilled. Thus, in terms of the standard Dirichlet
parameters where αa = na + 1 and α0 = N + 2, the agent can
estimate the probability of this contract dimension being successfully
fulfilled:
\hat{p}(o_a = 1) = \frac{\alpha_a}{\alpha_0} = \frac{n_a + 1}{N + 2}
and can also calculate the variance in any contract dimension:
V_a = \frac{\alpha_a (\alpha_0 - \alpha_a)}{\alpha_0^2 (1 + \alpha_0)} = \frac{(n_a + 1)(N - n_a + 1)}{(N + 2)^2 (N + 3)}
However, calculating the off-diagonal terms within Cov(p(X)) is
more complex since it is necessary to consider the correlations
between the contract dimensions. Thus, for each pair of dimensions
(i.e. a and b), we must consider all possible combinations of
contract outcomes, and thus we define n^{ab}_{ij} as the number of contract outcomes for which both o_a = i and o_b = j. For example, n^{ab}_{10} represents the number of contracts for which o_a = 1 and o_b = 0. Now, using the standard Dirichlet notation, we can define \alpha^{ab}_{ij} \equiv n^{ab}_{ij} + 1 for all i and j taking values 0 and 1, and then, to calculate the cross-correlations between contract pairs a and b, we note that the Dirichlet distribution over pair-wise joint probabilities is:
\mathrm{Prob}(\mathbf{p}_{ab}) = K_{ab} \prod_{i \in \{0,1\}} \prod_{j \in \{0,1\}} p(o_a = i, o_b = j)^{\alpha^{ab}_{ij} - 1}
where:
\sum_{i \in \{0,1\}} \sum_{j \in \{0,1\}} p(o_a = i, o_b = j) = 1
and Kab is a normalising constant [3]. From this we can derive
pair-wise probability estimates and variances:
E[p(o_a = i, o_b = j)] = \frac{\alpha^{ab}_{ij}}{\alpha_0} \qquad (13)
V[p(o_a = i, o_b = j)] = \frac{\alpha^{ab}_{ij} (\alpha_0 - \alpha^{ab}_{ij})}{\alpha_0^2 (1 + \alpha_0)} \qquad (14)
where:
\alpha_0 = \sum_{i \in \{0,1\}} \sum_{j \in \{0,1\}} \alpha^{ab}_{ij} \qquad (15)
and in fact, \alpha_0 = N + 2, where N is the total number of contracts observed. Likewise, we can express the covariance in these pair-wise probabilities in similar terms:
C[p(o_a = i, o_b = j), p(o_a = m, o_b = n)] = \frac{-\alpha^{ab}_{ij} \, \alpha^{ab}_{mn}}{\alpha_0^2 (1 + \alpha_0)}
Finally, we can use the expression:
p(o_a = 1) = \sum_{j \in \{0,1\}} p(o_a = 1, o_b = j)
to determine the covariance C_{ab}. To do so, we first simplify the notation by defining V^{ab}_{ij} \equiv V[p(o_a = i, o_b = j)] and C^{ab}_{ijmn} \equiv C[p(o_a = i, o_b = j), p(o_a = m, o_b = n)]. The covariance for the probability of positive contract outcomes is then the covariance between \sum_{j \in \{0,1\}} p(o_a = 1, o_b = j) and \sum_{i \in \{0,1\}} p(o_a = i, o_b = 1), and thus:
C_{ab} = C^{ab}_{1001} + C^{ab}_{1101} + C^{ab}_{1011} + V^{ab}_{11}.
Thus, given a set of contract outcomes that represent the agent's
previous interactions with a supplier, we may use the Dirichlet
distribution to calculate the mean and variance of the probability of
any contract dimension being successfully fulfilled (i.e. ˆp(oa = 1)
and Va). In addition, by a somewhat more complex procedure we
can also calculate the correlations between these probabilities (i.e.
Cab). This allows us to calculate an estimate of the probability that
any contract dimension will be successfully fulfilled, ˆp(X), and
also represent the uncertainty and correlations in these probabilities
by the covariance matrix, Cov(p(X)). In turn, these results may be
used to calculate the estimate and uncertainty in the expected
utility of the contract. In the next section we present empirical results
that show that in practise this formalism yields significant
improvements in these estimates compared to the naïve approximation
using multiple independent beta distributions.
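The calculations of this section translate directly into a short numerical routine. The sketch below is our own illustration (the function and variable names are ours, not from the paper); it takes, for a two-dimensional contract, the pairwise outcome counts n^{ab}_{ij} and returns p̂(X), the variances V_a and V_b, and the covariance C_{ab}, following the expressions quoted above with α_0 = N + 2 as stated in the text.

```python
import numpy as np

def dirichlet_pairwise_trust(n_ab):
    """p_hat(X) and Cov(p(X)) for a two-dimensional contract (a sketch).

    n_ab[i][j] = number of observed contracts with o_a = i and o_b = j.
    Mirrors the expressions above: alpha_ij = n_ij + 1, alpha_0 = N + 2.
    """
    n_ab = np.asarray(n_ab, dtype=float)
    N = n_ab.sum()                              # total contracts observed
    alpha = n_ab + 1.0                          # alpha^{ab}_{ij}
    a0 = N + 2.0                                # alpha_0 as stated in the text

    n_a = n_ab[1, :].sum()                      # contracts with o_a = 1
    n_b = n_ab[:, 1].sum()                      # contracts with o_b = 1
    p_hat = np.array([(n_a + 1) / (N + 2), (n_b + 1) / (N + 2)])
    V_a = (n_a + 1) * (N - n_a + 1) / ((N + 2) ** 2 * (N + 3))
    V_b = (n_b + 1) * (N - n_b + 1) / ((N + 2) ** 2 * (N + 3))

    # Pairwise variance (eq. 14) and covariance terms for the joint probabilities.
    V = alpha * (a0 - alpha) / (a0 ** 2 * (1 + a0))

    def cov_term(ij, mn):
        return -alpha[ij] * alpha[mn] / (a0 ** 2 * (1 + a0))

    # C_ab = C_1001 + C_1101 + C_1011 + V_11
    C_ab = (cov_term((1, 0), (0, 1)) + cov_term((1, 1), (0, 1))
            + cov_term((1, 0), (1, 1)) + V[1, 1])

    return p_hat, np.array([[V_a, C_ab], [C_ab, V_b]])

# Example: 20 contracts whose two dimensions tend to succeed or fail together.
p_hat, cov = dirichlet_pairwise_trust([[6, 2],    # o_a = 0 row
                                       [3, 9]])   # o_a = 1 row
print(p_hat)   # estimates of p(o_a = 1) and p(o_b = 1)
print(cov)     # positive off-diagonal term reflects the positive correlation
```

For contracts with more than two dimensions, the same pairwise routine would be applied to each pair of dimensions to fill in the off-diagonal terms of Cov(p(X)).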
4.3 Empirical Comparison
In order to evaluate the effectiveness of our formalism, and show
the importance of the off-diagonal terms in Cov(p(X)), we
compare two approaches:
[Figure 1: Plots showing (i) the variance of the expected contract utility, Var(E[U]), and (ii) the information content, I, of the estimates, each plotted against the correlation ρ of the contract outcomes, computed using the Dirichlet distribution and multiple independent beta distributions. Results are averaged over 10^6 runs, and the error bars show the standard error in the mean.]
• Dirichlet Distribution: We use the full Dirichlet
distribution, as described above, to calculate ˆp(X) and Cov(p(X))
including all its off-diagonal terms that represent the
correlations between the contract dimensions.
• Independent Beta Distributions: We use independent beta
distributions to represent each contract dimension, in order
to calculate ˆp(X), and then, as described earlier, we
approximate Cov(p(X)) and ignore the correlations by setting all
the off-diagonal terms to zero.
We consider a two-dimensional case where u(oa = 1) = 6 and
u(ob = 1) = 2, since this allows us to plot ˆp(X) and Cov(p(X))
as ellipses in a two-dimensional plane, and thus explain the
differences between the two approaches. Specifically, we initially
allocate the agent some previous contract outcomes that represent its
direct experience with a supplier. The number of contracts is drawn
uniformly between 10 and 20, and the actual contract outcomes are
drawn from an arbitrary joint distribution intended to induce
correlations between the contract dimensions. For each set of
contracts, we use the approaches described above to calculate ˆp(X)
and Cov(p(X)), and hence, the variance in the expected contract
utility, Var(E[U]). In addition, we calculate a scalar measure of the
information content, I, of the covariance matrix Cov(p(X)), which
is a standard way of measuring the uncertainty encoded within the
covariance matrix [1]. More specifically, we calculate the
determinant of the inverse of the covariance matrix:
I = \det\!\left( \mathrm{Cov}(\mathbf{p}(X))^{-1} \right) \qquad (16)
and note that the larger the information content, the more precise
ˆp(X) will be, and thus, the better the estimate of the expected
utility that the agent is able to calculate. Finally, we use the results
[Figure 2: Examples of ˆp(X) and Cov(p(X)) plotted as second standard error ellipses, on axes p(o_a = 1) versus p(o_b = 1), for the Dirichlet distribution and for multiple independent beta distributions.]
presented in section 4.2 to calculate the actual correlation, ρ,
associated with this particular set of contract outcomes:
\rho = \frac{C_{ab}}{\sqrt{V_a V_b}} \qquad (17)
where Cab, Va and Vb are calculated as described in section 4.2.
The results of this analysis are shown in figure 1. Here we show
the values of I and Var(E[U]) calculated by the agents, plotted
against the correlation of the contract outcomes, ρ, that constituted
their direct experience. The results are averaged over 10^6
simulation runs. Note that as expected, when the dimensions of the
contract outcomes are uncorrelated (i.e. ρ = 0), then both approaches
give the same results. However, the value of using our formalism
with the full Dirichlet distribution is shown when the correlation
between the dimensions increases (either negatively or positively).
As can be seen, if we approximate the Dirichlet distribution with
multiple independent beta distributions, all of the correlation
information contained within the covariance matrix, Cov(p(X)), is
lost, and thus, the information content of the matrix is much lower.
The loss of this correlation information leads the variance of the expected utility of the contract to be incorrect (either over- or under-estimated depending on the correlation), with the exact amount of mis-estimation depending on the actual utility function chosen (i.e. the values of u(o_a = 1) and u(o_b = 1)). (Note that the plots are not smooth because, given a limited number of contract outcomes, the means of V_a and V_b do not vary smoothly with ρ.)
In addition, in figure 2 we illustrate an example of the estimates
calculated through both methods, for a single exemplar set of
contract outcomes. We represent the probability estimates, ˆp(X), and
the covariance matrix, Cov(p(X)), in the standard way as an
ellipse [1]. That is, ˆp(X) determines the position of the center of
the ellipse, and Cov(p(X)) defines its size and shape. Note that whilst
the ellipse resulting from the full Dirichlet formalism accurately
reflects the true distribution (samples of which are plotted as points),
that calculated by using multiple independent Beta distributions
(and thus ignoring the correlations) results in a much larger ellipse
that does not reflect the true distribution. The larger size of this
ellipse is a result of the off-diagonal terms of the covariance matrix
being set to zero, and corresponds to the agent miscalculating the
uncertainty in the probability of each contract dimension being
fulfilled. This, in turn, leads it to miscalculate the uncertainty in the
expected utility of a contract (shown in figure 1 as Var(E[U])).
5. COMMUNICATING REPUTATION
Having described how an individual agent can use its own direct
experience of contract outcomes in order to estimate the probability that a multi-dimensional contract will be successfully fulfilled,
we now go on to consider how agents within an open multi-agent
system can communicate these estimates to one another. This is
commonly referred to as reputation and allows agents with limited
direct experience of a supplier to make rational decisions.
Both Jøsang and Ismail, and Teacy et al. present models whereby
reputation is communicated between agents using the sufficient
statistics of the beta distribution [6, 11]. This approach is attractive since
these sufficient statistics are simple aggregations of contract
outcomes (more precisely, they are simply the total number of
contracts observed, N, and the number of these that were successfully
fulfilled, n). Under the probabilistic framework of the beta
distribution, reputation reports in this form may simply be aggregated with
an agent"s own direct experience, in order to gain a more precise
estimate based on a larger set of contract outcomes.
We can immediately extend this approach to the multi-dimensional
case considered here, by requiring that the agents exchange the
sufficient statistics of the Dirichlet distribution instead of the beta
distribution. In this case, for each pair of dimensions (i.e. a and b), the
agents must communicate a vector of contract outcomes, N, which
are the sufficient statistics of the Dirichlet distribution, given by:
\mathbf{N} = \langle n^{ab}_{ij} \rangle \quad \forall a, b,\; i \in \{0,1\},\; j \in \{0,1\} \qquad (18)
Thus, an agent is able to communicate the sufficient statistics of
its own Dirichlet distribution in terms of just 2d(d − 1) numbers
(where d is the number of contract dimensions). For instance, in
the case of three dimensions, N is given by:
\mathbf{N} = \langle n^{ab}_{00}, n^{ab}_{01}, n^{ab}_{10}, n^{ab}_{11}, n^{ac}_{00}, n^{ac}_{01}, n^{ac}_{10}, n^{ac}_{11}, n^{bc}_{00}, n^{bc}_{01}, n^{bc}_{10}, n^{bc}_{11} \rangle
and, hence, large sets of contract outcomes may be communicated
within a relatively small message size, with no loss of information.
Again, agents receiving these sufficient statistics may simply
aggregate them with their own direct experience in order to gain a
more precise estimate of the trustworthiness of a supplier.
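Because the sufficient statistics are simply counts, aggregation amounts to element-wise addition of the vectors N. A minimal sketch (our own; the variable names are illustrative) for an agent combining a received three-dimensional reputation report with its own:

```python
import numpy as np

# Pairwise outcome counts n^{xy}_{ij}, keyed by dimension pair as in eq. (18).
# Each 2x2 array is indexed [i][j] for (o_x = i, o_y = j).
own_counts = {
    "ab": np.array([[4, 1], [2, 5]]),
    "ac": np.array([[3, 2], [1, 6]]),
    "bc": np.array([[5, 0], [2, 5]]),
}
received_counts = {
    "ab": np.array([[2, 2], [1, 3]]),
    "ac": np.array([[1, 1], [2, 4]]),
    "bc": np.array([[3, 1], [0, 4]]),
}

# Aggregation: add the counts for each dimension pair (no information is lost).
aggregated = {pair: own_counts[pair] + received_counts[pair] for pair in own_counts}
print(aggregated["ab"])   # combined counts now cover both agents' contracts
```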
Finally, we note that whilst it is not the focus of our work here,
by adopting the same principled approach as Jøsang and Ismail, and
Teacy et al., many of the techniques that they have developed (such
as discounting reports from unreliable agents, and filtering
inconsistent reports from selfish agents) may be directly applied within
this multi-dimensional model. However, we now go on to consider
a new issue that arises in both the single and multi-dimensional
models, namely the problems that arise when such aggregated
sufficient statistics are propagated within decentralised agent networks.
6. RUMOUR PROPAGATION
WITHIN REPUTATION SYSTEMS
In the previous section, we described the use of sufficient
statistics to communicate reputation, and we showed that by aggregating
contract outcomes together into these sufficient statistics, a large
number of contract outcomes can be represented and
communicated in a compact form. Whilst, this is an attractive property, it
can be problematic in practise, since the individual provenance of
each contract outcome is lost in the aggregation. Thus, to ensure an
accurate estimate, the reputation system must ensure that each
observation of a contract outcome is included within the aggregated
statistics no more than once.
Within a centralised reputation system, where all agents report
their direct experience to a trusted center, such double counting of
contract outcomes is easy to avoid. However, in a decentralised
reputation system, where agents communicate reputation to one
another, and aggregate their direct experience with these reputation
reports on-the-fly, avoiding double counting is much more difficult.
[Figure 3: Example of rumour propagation in a decentralised reputation system: agent a1 sends its contract outcomes N1 to agents a2 and a3, and a2 subsequently sends its aggregate N1 + N2 to a3.]
For example, consider the case shown in figure 3 where three
agents (a1 . . . a3), each with some direct experience of a supplier,
share reputation reports regarding this supplier. If agent a1 were
to provide its estimate to agents a2 and a3 in the form of the
sufficient statistics of its Dirichlet distribution, then these agents can
aggregate these contract outcomes with their own, and thus obtain
more precise estimates. If at a later stage, agent a2 were to send
its aggregate vector of contract outcomes to agent a3, then agent
a3, being unaware of the full history of exchanges, may attempt to
combine these contract outcomes with its own aggregated vector.
However, since both vectors contain a contribution from agent a1,
these will be counted twice in the final aggregated vector, and will
result in a biased and overconfident estimate. This is termed
rumour propagation or data incest in the data fusion literature [9].
One possible solution would be to uniquely identify the source of
each contract outcome, and then propagate each vector, along with
its label, through the network. Agents can thus identify identical
observations that have arrived through different routes, and after
removing the duplicates, can aggregate these together to form their
estimates. Whilst this appears to be attractive in principle, for a
number of reasons, it is not always a viable solution in practise [12].
Firstly, agents may not actually wish to have their uniquely labelled
contract outcomes passed around an open system, since such
information may have commercial or practical significance that could
be used to their disadvantage. As such, agents may only be willing
to exchange identifiable contract outcomes with a small number of
other agents (perhaps those that they have some sort of reciprocal
relationship with). Secondly, the fact that there is no aggregation
of the contract outcomes as they pass around the network means
that the message size increases over time, and the ultimate size of
these messages is bounded only by the number of agents within the
system (possibly an extremely large number for a global system).
Finally, it may actually be difficult to assign globally agreeable,
consistent, and unique labels for each agent within an open system.
In the next section, we develop a novel solution to the problem of
rumour propagation within decentralised reputation systems. Our
solution is based on an approach developed within the area of target
tracking and data fusion [9]. It avoids the need to uniquely identify
an agent, it allows agents to restrict the number of other agents
who they reveal their private estimates to, and yet it still allows
information to propagate throughout the network.
6.1 Private and Shared Information
Our solution to rumour propagation within decentralised reputation
systems introduces the notion of private information that an agent
knows it has not communicated to any other agent, and shared
information that has been communicated to, or received from,
another agent. Thus, the agent can decompose its contract outcome
vector, N, into two vectors, a private one, Np, that has not been
communicated to another agent, and a shared one, Ns, that has
been shared with, or received from, another agent:
\mathbf{N} = \mathbf{N}_p + \mathbf{N}_s \qquad (19)
Now, whenever an agent communicates reputation, it communicates both its private and shared vectors separately. Both the originating and receiving agents then update their two vectors
appropriately. To understand this, consider the case that agent a_α sends its private and shared contract outcome vectors, N^α_p and N^α_s, to agent a_β that itself has private and shared contract outcomes N^β_p and N^β_s. Each agent updates its vectors of contract outcomes according to the following procedure:
• Originating Agent: Once the originating agent has sent both its shared and private contract outcome vectors to another agent, its private information is no longer private. Thus, it must remove the contract outcomes that were in its private vector, and add them into its shared vector:
N^α_s ← N^α_s + N^α_p
N^α_p ← ∅.
• Receiving Agent: The goal of the receiving agent is to accumulate the largest number of contract outcomes (since this will result in the most precise estimate) without including shared information from both itself and the other agent (since this may result in double counting of contract outcomes). It has two choices depending on the total number of contract outcomes (which may be calculated as N = n^{ab}_{00} + n^{ab}_{01} + n^{ab}_{10} + n^{ab}_{11}) within its own shared vector, N^β_s, and within that of the originating agent, N^α_s. Thus, it updates its vector according to the procedure below:
- N^β_s > N^α_s: If the receiving agent's shared vector represents a greater number of contract outcomes than the shared vector of the originating agent, then the agent combines its shared vector with the private vector of the originating agent:
N^β_s ← N^β_s + N^α_p
N^β_p unchanged.
- N^β_s < N^α_s: Alternatively, if the receiving agent's shared vector represents a smaller number of contract outcomes than the shared vector of the originating agent, then the receiving agent discards its own shared vector and forms a new one from both the private and shared vectors of the originating agent:
N^β_s ← N^α_s + N^α_p
N^β_p unchanged.
In the case that N^β_s = N^α_s, either option is appropriate. Once the receiving agent has updated its sets, it uses the contract outcomes within both to form its trust estimate. If agents receive several vectors simultaneously, this approach generalises to the receiving agent using the largest shared vector, and the private vectors of itself and all the originating agents, to form its new shared vector (a code sketch of this exchange is given below).
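The following Python sketch is our own illustration of this exchange (the function and variable names are ours), with the count vectors represented as flat numpy arrays:

```python
import numpy as np

def send(originator):
    """Originating agent: after sending, its private outcomes become shared."""
    report = {"private": originator["private"].copy(),
              "shared": originator["shared"].copy()}
    originator["shared"] = originator["shared"] + originator["private"]
    originator["private"] = np.zeros_like(originator["private"])
    return report

def receive(receiver, report):
    """Receiving agent: combine vectors without double counting shared outcomes."""
    if report["shared"].sum() > receiver["shared"].sum():
        # Adopt the originator's (larger) shared vector plus its private vector.
        receiver["shared"] = report["shared"] + report["private"]
    else:
        # Keep own shared vector and add only the originator's private outcomes.
        receiver["shared"] = receiver["shared"] + report["private"]
    # The receiver's own private vector is unchanged in both cases.

# Example with two agents; the arrays stand for aggregated outcome vectors N.
a1 = {"private": np.array([3, 1, 2, 4]), "shared": np.zeros(4)}
a2 = {"private": np.array([1, 0, 1, 2]), "shared": np.zeros(4)}
receive(a2, send(a1))
print(a1, a2)   # a1's outcomes are now shared; a2 counts them exactly once
```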
This procedure has a number of attractive properties. Firstly, since
contract outcomes in an agent"s shared vector are never combined
with those in the shared vector of another agent, outcomes that
originated from the same agent are never combined together, and
thus, rumour propagation is completely avoided. However, since
the receiving agent may discard its own shared vector, and adopt
the shared vector of the originating agent, information is still
propagated around the network. Moreover, since contract outcomes are
aggregated together within the private and shared vectors, the
message size is constant and does not increase as the number of
interactions increases. Finally, an agent only communicates its own
private contract outcomes to its immediate neighbours. If this agent
subsequently passes it on, it does so as unidentifiable aggregated
information within its shared information. Thus, an agent may limit
the number of agents with which it is willing to reveal
identifiable contract outcomes, and yet these contract outcomes can still
propagate within the network, and thus, improve estimates of other
agents. Next, we demonstrate empirically that these properties can
indeed be realised in practise.
6.2 Empirical Comparison
In order to evaluate the effectiveness of this procedure we
simulated random networks consisting of ten agents. Each agent has
some direct experience of interacting with a supplier (as described
in section 4.3). At each iteration of the simulation, it interacts with
its immediate neighbours and exchanges reputation reports through
the sufficient statistics of their Dirichlet distributions. We compare
our solution to two of the most obvious decentralised alternatives:
• Private and Shared Information: The agents follow the
procedure described in the previous section. That is, they
maintain separate private and shared vectors of contract
outcomes, and at each iteration they communicate both these
vectors to their immediate neighbours.
• Rumour Propagation: The agents do not differentiate
between private and shared contract outcomes. At the first
iteration they communicate all of the contract outcomes that
constitute their direct experience. In subsequent iterations,
they propagate contract outcomes that they receive from any
of the neighbours, to all their other immediate neighbours.
• Private Information Only: The agents only communicate
the contract outcomes that constitute their direct experience.
In all cases, at each iteration, the agents use the Dirichlet
distribution in order to calculate their trust estimates. We compare these
three decentralised approaches to a centralised reputation system:
• Centralised Reputation: All the agents pass their direct
experience to a centralised reputation system that aggregates
them together, and passes this estimate back to each agent.
This centralised solution makes the most effective use of
information available in the network. However, most real world
problems demand decentralised solutions due to scalability,
modularity and communication concerns. Thus, this centralised solution
is included since it represents the optimal case, and allows us to
benchmark our decentralised solution.
The results of these comparisons are shown in figure 4. Here
we show the sum of the information content of each agent's
covariance matrix (calculated as discussed earlier in section 4.3), for
each of these four different approaches. We first note that where
private information only is communicated, there is no change in
information after the first iteration. Once each agent has received the
direct experience of its immediate neighbours, no further increase
in information can be achieved. This represents the minimum
communication, and it exhibits the lowest total information of the four
cases. Next, we note that in the case of rumour propagation, the
information content increases continually, and rapidly exceeds the
centralised reputation result. The fact that the rumour propagation
case incorrectly exceeds this limit, indicates that it is continuously
counting the same contract outcomes as they cycle around the
network, in the belief that they are independent events. Finally, we
note that using private and shared information represents a
compromise between the private information only case and the centralised
reputation case. Information is still allowed to propagate around
the network, however rumours are eliminated.
As before, we also plot a single instance of the trust estimates
from one agent (i.e. ˆp(X) and Cov(p(X))) as a set of ellipses on a
[Figure 4: Sum of information over all agents as a function of the communication iteration (log scale, roughly 10^4 to 10^10), for Private & Shared Information, Rumour Propagation, Private Information Only, and Centralised Reputation.]
two-dimensional plane (along with samples from the true
distribution). As expected, the centralised reputation system achieves the
best estimate of the true distribution, since it uses the direct
experience of all agents. The private information only case shows the
largest ellipse since it propagates the least information around the
network. The rumour propagation case shows the smallest ellipse,
but it is inconsistent with the actual distribution p(X). Thus,
propagating rumours around the network and double counting contract
outcomes in the belief that they are independent events, results in
an overconfident estimate. However, we note that our solution,
using separate vectors of private and shared information, allows us to
propagate more information than the private information only case,
but we completely avoid the problems of rumour propagation.
Finally, we consider the effect that this has on the agents'
calculation of the expected utility of the contract. We assume the same
utility function as used in section 4.3 (i.e. u(oa = 1) = 6 and
u(ob = 1) = 2), and in table 1 we present the estimate of the
expected utility, and its standard deviation calculated for all four cases
by a single agent at iteration five (after communication has ceased
to have any further effect for all methods other than rumour
propagation). We note that the rumour propagation case is clearly
inconsistent with the centralised reputation system, since its standard
deviation is too small and does not reflect the true uncertainty in
the expected utility, given the contract outcomes. However, we
observe that our solution represents the closest case to the centralised
reputation system, and thus succeeds in propagating information
throughout the network, whilst also avoiding bias and
overconfidence. The exact difference between it and the centralised
reputation system depends upon the topology of the network, and the
history of exchanges that take place within it.
7. CONCLUSIONS
In this paper we addressed the need for a principled probabilistic
model of computational trust that deals with contracts that have
multiple correlated dimensions. Our starting point was an agent
estimating the expected utility of a contract, and we showed that this
leads to a model of computational trust that uses the Dirichlet
distribution to calculate a trust estimate from the direct experience of an
agent. We then showed how agents may use the sufficient statistics
of this Dirichlet distribution to represent and communicate
reputation within a decentralised reputation system, and we presented a
solution to rumour propagation within these systems.
Our future work in this area is to extend the exchange of
reputation to the case where contracts are not homogeneous. That
is, not all agents observe the same contract dimensions. This is
a challenging extension, since in this case, the sufficient statistics
of the Dirichlet distribution can not be used directly. However, by
[Figure 5: Instances of ˆp(X) and Cov(p(X)) plotted as second standard error ellipses after 5 communication iterations, on axes p(o_a = 1) versus p(o_b = 1), for Private & Shared Information, Rumour Propagation, Private Information Only, and Centralised Reputation.]
Method                           E[E[U]] ± √Var(E[U])
Private and Shared Information   3.18 ± 0.54
Rumour Propagation               3.33 ± 0.07
Private Information Only         3.20 ± 0.65
Centralised Reputation           3.17 ± 0.42

Table 1: Estimated expected utility and its standard error as calculated by a single agent after 5 communication iterations.
addressing this challenge, we hope to be able to apply these
techniques to a setting in which a supplier provides a range of services
whose failures are correlated, and agents only have direct
experiences of different subsets of these services.
8. ACKNOWLEDGEMENTS
This research was undertaken as part of the ALADDIN (Autonomous
Learning Agents for Decentralised Data and Information Networks)
project and is jointly funded by a BAE Systems and EPSRC
strategic partnership (EP/C548051/1).
9. REFERENCES
[1] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan. Estimation with Applications to
Tracking and Navigation. Wiley Interscience, 2001.
[2] C. Boutilier. The foundations of expected expected utility. In Proc. of the 4th
Int. Joint Conf. on Artificial Intelligence, pages 285-290, Acapulco,
Mexico, 2003.
[3] M. Evans, N. Hastings, and B. Peacock. Statistical Distributions. John Wiley
& Sons, Inc., 1993.
[4] N. Griffiths. Task delegation using experience-based multi-dimensional trust.
In Proc. of the 4th Int. Joint Conf. on Autonomous Agents and Multiagent
Systems, pages 489-496, New York, USA, 2005.
[5] N. Gukrai, D. DeAngelis, K. K. Fullam, and K. S. Barber. Modelling
multi-dimensional trust. In Proc. of the 9th Int. Workshop on Trust in Agent
Systems, Hakodate, Japan, 2006.
[6] A. Jøsang and R. Ismail. The beta reputation system. In Proc. of the 15th Bled
Electronic Commerce Conf., pages 324-337, Bled, Slovenia, 2002.
[7] E. M. Maximilien and M. P. Singh. Agent-based trust model involving
multiple qualities. In Proc. of the 4th Int. Joint Conf. on Autonomous Agents
and Multiagent Systems, pages 519-526, Utrecht, The Netherlands, 2005.
[8] S. D. Ramchurn, D. Hunyh, and N. R. Jennings. Trust in multi-agent systems.
Knowledge Engineering Review, 19(1):1-25, 2004.
[9] S. Reece and S. Roberts. Robust, low-bandwidth, multi-vehicle mapping. In
Proc. of the 8th Int. Conf. on Information Fusion, Philadelphia, USA, 2005.
[10] J. Sabater and C. Sierra. REGRET: A reputation model for gregarious
societies. In Proc. of the 4th Workshop on Deception, Fraud and Trust in Agent
Societies, pages 61-69, Montreal, Canada, 2001.
[11] W. T. L. Teacy, J. Patel, N. R. Jennings, and M. Luck. TRAVOS: Trust and
reputation in the context of inaccurate information sources. Autonomous
Agents and Multi-Agent Systems, 12(2):183-198, 2006.
[12] S. Utete. Network Management in Decentralised Sensing Systems. PhD thesis,
University of Oxford, UK, 1994.
| dirichlet distribution;rumour propogation;double counting;datum fusion;overconfidence;multi-dimensional trust;trust model;probability theory;reputation system;correlation;rumour propagation;heuristic;anonymity |
train_I-58 | An Efficient Heuristic Approach for Security Against Multiple Adversaries | In adversarial multiagent domains, security, commonly defined as the ability to deal with intentional threats from other agents, is a critical issue. This paper focuses on domains where these threats come from unknown adversaries. These domains can be modeled as Bayesian games; much work has been done on finding equilibria for such games. However, it is often the case in multiagent security domains that one agent can commit to a mixed strategy which its adversaries observe before choosing their own strategies. In this case, the agent can maximize reward by finding an optimal strategy, without requiring equilibrium. Previous work has shown this problem of optimal strategy selection to be NP-hard. Therefore, we present a heuristic called ASAP, with three key advantages to address the problem. First, ASAP searches for the highest-reward strategy, rather than a Bayes-Nash equilibrium, allowing it to find feasible strategies that exploit the natural first-mover advantage of the game. Second, it provides strategies which are simple to understand, represent, and implement. Third, it operates directly on the compact, Bayesian game representation, without requiring conversion to normal form. We provide an efficient Mixed Integer Linear Program (MILP) implementation for ASAP, along with experimental results illustrating significant speedups and higher rewards over other approaches. | 1. INTRODUCTION
In many multiagent domains, agents must act in order to
provide security against attacks by adversaries. A common issue that
agents face in such security domains is uncertainty about the
adversaries they may be facing. For example, a security robot may
need to make a choice about which areas to patrol, and how often
[16]. However, it will not know in advance exactly where a robber
will choose to strike. A team of unmanned aerial vehicles (UAVs)
[1] monitoring a region undergoing a humanitarian crisis may also
need to choose a patrolling policy. They must make this decision
without knowing in advance whether terrorists or other adversaries
may be waiting to disrupt the mission at a given location. It may
indeed be possible to model the motivations of types of adversaries
the agent or agent team is likely to face in order to target these
adversaries more closely. However, in both cases, the security robot
or UAV team will not know exactly which kinds of adversaries may
be active on any given day.
A common approach for choosing a policy for agents in such
scenarios is to model the scenarios as Bayesian games. A Bayesian
game is a game in which agents may belong to one or more types;
the type of an agent determines its possible actions and payoffs.
The distribution of adversary types that an agent will face may
be known or inferred from historical data. Usually, these games
are analyzed according to the solution concept of a Bayes-Nash
equilibrium, an extension of the Nash equilibrium for Bayesian
games. However, in many settings, a Nash or Bayes-Nash
equilibrium is not an appropriate solution concept, since it assumes that
the agents" strategies are chosen simultaneously [5].
In some settings, one player can (or must) commit to a strategy
before the other players choose their strategies. These scenarios are
known as Stackelberg games [6]. In a Stackelberg game, a leader
commits to a strategy first, and then a follower (or group of
followers) selfishly optimize their own rewards, considering the action
chosen by the leader. For example, the security agent (leader) must
first commit to a strategy for patrolling various areas. This strategy
could be a mixed strategy in order to be unpredictable to the
robbers (followers). The robbers, after observing the pattern of patrols
over time, can then choose their strategy (which location to rob).
Often, the leader in a Stackelberg game can attain a higher
reward than if the strategies were chosen simultaneously. To see the
advantage of being the leader in a Stackelberg game, consider a
simple game with the payoff table as shown in Table 1. The leader
is the row player and the follower is the column player. Here, the
leader's payoff is listed first.
      1       2       3
1    5,5     0,0     3,10
2    0,0     2,2     5,0

Table 1: Payoff table for example normal form game.
The only Nash equilibrium for this game is when the leader plays
2 and the follower plays 2 which gives the leader a payoff of 2.
However, if the leader commits to a uniform mixed strategy of
playing 1 and 2 with equal (0.5) probability, the follower's best response is to play 3 to get an expected payoff of 5 (10 and 0 with equal probability). The leader's payoff would then be 4 (3 and 5
with equal probability). In this case, the leader now has an
incentive to deviate and choose a pure strategy of 2 (to get a payoff of
5). However, this would cause the follower to deviate to strategy
2 as well, resulting in the Nash equilibrium. Thus, by committing
to a strategy that is observed by the follower, and by avoiding the
temptation to deviate, the leader manages to obtain a reward higher
than that of the best Nash equilibrium.
The problem of choosing an optimal strategy for the leader to
commit to in a Stackelberg game is analyzed in [5] and found to
be NP-hard in the case of a Bayesian game with multiple types of
followers. Thus, efficient heuristic techniques for choosing
high-reward strategies in these games are an important open issue.
Methods for finding optimal leader strategies for non-Bayesian games
[5] can be applied to this problem by converting the Bayesian game
into a normal-form game by the Harsanyi transformation [8]. If, on
the other hand, we wish to compute the highest-reward Nash
equilibrium, new methods using mixed-integer linear programs (MILPs)
[17] may be used, since the highest-reward Bayes-Nash
equilibrium is equivalent to the corresponding Nash equilibrium in the
transformed game. However, by transforming the game, the
compact structure of the Bayesian game is lost. In addition, since the
Nash equilibrium assumes a simultaneous choice of strategies, the
advantages of being the leader are not considered.
This paper introduces an efficient heuristic method for
approximating the optimal leader strategy for security domains, known as
ASAP (Agent Security via Approximate Policies). This method has
three key advantages. First, it directly searches for an optimal
strategy, rather than a Nash (or Bayes-Nash) equilibrium, thus allowing
it to find high-reward non-equilibrium strategies like the one in the
above example. Second, it generates policies with a support which
can be expressed as a uniform distribution over a multiset of fixed
size as proposed in [12]. This allows for policies that are simple
to understand and represent [12], as well as a tunable parameter
(the size of the multiset) that controls the simplicity of the policy.
Third, the method allows for a Bayes-Nash game to be expressed
compactly without conversion to a normal-form game, allowing for
large speedups over existing Nash methods such as [17] and [11].
The rest of the paper is organized as follows. In Section 2 we
fully describe the patrolling domain and its properties. Section 3
introduces the Bayesian game, the Harsanyi transformation, and
existing methods for finding an optimal leader"s strategy in a
Stackelberg game. Then, in Section 4 the ASAP algorithm is presented
for normal-form games, and in Section 5 we show how it can be
adapted to the structure of Bayesian games with uncertain
adversaries. Experimental results showing higher reward and faster
policy computation over existing Nash methods are shown in Section
6, and we conclude with a discussion of related work in Section 7.
2. THE PATROLLING DOMAIN
In most security patrolling domains, the security agents (like
UAVs [1] or security robots [16]) cannot feasibly patrol all areas all
the time. Instead, they must choose a policy by which they patrol
various routes at different times, taking into account factors such as
the likelihood of crime in different areas, possible targets for crime,
and the security agents" own resources (number of security agents,
amount of available time, fuel, etc.). It is usually beneficial for
this policy to be nondeterministic so that robbers cannot safely rob
certain locations, knowing that they will be safe from the security
agents [14]. To demonstrate the utility of our algorithm, we use a
simplified version of such a domain, expressed as a game.
The most basic version of our game consists of two players: the
security agent (the leader) and the robber (the follower) in a world
consisting of m houses, 1 . . . m. The security agent"s set of pure
strategies consists of possible routes of d houses to patrol (in an
order). The security agent can choose a mixed strategy so that the
robber will be unsure of exactly where the security agent may
patrol, but the robber will know the mixed strategy the security agent
has chosen. For example, the robber can observe over time how
often the security agent patrols each area. With this knowledge, the
robber must choose a single house to rob. We assume that the
robber generally takes a long time to rob a house. If the house chosen
by the robber is not on the security agent"s route, then the robber
successfully robs it. Otherwise, if it is on the security agent"s route,
then the earlier the house is on the route, the easier it is for the
security agent to catch the robber before he finishes robbing it.
We model the payoffs for this game with the following variables:
• v_{l,x}: value of the goods in house l to the security agent.
• v_{l,q}: value of the goods in house l to the robber.
• c_x: reward to the security agent of catching the robber.
• c_q: cost to the robber of getting caught.
• p_l: probability that the security agent can catch the robber at the l-th house in the patrol (p_l < p_{l'} ⇐⇒ l < l').
The security agent's set of possible pure strategies (patrol routes) is denoted by X and includes all d-tuples i = ⟨w_1, w_2, ..., w_d⟩ with w_1, ..., w_d ∈ {1, ..., m} where no two elements are equal (the agent is not allowed to return to the same house). The robber's set of possible pure strategies (houses to rob) is denoted by Q and includes all integers j = 1 . . . m. The payoffs (security agent, robber) for pure strategies i, j are:
• (−v_{l,x}, v_{l,q}), for j = l ∉ i.
• (p_l c_x + (1 − p_l)(−v_{l,x}), −p_l c_q + (1 − p_l) v_{l,q}), for j = l ∈ i.
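To make this payoff structure concrete, the short sketch below (our own illustration; all names are ours) computes the (security agent, robber) payoffs for a given patrol route and robbed house, using the variable values stated for robber type a in Section 3.1:

```python
def patrol_payoffs(route, house, v_x, v_q, c_x, c_q, p):
    """(security agent, robber) payoffs for one patrol route and one robbed house.

    route   : ordered tuple of patrolled houses
    house   : the house the robber chooses to rob
    v_x, v_q: dicts of house values for the agent and the robber
    c_x, c_q: reward for catching / cost of being caught
    p       : dict mapping patrol position (1-based) to catch probability
    """
    if house not in route:
        return -v_x[house], v_q[house]
    pos = route.index(house) + 1          # earlier positions catch more often
    p_l = p[pos]
    return (p_l * c_x + (1 - p_l) * (-v_x[house]),
            -p_l * c_q + (1 - p_l) * v_q[house])

# Two-house example with the values quoted for robber type a in Section 3.1.
v_x = v_q = {1: 0.75, 2: 0.25}
print(patrol_payoffs((1, 2), 1, v_x, v_q, c_x=0.5, c_q=1.0, p={1: 1.0, 2: 0.5}))
# agent payoff 0.5, robber payoff -1.0: the robber is caught at house 1,
# matching the (-1, .5) entry of Table 2.
```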
With this structure it is possible to model many different types
of robbers who have differing motivations; for example, one robber
may have a lower cost of getting caught than another, or may value
the goods in the various houses differently. If the distribution of
different robber types is known or inferred from historical data,
then the game can be modeled as a Bayesian game [6].
3. BAYESIAN GAMES
A Bayesian game contains a set of N agents, and each agent n
must be one of a given set of types θn. For our patrolling domain,
we have two agents, the security agent and the robber. θ1 is the set
of security agent types and θ2 is the set of robber types. Since there
is only one type of security agent, θ1 contains only one element.
During the game, the robber knows its type but the security agent
does not know the robber"s type. For each agent (the security agent
or the robber) n, there is a set of strategies σ_n and a utility function u_n : θ_1 × θ_2 × σ_1 × σ_2 → ℝ.
A Bayesian game can be transformed into a normal-form game
using the Harsanyi transformation [8]. Once this is done, new,
linear-program (LP)-based methods for finding high-reward
strategies for normal-form games [5] can be used to find a strategy in the
transformed game; this strategy can then be used for the Bayesian
game. While methods exist for finding Bayes-Nash equilibria
directly, without the Harsanyi transformation [10], they find only a
single equilibrium in the general case, which may not be of high
reward. Recent work [17] has led to efficient mixed-integer linear
program techniques to find the best Nash equilibrium for a given
agent. However, these techniques do require a normal-form game,
and so to compare the policies given by ASAP against the optimal
policy, as well as against the highest-reward Nash equilibrium, we
must apply these techniques to the Harsanyi-transformed matrix.
The next two subsections elaborate on how this is done.
3.1 Harsanyi Transformation
The first step in solving Bayesian games is to apply the Harsanyi
transformation [8] that converts the Bayesian game into a normal
form game. Given that the Harsanyi transformation is a standard
concept in game theory, we explain it briefly through a simple
example in our patrolling domain without introducing the
mathematical formulations. Let us assume there are two robber types a and
b in the Bayesian game. Robber a will be active with probability
α, and robber b will be active with probability 1 − α. The rules
described in Section 2 allow us to construct simple payoff tables.
Assume that there are two houses in the world (1 and 2) and
hence there are two patrol routes (pure strategies) for the agent:
{1,2} and {2,1}. The robber can rob either house 1 or house 2
and hence he has two strategies (denoted as 1l, 2l for robber type
l). Since there are two types assumed (denoted as a and b), we
construct two payoff tables (shown in Table 2) corresponding to
the security agent playing a separate game with each of the two
robber types with probabilities α and 1 − α. First, consider robber
type a. Borrowing the notation from the domain section, we assign
the following values to the variables: v1,x = v1,q = 3/4, v2,x =
v2,q = 1/4, cx = 1/2, cq = 1, p1 = 1, p2 = 1/2. Using these
values we construct a base payoff table as the payoff for the game
against robber type a. For example, if the security agent chooses
route {1,2} when robber a is active, and robber a chooses house 1,
the robber receives a reward of -1 (for being caught) and the agent
receives a reward of 0.5 for catching the robber. The payoffs for the
game against robber type b are constructed using different values.
Security agent:      {1,2}            {2,1}
Robber a
  1a              -1, .5           -.375, .125
  2a              -.125, -.125     -1, .5
Robber b
  1b              -.9, .6          -.275, .225
  2b              -.025, -.025     -.9, .6

Table 2: Payoff tables: Security Agent vs Robbers a and b
Using the Harsanyi technique involves introducing a chance node that determines the robber's type, thus transforming the security agent's incomplete information regarding the robber into imperfect
information [3]. The Bayesian equilibrium of the game is then
precisely the Nash equilibrium of the imperfect information game. The
transformed, normal-form game is shown in Table 3. In the
transformed game, the security agent is the column player, and the set
of all robber types together is the row player. Suppose that robber
type a robs house 1 and robber type b robs house 2, while the
security agent chooses patrol {1,2}. Then, the security agent and the
robber receive an expected payoff corresponding to their payoffs
from the agent encountering robber a at house 1 with probability α
and robber b at house 2 with probability 1 − α.
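A compact way to see the transformation computationally: given one payoff matrix per robber type and the type probabilities, the transformed game has one row per combination of type strategies and one column per patrol route, with payoffs averaged over types. The sketch below is our own illustration (the names are ours), not code from the paper:

```python
import itertools
import numpy as np

def harsanyi_transform(agent_payoffs, robber_payoffs, type_probs):
    """Build the normal-form game of Table 3 from per-type payoff matrices.

    agent_payoffs[t][j, i], robber_payoffs[t][j, i]: payoffs when robber type t
    plays strategy j and the security agent plays patrol i.
    Returns matrices indexed by (joint robber strategy, agent strategy).
    """
    n_types = len(type_probs)
    n_rob = agent_payoffs[0].shape[0]       # robber strategies per type
    n_agent = agent_payoffs[0].shape[1]     # agent patrol routes
    rows = list(itertools.product(range(n_rob), repeat=n_types))
    R = np.zeros((len(rows), n_agent))      # security agent payoffs
    C = np.zeros((len(rows), n_agent))      # combined robber payoffs
    for r, joint in enumerate(rows):
        for i in range(n_agent):
            R[r, i] = sum(p * agent_payoffs[t][joint[t], i]
                          for t, p in enumerate(type_probs))
            C[r, i] = sum(p * robber_payoffs[t][joint[t], i]
                          for t, p in enumerate(type_probs))
    return rows, R, C

# Per-type payoffs from Table 2 (rows: rob house 1 or 2; cols: {1,2}, {2,1}).
rob_a = np.array([[-1.0, -0.375], [-0.125, -1.0]])
agt_a = np.array([[0.5, 0.125], [-0.125, 0.5]])
rob_b = np.array([[-0.9, -0.275], [-0.025, -0.9]])
agt_b = np.array([[0.6, 0.225], [-0.025, 0.6]])

rows, R, C = harsanyi_transform([agt_a, agt_b], [rob_a, rob_b], type_probs=[0.5, 0.5])
print(R)   # evaluates the entries of Table 3 at alpha = 0.5
```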
3.2 Finding an Optimal Strategy
Although a Nash equilibrium is the standard solution concept for
games in which agents choose strategies simultaneously, in our
security domain, the security agent (the leader) can gain an advantage
by committing to a mixed strategy in advance. Since the followers
(the robbers) will know the leader"s strategy, the optimal response
for the followers will be a pure strategy. Given the common assumption, taken in [5], that in the case where followers are indifferent, they will choose the strategy that benefits the leader, there must exist a guaranteed optimal strategy for the leader [5].
From the Bayesian game in Table 2, we constructed the Harsanyi
transformed bimatrix in Table 3. The strategies for each player
(security agent or robber) in the transformed game correspond to all
combinations of possible strategies taken by each of that player's types. Therefore, we denote X = σ_1^{θ_1} = σ_1 and Q = σ_2^{θ_2} as the index sets of the security agent and robbers' pure strategies respectively, with R and C as the corresponding payoff matrices. R_{ij} is the reward of the security agent and C_{ij} is the reward of the robbers when the security agent takes pure strategy i and the robbers take pure strategy j. A mixed strategy for the security agent is a probability distribution over its set of pure strategies and will be represented by a vector x = (p_{x1}, p_{x2}, \ldots, p_{x|X|}), where p_{xi} ≥ 0 and \sum_i p_{xi} = 1. Here, p_{xi} is the probability that the security agent will choose its i-th pure strategy.
The optimal mixed strategy for the security agent can be found
in time polynomial in the number of rows in the normal form game
using the following linear program formulation from [5].
For every possible pure strategy j by the follower (the set of all robber types),
\max \sum_{i \in X} p_{xi} R_{ij}
\text{s.t.} \quad \forall j' \in Q, \;\; \sum_{i \in \sigma_1} p_{xi} C_{ij} \ge \sum_{i \in \sigma_1} p_{xi} C_{ij'}
\sum_{i \in X} p_{xi} = 1
\forall i \in X, \; p_{xi} \ge 0 \qquad (1)
Then, for all feasible follower strategies j, choose the one that maximizes \sum_{i \in X} p_{xi} R_{ij}, the reward for the security agent (leader).
The pxi variables give the optimal strategy for the security agent.
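As a hedged illustration of this multiple-LPs method (not code from the paper; we assume the transformed payoff matrices R and C are available as numpy arrays and use scipy's linprog), one LP is solved per follower pure strategy and the best feasible one is kept:

```python
import numpy as np
from scipy.optimize import linprog

def optimal_leader_strategy(R, C):
    """Multiple-LPs method: R[i, j], C[i, j] are leader/follower payoffs."""
    n_leader, n_follower = R.shape
    best_value, best_x = -np.inf, None
    for j in range(n_follower):
        # Maximize sum_i x_i R[i, j]  (linprog minimizes, so negate).
        c = -R[:, j]
        # Feasibility: j must be a best response, i.e. x.C[:, j'] <= x.C[:, j].
        A_ub = (C - C[:, [j]]).T          # one row per alternative j'
        b_ub = np.zeros(n_follower)
        A_eq = np.ones((1, n_leader))     # x is a probability distribution
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, 1)] * n_leader, method="highs")
        if res.success and -res.fun > best_value:
            best_value, best_x = -res.fun, res.x
    return best_x, best_value

# Example: the normal-form game of Table 1 (leader payoffs R, follower payoffs C).
R = np.array([[5.0, 0.0, 3.0], [0.0, 2.0, 5.0]])
C = np.array([[5.0, 0.0, 10.0], [0.0, 2.0, 0.0]])
x, value = optimal_leader_strategy(R, C)
print(x, value)   # about (0.167, 0.833) with leader reward ~4.67
```

For this example game the method finds a commitment of roughly (1/6, 5/6), giving the leader about 4.67 (assuming, as above, that an indifferent follower breaks ties in the leader's favour), which improves on both the uniform commitment (reward 4) and the Nash outcome (reward 2) discussed in the introduction.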
Note that while this method is polynomial in the number of rows
in the transformed, normal-form game, the number of rows
increases exponentially with the number of robber types. Using this
method for a Bayesian game thus requires running |σ_2|^{|θ_2|} separate linear programs. This is no surprise, since finding the leader's
optimal strategy in a Bayesian Stackelberg game is NP-hard [5].
4. HEURISTIC APPROACHES
Given that finding the optimal strategy for the leader is NP-hard,
we provide a heuristic approach. In this heuristic we limit the
possible mixed strategies of the leader to select actions with
probabilities that are integer multiples of 1/k for a predetermined integer
k. Previous work [14] has shown that strategies with high entropy
are beneficial for security applications when opponents" utilities
are completely unknown. In our domain, if utilities are not
considered, this method will result in uniform-distribution strategies.
One advantage of such strategies is that they are compact to
represent (as fractions) and simple to understand; therefore they can
be efficiently implemented by real organizations. We aim to
maintain the advantage provided by simple strategies for our security
application problem, incorporating the effect of the robbers"
rewards on the security agent"s rewards. Thus, the ASAP heuristic
will produce strategies which are k-uniform. A mixed strategy is
denoted k-uniform if it is a uniform distribution on a multiset S of
pure strategies with |S| = k. A multiset is a set whose elements
may be repeated multiple times; thus, for example, the mixed
strategy corresponding to the multiset {1, 1, 2} would take strategy 1
with probability 2/3 and strategy 2 with probability 1/3.

            {1,2}                                          {2,1}
{1a, 1b}    −1α − .9(1 − α),    .5α + .6(1 − α)            −.375α − .275(1 − α), .125α + .225(1 − α)
{1a, 2b}    −1α − .025(1 − α),  .5α − .025(1 − α)          −.375α − .9(1 − α),   .125α + .6(1 − α)
{2a, 1b}    −.125α − .9(1 − α), −.125α + .6(1 − α)         −1α − .275(1 − α),    .5α + .225(1 − α)
{2a, 2b}    −.125α − .025(1 − α), −.125α − .025(1 − α)     −1α − .9(1 − α),      .5α + .6(1 − α)

Table 3: Harsanyi Transformed Payoff Table

ASAP
allows the size of the multiset to be chosen in order to balance the
complexity of the strategy reached with the goal that the identified
strategy will yield a high reward.
Another advantage of the ASAP heuristic is that it operates
directly on the compact Bayesian representation, without requiring
the Harsanyi transformation. This is because the different follower
(robber) types are independent of each other. Hence, evaluating
the leader strategy against a Harsanyi-transformed game matrix
is equivalent to evaluating against each of the game matrices for
the individual follower types. This independence property is
exploited in ASAP to yield a decomposition scheme. Note that the LP
method introduced by [5] to compute optimal Stackelberg policies
is unlikely to be decomposable into a small number of games as it
was shown to be NP-hard for Bayes-Nash problems. Finally, note
that ASAP requires the solution of only one optimization problem,
rather than solving a series of problems as in the LP method of [5].
For a single follower type, the algorithm works the following
way. Given a particular k, for each possible mixed strategy x for the
leader that corresponds to a multiset of size k, evaluate the leader"s
payoff from x when the follower plays a reward-maximizing pure
strategy. We then take the mixed strategy with the highest payoff.
We need only to consider the reward-maximizing pure
strategies of the followers (robbers), since for a given fixed strategy x
of the security agent, each robber type faces a problem with fixed
linear rewards. If a mixed strategy is optimal for the robber, then
so are all the pure strategies in the support of that mixed strategy.
Note also that because we limit the leader"s strategies to take on
discrete values, the assumption from Section 3.2 that the followers
will break ties in the leader"s favor is not significant, since ties will
be unlikely to arise. This is because, in domains where rewards are
drawn from any random distribution, the probability of a follower
having more than one pure optimal response to a given leader
strategy approaches zero, and the leader will have only a finite number
of possible mixed strategies.
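For a single follower type and small games, this procedure can be written directly as a brute-force search over k-multisets; the sketch below is our own illustration of the idea (not the MILP formulation developed later), with hypothetical names:

```python
import itertools
import numpy as np

def asap_single_follower(R, C, k):
    """Enumerate k-uniform leader policies and keep the highest-reward one.

    R[i, j], C[i, j]: leader and follower payoffs. A policy is a multiset of
    size k over the leader's pure strategies, played with probabilities x_i/k.
    """
    n_leader, _ = R.shape
    best_value, best_x = -np.inf, None
    # Each candidate assigns counts x_i >= 0 with sum_i x_i = k.
    for counts in itertools.combinations_with_replacement(range(n_leader), k):
        x = np.bincount(counts, minlength=n_leader) / k
        # Follower's best pure response (ties, which the text argues are
        # unlikely, are broken here by argmax's first index).
        follower_j = np.argmax(x @ C)
        value = x @ R[:, follower_j]        # leader's resulting expected reward
        if value > best_value:
            best_value, best_x = value, x
    return best_x, best_value

# Example: the game of Table 1 with multisets of size k = 4.
R = np.array([[5.0, 0.0, 3.0], [0.0, 2.0, 5.0]])
C = np.array([[5.0, 0.0, 10.0], [0.0, 2.0, 0.0]])
x, value = asap_single_follower(R, C, k=4)
print(x, value)   # (0.25, 0.75) with leader reward 4.5
```

For this game the best k = 4 policy yields 4.5, slightly below the unconstrained optimal commitment (~4.67 from the multiple-LPs sketch above), illustrating the trade-off between policy simplicity, controlled by k, and reward.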
Our approach to characterize the optimal strategy for the security
agent makes use of properties of linear programming. We briefly
outline these results here for completeness; for a detailed discussion and proofs, see one of many references on the topic, such as [2].
Every linear programming problem, such as
\[
\max \; c^T x \quad \text{s.t.} \quad Ax = b, \; x \ge 0,
\]
has an associated dual linear program, in this case
\[
\min \; b^T y \quad \text{s.t.} \quad A^T y \ge c.
\]
These primal/dual pairs of problems satisfy weak duality: for any
primal and dual feasible solutions x and y respectively,
c^T x ≤ b^T y. Thus a pair of feasible solutions is optimal if
c^T x = b^T y, and the problems are then said to satisfy strong
duality. In fact, if a linear program is feasible and has a bounded
optimal solution, then the dual is also feasible and there is a pair
x*, y* that satisfies c^T x* = b^T y*. These optimal solutions are
characterized by the following optimality conditions (as defined in [2]):
• primal feasibility: Ax = b, x ≥ 0
• dual feasibility: A^T y ≥ c
• complementary slackness: x_i (A^T y − c)_i = 0 for all i.
Note that this last condition implies that
\[
c^T x = x^T A^T y = b^T y,
\]
which proves optimality for primal-dual feasible solutions x and y.
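As a quick numerical illustration of these conditions (ours, not part of the original text), the following snippet solves a tiny primal/dual pair with scipy and checks that the optimal objective values coincide; since scipy's linprog minimizes, the primal objective is negated.

```python
# Check strong duality on a small LP:
#   primal: max c^T x  s.t.  Ax = b, x >= 0
#   dual:   min b^T y  s.t.  A^T y >= c   (y free)
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([4.0])

primal = linprog(-c, A_eq=A, b_eq=b, bounds=[(0, None)] * 2, method="highs")
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(None, None)], method="highs")

assert abs((-primal.fun) - dual.fun) < 1e-8    # c^T x* = b^T y*
print("optimal value:", -primal.fun)           # 12.0 for this instance
```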
In the following subsections, we first define the problem in its
most intuitive form as a mixed-integer quadratic program (MIQP),
and then show how this problem can be converted into a
mixed-integer linear program (MILP).
4.1 Mixed-Integer Quadratic Program
We begin with the case of a single type of follower. Let the
leader be the row player and the follower the column player. We
denote by x the vector of strategies of the leader and q the vector
of strategies of the follower. We also denote by X and Q the index
sets of the leader's and follower's pure strategies, respectively. The
payoff matrices R and C correspond to: Rij is the reward of the
leader and Cij is the reward of the follower when the leader takes
pure strategy i and the follower takes pure strategy j. Let k be the
size of the multiset.
We first fix the policy of the leader to some k-uniform policy
x. The value xi is the number of times pure strategy i is used in
the k-uniform policy, which is selected with probability xi/k. We
formulate the optimization problem the follower solves to find its
optimal response to x as the following linear program:
\[
\max_{q} \;\; \sum_{j \in Q} \sum_{i \in X} \frac{1}{k} \, C_{ij} \, x_i \, q_j
\qquad \text{s.t.} \quad \sum_{j \in Q} q_j = 1, \quad q \ge 0.
\tag{2}
\]
The objective function maximizes the follower's expected reward
given x, while the constraints make feasible any mixed strategy q
for the follower. The dual to this linear programming problem is
the following:
\[
\min_{a} \;\; a
\qquad \text{s.t.} \quad a \ge \sum_{i \in X} \frac{1}{k} \, C_{ij} \, x_i
\qquad \forall\, j \in Q.
\tag{3}
\]
From strong duality and complementary slackness we obtain that
the follower's maximum reward value, a, is the value of every pure
strategy with qj > 0, that is, of every pure strategy in the support
of the optimal mixed strategy. Therefore each of these pure
strategies is optimal. Optimal solutions to the follower's problem
are characterized by linear
programming optimality conditions: primal feasibility constraints
in (2), dual feasibility constraints in (3), and complementary
slackness:
\[
q_j \left( a - \sum_{i \in X} \frac{1}{k} \, C_{ij} \, x_i \right) = 0
\qquad \forall\, j \in Q.
\]
These conditions must be included in the problem solved by the
leader in order to consider only best responses by the follower to
the k-uniform policy x.
The leader seeks the k-uniform solution x that maximizes its
own payoff, given that the follower uses an optimal response q(x).
Therefore the leader solves the following integer problem:
\[
\max_{x} \;\; \sum_{i \in X} \sum_{j \in Q} \frac{1}{k} \, R_{ij} \, q(x)_j \, x_i
\qquad \text{s.t.} \quad \sum_{i \in X} x_i = k, \quad x_i \in \{0, 1, \ldots, k\}.
\tag{4}
\]
Problem (4) maximizes the leader's reward with the follower's best
response (qj for fixed leader's policy x and hence denoted q(x)j)
by selecting a uniform policy from a multiset of constant size k. We
complete this problem by including the characterization of q(x)
through linear programming optimality conditions. To simplify
writing the complementary slackness conditions, we will constrain
q(x) to be only optimal pure strategies by just considering integer
solutions of q(x). The leader's problem becomes:
\[
\begin{aligned}
\max_{x,\,q} \quad & \sum_{i \in X} \sum_{j \in Q} \frac{1}{k} \, R_{ij} \, x_i \, q_j \\
\text{s.t.} \quad & \sum_{i \in X} x_i = k \\
& \sum_{j \in Q} q_j = 1 \\
& 0 \;\le\; a - \sum_{i \in X} \frac{1}{k} \, C_{ij} \, x_i \;\le\; (1 - q_j) M
  \qquad \forall\, j \in Q \\
& x_i \in \{0, 1, \ldots, k\}, \quad q_j \in \{0, 1\}.
\end{aligned}
\tag{5}
\]
Here, the constant M is some large number. The first and fourth
constraints enforce a k-uniform policy for the leader, and the
second and fifth constraints enforce a feasible pure strategy for the
follower. The third constraint enforces dual feasibility of the
follower's problem (leftmost inequality) and the complementary
slackness constraint for an optimal pure strategy q for the follower
(rightmost inequality). In fact, since only one pure strategy can be
selected by the follower, say qh = 1, this last constraint enforces that
$a = \sum_{i \in X} \frac{1}{k} C_{ih} x_i$, imposing no additional
constraint for all other pure strategies, which have qj = 0.
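To spell out the role of the big-M constraint (a short derivation of ours, implied by the text above), consider the two cases for a fixed column j:
\[
q_j = 1:\quad 0 \le a - \sum_{i \in X} \tfrac{1}{k} C_{ij} x_i \le 0
\;\Longrightarrow\; a = \sum_{i \in X} \tfrac{1}{k} C_{ij} x_i,
\qquad\quad
q_j = 0:\quad 0 \le a - \sum_{i \in X} \tfrac{1}{k} C_{ij} x_i \le M.
\]
Thus the selected pure strategy must attain the follower's maximum reward a, while for all other columns the constraint is inactive, provided M is chosen larger than any possible reward difference.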
We conclude this subsection noting that Problem (5) is an
integer program with a non-convex quadratic objective in general,
as the matrix R need not be positive-semi-definite. Efficient
solution methods for non-linear, non-convex integer problems remain
a challenging research question. In the next section we show a
reformulation of this problem as a linear integer programming
problem, for which a number of efficient commercial solvers exist.
4.2 Mixed-Integer Linear Program
We can linearize the quadratic program of Problem 5 through the
change of variables zij = xiqj, obtaining the following problem
\[
\begin{aligned}
\max_{q,\,z} \quad & \sum_{i \in X} \sum_{j \in Q} \frac{1}{k} \, R_{ij} \, z_{ij} \\
\text{s.t.} \quad & \sum_{i \in X} \sum_{j \in Q} z_{ij} = k \\
& \sum_{j \in Q} z_{ij} \le k \qquad \forall\, i \in X \\
& k\, q_j \;\le\; \sum_{i \in X} z_{ij} \;\le\; k \qquad \forall\, j \in Q \\
& \sum_{j \in Q} q_j = 1 \\
& 0 \;\le\; a - \sum_{i \in X} \frac{1}{k} \, C_{ij} \Big( \sum_{h \in Q} z_{ih} \Big)
  \;\le\; (1 - q_j) M \qquad \forall\, j \in Q \\
& z_{ij} \in \{0, 1, \ldots, k\}, \quad q_j \in \{0, 1\}.
\end{aligned}
\tag{6}
\]
PROPOSITION 1. Problems (5) and (6) are equivalent.
Proof: Consider x, q a feasible solution of (5). We will show
that q, zij = xiqj is a feasible solution of (6) of the same objective
function value. The equivalence of the objective functions, and
constraints 4, 6 and 7 of (6), are satisfied by construction. The fact
that $\sum_{j \in Q} z_{ij} = x_i$ (since $\sum_{j \in Q} q_j = 1$)
explains constraints 1, 2, and 5 of (6). Constraint 3 of (6) is
satisfied because $\sum_{i \in X} z_{ij} = k q_j$.
Let us now consider q, z feasible for (6). We will show that q and
$x_i = \sum_{j \in Q} z_{ij}$ are feasible for (5) with the same
objective value. In fact, all constraints of (5) are readily satisfied
by construction. To see that the objectives match, notice that if
qh = 1 then the third constraint in (6) implies that
$\sum_{i \in X} z_{ih} = k$, which means that zij = 0 for all i ∈ X
and all j ≠ h. Therefore,
\[
x_i q_j = \sum_{l \in Q} z_{il} \, q_j = z_{ih} q_j = z_{ij}.
\]
This last equality holds because both sides are 0 when j ≠ h. This
shows that the transformation preserves the objective function value,
completing the proof.
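To make the formulation concrete, here is a minimal sketch of MILP (6) for a single follower type using the PuLP modeling library in Python. This is our own illustration, not the authors' implementation (the paper reports applying CPLEX 8.1); the function and parameter names are hypothetical.

```python
# Sketch of the single-follower MILP (6).  R, C: payoff matrices (lists of
# lists), k: multiset size, M: big constant exceeding any reward difference.
import pulp

def solve_asap_milp(R, C, k, M=1000.0):
    X, Q = range(len(R)), range(len(R[0]))
    prob = pulp.LpProblem("asap_single_follower", pulp.LpMaximize)
    # z[i][j] stands in for x_i * q_j; q[j] is the follower's pure response.
    z = [[pulp.LpVariable(f"z_{i}_{j}", 0, k, cat="Integer") for j in Q] for i in X]
    q = [pulp.LpVariable(f"q_{j}", cat="Binary") for j in Q]
    a = pulp.LpVariable("a")                      # follower's maximum reward value

    prob += pulp.lpSum(R[i][j] * z[i][j] for i in X for j in Q) * (1.0 / k)
    prob += pulp.lpSum(z[i][j] for i in X for j in Q) == k
    for i in X:
        prob += pulp.lpSum(z[i][j] for j in Q) <= k
    for j in Q:
        prob += pulp.lpSum(z[i][j] for i in X) >= k * q[j]
        prob += pulp.lpSum(z[i][j] for i in X) <= k
    prob += pulp.lpSum(q) == 1
    for j in Q:
        follower_val = pulp.lpSum(C[i][j] * z[i][h] for i in X for h in Q) * (1.0 / k)
        prob += a - follower_val >= 0
        prob += a - follower_val <= (1 - q[j]) * M

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    # Recover the k-uniform leader policy: x_i = sum_j z_ij, played with prob x_i/k.
    x = [sum(int(round(z[i][j].value())) for j in Q) for i in X]
    return [xi / k for xi in x], pulp.value(prob.objective)
```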
Given this transformation to a mixed-integer linear program (MILP),
we now show how we can apply our decomposition technique on
the MILP to obtain significant speedups for Bayesian games with
multiple follower types.
5. DECOMPOSITION FOR MULTIPLE
ADVERSARIES
The MILP developed in the previous section handles only one
follower. Since our security scenario contains multiple follower
(robber) types, we change the response function for the follower
from a pure strategy into a weighted combination over various pure
follower strategies where the weights are probabilities of
occurrence of each of the follower types.
5.1 Decomposed MIQP
To admit multiple adversaries in our framework, we modify the
notation defined in the previous section to reason about multiple
follower types. We denote by x the vector of strategies of the leader
and by q^l the vector of strategies of follower l, with L denoting the
index set of follower types. We also denote by X and Q the index
sets of the leader's and follower l's pure strategies, respectively. We
also index the payoff matrices on each follower l, considering the
matrices R^l and C^l.
Using this modified notation, we characterize the optimal
solution of follower l's problem, given the leader's k-uniform policy x,
with the following optimality conditions:
\[
\begin{aligned}
& \sum_{j \in Q} q^l_j = 1 \\
& a^l - \sum_{i \in X} \frac{1}{k} \, C^l_{ij} \, x_i \ge 0 \qquad \forall\, j \in Q \\
& q^l_j \Big( a^l - \sum_{i \in X} \frac{1}{k} \, C^l_{ij} \, x_i \Big) = 0 \qquad \forall\, j \in Q \\
& q^l_j \ge 0 \qquad \forall\, j \in Q
\end{aligned}
\]
Again, considering only optimal pure strategies for follower l's
problem, we can linearize the complementarity constraint above.
We incorporate these constraints into the leader's problem that
selects the optimal k-uniform policy. Therefore, given the a priori
probabilities p^l, with l ∈ L, of facing each follower type, the leader
solves the following problem:
\[
\begin{aligned}
\max_{x,\,q} \quad & \sum_{i \in X} \sum_{l \in L} \sum_{j \in Q} \frac{p^l}{k} \, R^l_{ij} \, x_i \, q^l_j \\
\text{s.t.} \quad & \sum_{i \in X} x_i = k \\
& \sum_{j \in Q} q^l_j = 1 \qquad \forall\, l \in L \\
& 0 \;\le\; a^l - \sum_{i \in X} \frac{1}{k} \, C^l_{ij} \, x_i \;\le\; (1 - q^l_j) M
  \qquad \forall\, l \in L,\ j \in Q \\
& x_i \in \{0, 1, \ldots, k\}, \quad q^l_j \in \{0, 1\}.
\end{aligned}
\tag{7}
\]
(7)
Problem (7) for a Bayesian game with multiple follower types
is indeed equivalent to Problem (5) on the payoff matrix obtained
from the Harsanyi transformation of the game. In fact, every pure
strategy j in Problem (5) corresponds to a sequence of pure
strategies j_l, one for each follower l ∈ L. This means that q_j = 1
if and only if q^l_{j_l} = 1 for all l ∈ L. In addition, given the a
priori probabilities p^l of facing player l, the reward in the Harsanyi
transformation payoff table is $R_{ij} = \sum_{l \in L} p^l R^l_{i j_l}$.
The same relation holds between C and C^l. These relations between
a pure strategy in the equivalent normal-form game and the pure
strategies in the individual games with each follower are key in
showing that these problems are equivalent.
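The practical payoff of working with the individual follower games rather than the Harsanyi-transformed game is the difference in problem size; the following back-of-the-envelope count is ours, not from the paper:
\[
\underbrace{|Q|^{|L|}}_{\text{joint follower pure strategies after the Harsanyi transformation}}
\qquad \text{versus} \qquad
\underbrace{|L|\,|Q|}_{\text{binary variables } q^l_j}
\;+\;
\underbrace{|L|\,|X|\,|Q|}_{\text{integer variables } z^l_{ij}}
\]
For instance, with |Q| = 4 follower pure strategies and |L| = 10 follower types, the transformed game has 4^10 = 1,048,576 joint follower pure strategies, whereas the decomposed formulation below needs only 10 · 4 = 40 binary response variables (plus the z variables).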
5.2 Decomposed MILP
We can linearize the quadratic programming Problem (7) through
the change of variables $z^l_{ij} = x_i q^l_j$, obtaining the following
problem:
\[
\begin{aligned}
\max_{q,\,z} \quad & \sum_{i \in X} \sum_{l \in L} \sum_{j \in Q} \frac{p^l}{k} \, R^l_{ij} \, z^l_{ij} \\
\text{s.t.} \quad & \sum_{i \in X} \sum_{j \in Q} z^l_{ij} = k \qquad \forall\, l \in L \\
& \sum_{j \in Q} z^l_{ij} \le k \qquad \forall\, i \in X,\ l \in L \\
& k\, q^l_j \;\le\; \sum_{i \in X} z^l_{ij} \;\le\; k \qquad \forall\, j \in Q,\ l \in L \\
& \sum_{j \in Q} q^l_j = 1 \qquad \forall\, l \in L \\
& 0 \;\le\; a^l - \sum_{i \in X} \frac{1}{k} \, C^l_{ij} \Big( \sum_{h \in Q} z^l_{ih} \Big)
  \;\le\; (1 - q^l_j) M \qquad \forall\, l \in L,\ j \in Q \\
& \sum_{j \in Q} z^l_{ij} = \sum_{j \in Q} z^1_{ij} \qquad \forall\, i \in X,\ l \in L \\
& z^l_{ij} \in \{0, 1, \ldots, k\}, \quad q^l_j \in \{0, 1\}.
\end{aligned}
\tag{8}
\]
PROPOSITION 2. Problems (7) and (8) are equivalent.
Proof: Consider x, q^l, a^l with l ∈ L a feasible solution of (7).
We will show that q^l, a^l, z^l_{ij} = x_i q^l_j is a feasible solution
of (8) of the same objective function value. The equivalence of the
objective functions, and constraints 4, 7 and 8 of (8), are satisfied
by construction. The fact that $\sum_{j \in Q} z^l_{ij} = x_i$ (since
$\sum_{j \in Q} q^l_j = 1$) explains constraints 1, 2, 5 and 6 of (8).
Constraint 3 of (8) is satisfied because $\sum_{i \in X} z^l_{ij} = k q^l_j$.
Let us now consider q^l, z^l, a^l feasible for (8). We will show that
q^l, a^l and $x_i = \sum_{j \in Q} z^1_{ij}$ are feasible for (7) with
the same objective value. In fact, all constraints of (7) are readily
satisfied by construction. To see that the objectives match, notice
that for each l exactly one q^l_j must equal 1 and the rest equal 0.
Let us say that q^l_{j_l} = 1; then the third constraint in (8) implies
that $\sum_{i \in X} z^l_{i j_l} = k$, which means that z^l_{ij} = 0 for
all i ∈ X and all j ≠ j_l. In particular this implies that
\[
x_i = \sum_{j \in Q} z^1_{ij} = z^1_{i j_1} = z^l_{i j_l},
\]
the last equality following from constraint 6 of (8). Therefore
$x_i q^l_j = z^l_{i j_l} q^l_j = z^l_{ij}$; this last equality holds because
both sides are 0 when j ≠ j_l. Effectively, constraint 6 ensures that
all the adversaries are calculating their best responses against a
particular fixed policy of the agent. This shows that the
transformation preserves the objective function value, completing the proof.
We can therefore solve this equivalent linear integer program
with efficient integer programming packages which can handle
problems with thousands of integer variables. We implemented the
decomposed MILP and the results are shown in the following section.
6. EXPERIMENTAL RESULTS
The patrolling domain and the payoffs for the associated game
are detailed in Sections 2 and 3. We performed experiments for this
game in worlds of three and four houses with patrols consisting of
two houses. The description given in Section 2 is used to generate
a base case for both the security agent and robber payoff functions.
The payoff tables for additional robber types are constructed and
added to the game by adding a random distribution of varying size
to the payoffs in the base case. All games are normalized so that,
for each robber type, the minimum and maximum payoffs to the
security agent and robber are 0 and 1, respectively.
Using the data generated, we performed the experiments using
four methods for generating the security agent's strategy:
• uniform randomization
• ASAP
• the multiple linear programs method from [5] (to find the true
optimal strategy)
• the highest reward Bayes-Nash equilibrium, found using the
MIP-Nash algorithm [17]
The last three methods were applied using CPLEX 8.1. Because
the last two methods are designed for normal-form games rather
than Bayesian games, the games were first converted using the
Harsanyi transformation [8]. The uniform randomization method is
simply choosing a uniform random policy over all possible patrol
routes. We use this method as a simple baseline to measure the
performance of our heuristics. We anticipated that the uniform policy
would perform reasonably well since maximum-entropy policies
have been shown to be effective in multiagent security domains
[14]. The highest-reward Bayes-Nash equilibria were used in order
to demonstrate the higher reward gained by looking for an optimal
policy rather than an equilibria in Stackelberg games such as our
security domain.
Based on our experiments we present three sets of graphs to
demonstrate (1) the runtime of ASAP compared to other common
methods for finding a strategy, (2) the reward guaranteed by ASAP
compared to other methods, and (3) the effect of varying the
parameter k, the size of the multiset, on the performance of ASAP.
In the first two sets of graphs, ASAP is run using a multiset of
80 elements; in the third set this number is varied. The first set of
graphs, shown in Figure 1, shows the runtime graphs for three-house
(left column) and four-house (right column) domains. Each of the
three rows of graphs corresponds to a different randomly-generated
scenario. The x-axis shows the number of robber types the
security agent faces and the y-axis of the graph shows the runtime in
seconds. All experiments that were not concluded in 30 minutes
(1800 seconds) were cut off. The runtime for the uniform policy
is always negligible irrespective of the number of adversaries and
hence is not shown.
Figure 1: Runtimes for various algorithms on problems of 3 and 4 houses.

The ASAP algorithm clearly outperforms the optimal, multiple-LP
method as well as the MIP-Nash algorithm for finding the
highest-reward Bayes-Nash equilibrium with respect to runtime. For a
domain of three houses, the optimal method cannot reach a
solution for more than seven robber types, and for four houses it
cannot solve for more than six types within the cutoff time in any of
the three scenarios. MIP-Nash solves for even fewer robber types
within the cutoff time. On the other hand, ASAP runs much faster,
and is able to solve for at least 20 adversaries for the three-house
scenarios and for at least 12 adversaries in the four-house
scenarios within the cutoff time. The runtime of ASAP does not increase
strictly with the number of robber types for each scenario, but in
general, the addition of more types increases the runtime required.
The second set of graphs, Figure 2, shows the reward to the patrol
agent given by each method for three scenarios in the three-house
(left column) and four-house (right column) domains. This reward
is the utility received by the security agent in the patrolling game,
and not as a percentage of the optimal reward, since it was not
possible to obtain the optimal reward as the number of robber types
increased. The uniform policy consistently provides the lowest
reward in both domains; while the optimal method of course
produces the optimal reward. The ASAP method remains consistently
close to the optimal, even as the number of robber types increases.
The highest-reward Bayes-Nash equilibria, provided by the
MIP-Nash method, produced rewards higher than the uniform method,
but lower than ASAP. This difference clearly illustrates the gains in
the patrolling domain from committing to a strategy as the leader
in a Stackelberg game, rather than playing a standard Bayes-Nash
strategy.
Figure 2: Reward for various algorithms on problems of 3 and 4 houses.

The third set of graphs, shown in Figure 3, shows the effect of the
multiset size on runtime in seconds (left column) and reward (right
column), again expressed as the reward received by the security
agent in the patrolling game, and not a percentage of the optimal
reward. Results here are for the three-house domain. The trend is
that as the multiset size is increased, the runtime and reward level
both increase. Not surprisingly, the reward increases monotonically
as the multiset size increases, but what is interesting is that there is
relatively little benefit to using a large multiset in this domain. In
all cases, the reward given by a multiset of 10 elements was within
at least 96% of the reward given by an 80-element multiset. The
runtime does not always increase strictly with the multiset size;
indeed in one example (scenario 2 with 20 robber types), using a
multiset of 10 elements took 1228 seconds, while using 80 elements
only took 617 seconds. In general, runtime should increase since a
larger multiset means a larger domain for the variables in the MILP,
and thus a larger search space. However, an increase in the number
of variables can sometimes allow for a policy to be constructed
more quickly due to more flexibility in the problem.

Figure 3: Reward for ASAP using multisets of 10, 30, and 80 elements.
7. SUMMARY AND RELATED WORK
This paper focuses on security for agents patrolling in hostile
environments. In these environments, intentional threats are caused
by adversaries about whom the security patrolling agents have
incomplete information. Specifically, we deal with situations where
the adversaries' actions and payoffs are known but the exact
adversary type is unknown to the security agent. Agents acting in the
real world quite frequently have such incomplete information about
other agents. Bayesian games have been a popular choice to model
such incomplete information games [3]. The Gala toolkit is one
method for defining such games [9] without requiring the game to
be represented in normal form via the Harsanyi transformation [8];
Gala's guarantees are focused on fully competitive games. Much
work has been done on finding optimal Bayes-Nash equilibria for
subclasses of Bayesian games, finding single Bayes-Nash
equilibria for general Bayesian games [10] or approximate Bayes-Nash
equilibria [18]. Less attention has been paid to finding the optimal
strategy to commit to in a Bayesian game (the Stackelberg scenario
[15]). However, the complexity of this problem was shown to be
NP-hard in the general case [5], which also provides algorithms for
this problem in the non-Bayesian case.
Therefore, we present a heuristic called ASAP, with three key
advantages towards addressing this problem. First, ASAP searches
for the highest reward strategy, rather than a Bayes-Nash
equilibrium, allowing it to find feasible strategies that exploit the
natural first-mover advantage of the game. Second, it provides
strategies which are simple to understand, represent, and implement.
Third, it operates directly on the compact, Bayesian game
representation, without requiring conversion to normal form. We provide
an efficient Mixed Integer Linear Program (MILP) implementation
for ASAP, along with experimental results illustrating significant
speedups and higher rewards over other approaches.
Our k-uniform strategies are similar to the k-uniform strategies
of [12]. While that work provides epsilon error-bounds based on
the k-uniform strategies, their solution concept is still that of a
Nash equilibrium, and they do not provide efficient algorithms for
obtaining such k-uniform strategies. This contrasts with ASAP,
where our emphasis is on a highly efficient heuristic approach that
is not focused on equilibrium solutions.
Finally the patrolling problem which motivated our work has
recently received growing attention from the multiagent community
due to its wide range of applications [4, 13]. However most of this
work is focused on either limiting energy consumption involved in
patrolling [7] or optimizing on criteria like the length of the path
traveled [4, 13], without reasoning about any explicit model of an
adversary [14].
Acknowledgments : This research is supported by the United States
Department of Homeland Security through Center for Risk and Economic
Analysis of Terrorism Events (CREATE). It is also supported by the
Defense Advanced Research Projects Agency (DARPA), through the
Department of the Interior, NBC, Acquisition Services Division, under Contract
No. NBCHD030010. Sarit Kraus is also affiliated with UMIACS.
8. REFERENCES
[1] R. W. Beard and T. McLain. Multiple UAV cooperative
search under collision avoidance and limited range
communication constraints. In IEEE CDC, 2003.
[2] D. Bertsimas and J. Tsitsiklis. Introduction to Linear
Optimization. Athena Scientific, 1997.
[3] J. Brynielsson and S. Arnborg. Bayesian games for threat
prediction and situation analysis. In FUSION, 2004.
[4] Y. Chevaleyre. Theoretical analysis of multi-agent patrolling
problem. In AAMAS, 2004.
[5] V. Conitzer and T. Sandholm. Choosing the best strategy to
commit to. In ACM Conference on Electronic Commerce,
2006.
[6] D. Fudenberg and J. Tirole. Game Theory. MIT Press, 1991.
[7] C. Gui and P. Mohapatra. Virtual patrol: A new power
conservation design for surveillance using sensor networks.
In IPSN, 2005.
[8] J. C. Harsanyi and R. Selten. A generalized Nash solution for
two-person bargaining games with incomplete information.
Management Science, 18(5):80-106, 1972.
[9] D. Koller and A. Pfeffer. Generating and solving imperfect
information games. In IJCAI, pages 1185-1193, 1995.
[10] D. Koller and A. Pfeffer. Representations and solutions for
game-theoretic problems. Artificial Intelligence,
94(1):167-215, 1997.
[11] C. Lemke and J. Howson. Equilibrium points of bimatrix
games. Journal of the Society for Industrial and Applied
Mathematics, 12:413-423, 1964.
[12] R. J. Lipton, E. Markakis, and A. Mehta. Playing large
games using simple strategies. In ACM Conference on
Electronic Commerce, 2003.
[13] A. Machado, G. Ramalho, J. D. Zucker, and A. Drougoul.
Multi-agent patrolling: an empirical analysis on alternative
architectures. In MABS, 2002.
[14] P. Paruchuri, M. Tambe, F. Ordonez, and S. Kraus. Security
in multiagent systems by policy randomization. In AAMAS,
2006.
[15] T. Roughgarden. Stackelberg scheduling strategies. In ACM
Symposium on TOC, 2001.
[16] S. Ruan, C. Meirina, F. Yu, K. R. Pattipati, and R. L. Popp.
Patrolling in a stochastic environment. In 10th Intl.
Command and Control Research Symp., 2005.
[17] T. Sandholm, A. Gilpin, and V. Conitzer. Mixed-integer
programming methods for finding nash equilibria. In AAAI,
2005.
[18] S. Singh, V. Soni, and M. Wellman. Computing approximate
Bayes-Nash equilibria with tree-games of incomplete
information. In ACM Conference on Electronic Commerce,
2004.
| heuristic approach;game theory;security of agent system;decomposition for multiple adversary;bayesian game;agent system security;patrolling domain;bayesian and stackelberg game;bayes-nash equilibrium;adversarial multiagent domain;mixed-integer linear program;np-hard;agent security via approximate policy |
train_I-59 | An Agent-Based Approach for Privacy-Preserving Recommender Systems | Recommender Systems are used in various domains to generate personalized information based on personal user data. The ability to preserve the privacy of all participants is an essential requirement of the underlying Information Filtering architectures, because the deployed Recommender Systems have to be accepted by privacy-aware users as well as information and service providers. Existing approaches neglect to address privacy in this multilateral way. We have developed an approach for privacy-preserving Recommender Systems based on Multi-Agent System technology which enables applications to generate recommendations via various filtering techniques while preserving the privacy of all participants. We describe the main modules of our solution as well as an application we have implemented based on this approach. | 1. INTRODUCTION
Information Filtering (IF) systems aim at countering
information overload by extracting information that is
relevant for a given user out of a large body of information
available via an information provider. In contrast to
Information Retrieval (IR) systems, where relevant information
is extracted based on search queries, IF architectures
generate personalized information based on user profiles
containing, for each given user, personal data, preferences, and
rated items. The provided body of information is usually
structured and collected in provider profiles. Filtering
techniques operate on these profiles in order to generate
recommendations of items that are probably relevant for a given
user, or in order to determine users with similar interests,
or both. Depending on the respective goal, the resulting
systems constitute Recommender Systems [5], Matchmaker
Systems [10], or a combination thereof.
The aspect of privacy is an essential issue in all IF systems:
Generating personalized information obviously requires the
use of personal data. According to surveys indicating major
privacy concerns of users in the context of Recommender
Systems and e-commerce in general [23], users can be
expected to be less reluctant to provide personal information
if they trust the system to be privacy-preserving with regard
to personal data. Similar considerations also apply to the
information provider, who may want to control the
dissemination of the provided information, and to the provider of the
filtering techniques, who may not want the details of the
utilized filtering algorithms to become common knowledge. A
privacy-preserving IF system should therefore balance these
requirements and protect the privacy of all parties involved
in a multilateral way, while addressing general requirements
regarding performance, security and quality of the
recommendations as well. As described in the following section,
there are several approaches with similar goals, but none of
these provide a generic approach in which the privacy of all
parties is preserved.
We have developed an agent-based approach for
privacypreserving IF which has been utilized for realizing a
combined Recommender/Matchmaker System as part of an
application supporting users in planning entertainment-related
activities. In this paper, we focus on the Recommender
System functionality. Our approach is based on Multi-Agent
System (MAS) technology because fundamental features of
agents such as autonomy, adaptability and the ability to
communicate are essential requirements of our approach. In
other words, the realized approach does not merely
constitute a solution for privacy-preserving IF within a MAS
context, but rather utilizes a MAS architecture in order to
realize a solution for privacy-preserving IF, which could not
be realized easily otherwise.
The paper is structured as follows: Section 2 describes
related work. Section 3 describes the general ideas of our
approach. In Section 4, we describe essential details of the
modules of our approach and their implementation. In
Section 5, we evaluate the approach, mainly via the realized
application. Section 6 concludes the paper with an outlook
and outlines further work.
2. RELATED WORK
There is a large amount of work in related areas, such as
Private Information Retrieval [7], Privacy-Preserving Data
Mining [2], and other privacy-preserving protocols [4, 16],
most of which is based on Secure Multi-Party Computation
[27]. We have ruled out Secure Multi-Party Computation
approaches mainly because of their complexity, and because
the algorithm that is computed securely is not considered to
be private in these approaches.
Various enforcement mechanisms have been suggested
that are applicable in the context of privacy-preserving
Information Filtering, such as enterprise privacy policies [17]
or hippocratic databases [1], both of which annotate user
data with additional meta-information specifying how the
data is to be handled on the provider side. These approaches
ultimately assume that the provider actually intends to
protect the privacy of the user data, and offer support for this
task, but they are not intended to prevent the provider
from acting in a malicious manner. Trusted computing, as
specified by the Trusted Computing Group, aims at
realizing trusted systems by increasing the security of open
systems to a level comparable with the level of security that
is achievable in closed systems. It is based on a
combination of tamper-proof hardware and various software
components. Some example applications, including peer-to-peer
networks, distributed firewalls, and distributed computing
in general, are listed in [13].
There are some approaches for privacy-preserving
Recommender Systems based on distributed collaborative filtering,
in which recommendations are generated via a public model
aggregating the distributed user profiles without containing
explicit information about user profiles themselves. This
is achieved via Secure Multi-Party Computation [6], or via
random perturbation of the user data [20]. In [19],
various approaches are integrated within a single architecture.
In [10], an agent-based approach is described in which user
agents representing similar users are discovered via a
transitive traversal of user agents. Privacy is preserved through
pseudonymous interaction between the agents and through
adding obfuscating data to personal information. More
recent related approaches are described in [18].
In [3], an agent-based architecture for privacy-preserving
demographic filtering is described which may be
generalized in order to support other kinds of filtering techniques.
While in some aspects similar to our approach, this
architecture addresses at least two aspects inadequately, namely
the protection of the filter against manipulation attempts,
and the prevention of collusions between the filter and the
provider.
3. PRIVACY-PRESERVING
INFORMATION FILTERING
We identify three main abstract entities participating in
an information filtering process within a distributed system:
A user entity, a provider entity, and a filter entity. Whereas
in some applications the provider and filter entities
explicitly trust each other, because they are deployed by the same
party, our solution is applicable more generically because it
does not require any trust between the main abstract
entities. In this paper, we focus on aspects related to the
information filtering process itself, and omit all aspects
related to information collection and processing, i.e. the stages
in which profiles are generated and maintained, mainly
because these stages are less critical with regard to privacy, as
they involve fewer different entities.
3.1 Requirements
Our solution aims at meeting the following requirements
with regard to privacy:
• User Privacy: No linkable information about user
profiles should be acquired permanently by any other
entity or external party, including other user entities.
Single user profile items, however, may be acquired
permanently if they are unlinkable, i.e. if they
cannot be attributed to a specific user or linked to other
user profile items. Temporary acquisition of private
information is permitted as well. Sets of
recommendations may be acquired permanently by the provider,
but they should not be linkable to a specific user.
These concessions simplify the resulting protocol and
allow the provider to obtain recommendations and
single unlinkable user profile items, and thus to determine
frequently requested information and optimize the
offered information accordingly.
• Provider Privacy: No information about provider
profiles, with the exception of the recommendations,
should be acquired permanently by other entities or
external parties. Again, temporary acquisition of
private information is permitted. Additionally, the
propagation of provider information is entirely under the
control of the provider. Thus, the provider is enabled
to prevent misuse such as the automatic large-scale
extraction of information.
• Filter Privacy: Details of the algorithms applied by
the filtering techniques should not be acquired
permanently by any other entity or external party. General
information about the algorithm may be provided by
the filter entity in order to help other entities to reach
a decision on whether to apply the respective filtering
technique.
In addition, general requirements regarding the quality
of the recommendations as well as security aspects,
performance and broadness of the resulting system have to be
addressed as well. While minor trade-offs may be acceptable,
the resulting system should reach a level similar to regular
Recommender Systems with regard to these requirements.
3.2 Outline of the Solution
The basic idea for realizing a protocol fulfilling these
privacy-related requirements in Recommender Systems is
implied by allowing the temporary acquisition of private
information (see [8] for the original approach): User and
provider entity both propagate the respective profile data to
the filter entity. The filter entity provides the
recommendations, and subsequently deletes all private information, thus
fulfilling the requirement regarding permanent acquisition
of private information.
The entities whose private information is propagated have
to be certain that the respective information is actually
acquired temporarily only. Trust in this regard may be
established in two main ways:
• Trusted Software: The respective entity itself is trusted
to remove the respective information as specified.
• Trusted Environment: The respective entity operates
in an environment that is trusted to control the
communication and life cycle of the entity to an extent
that the removal of the respective information may
be achieved regardless of the attempted actions of the
entity itself. Additionally, the environment itself is
trusted not to act in a malicious manner (e.g. it is
trusted not to acquire and propagate the respective
information itself).
In both cases, trust may be established in various ways.
Reputation-based mechanisms, additional trusted third
parties certifying entities or environments, or trusted
computing mechanisms may be used. Our approach is based on a
trusted environment realized via trusted computing
mechanisms, because we see this solution as the most generic
and realistic approach. This decision is discussed briefly in
Section 5.
We are now able to specify the abstract information
filtering protocol as shown in Figure 1: The filter entity deploys a
Temporary Filter Entity (TFE) operating in a trusted
environment. The user entity deploys an additional relay entity
operating in the same environment. Through mechanisms
provided by this environment, the relay entity is able to
control the communication of the TFE, and the provider entity
is able to control the communication of both relay entity
and the TFE. Thus, it is possible to ensure that the
controlled entities are only able to propagate recommendations,
but no other private information. In the first stage (steps
1.1 to 1.3 of Figure 1), the relay entity establishes control of
the TFE, and thus prevents it from propagating user profile
information. User profile data is propagated without
participation of the provider entity from the user entity to the
TFE via the relay entity. In the second stage (steps 2.1 to
2.3 of Figure 1), the provider entity establishes control of
both relay and TFE, and thus prevents them from
propagating provider profile information. Provider profile data is
propagated from the provider entity to the TFE via the
relay entity. In the third stage (steps 3.1 to 3.5 of Figure 1),
the TFE returns the recommendations via the relay entity,
and the controlled entities are terminated. Taken together,
these steps ensure that all private information is acquired
temporarily only by the other main entities. The problems
of determining acceptable queries on the provider profile and
ensuring unlinkability of the recommendations are discussed
in the following section.
Our approach requires each entity in the distributed
architecture to have the following five main abilities:
• the ability to perform certain well-defined tasks (such as carrying
out a filtering process) with a high degree of autonomy, i.e. largely
independent of other entities (e.g. because the respective entity is
not able to communicate in an unrestricted manner),
• the ability to be deployable dynamically in a well-defined
environment,
• the ability to communicate with other entities,
• the ability to achieve protection against external manipulation
attempts, and
• the ability to control and restrict the communication of other
entities.
Figure 1: The abstract privacy-preserving
information filtering protocol. All communication across
the environments indicated by dashed lines is
prevented with the exception of communication with
the controlling entity.
MAS architectures are an ideal solution for realizing a
distributed system characterized by these features, because
they provide agents constituting entities that are actually
characterized by autonomy, mobility and the ability to
communicate [26], as well as agent platforms as environments
providing means to realize the security of agents. In this
context, the issue of malicious hosts, i.e. hosts attacking
agents, has to be addressed explicitly. Furthermore, existing
MAS architectures generally do not allow agents to control
the communication of other agents. It is possible, however,
to expand a MAS architecture and to provide designated
agents with this ability. For these reasons, our solution is
based on a FIPA[11]-compliant MAS architecture. The
entities introduced above are mapped directly to agents, and
the trusted environment in which they exist is realized in
the form of agent platforms.
In addition to the MAS architecture itself, which is
assumed as given, our solution consists of the following five
main modules:
• The Controller Module described in Section 4.1
provides functionality for controlling the communication
capabilities of agents.
• The Transparent Persistence Module facilitates
the use of different data storage mechanisms, and
provides a uniform interface for accessing persistent
information, which may be utilized for monitoring critical
interactions involving potentially private information
e.g. as part of queries. Its description is outside the
scope of this paper.
• The Recommender Module, details of which are
described in Section 4.2, provides Recommender System
functionality.
• The Matchmaker Module provides Matchmaker
System functionality. It additionally utilizes social
aspects of MAS technology. Its description is outside the
scope of this paper.
• Finally, a separate module described in Section 4.4
provides Exemplary Filtering Techniques in order
to show that various restrictions imposed on filtering
techniques by our approach may actually be fulfilled.
The trusted environment introduced above encompasses
the MAS architecture itself and the Controller Module,
which have to be trusted to act in a non-malicious manner
in order to rule out the possibility of malicious hosts.
4. MAIN MODULES AND
IMPLEMENTATION
In this section, we describe the main modules of our
approach, and outline the implementation. While we have
chosen a specific architecture for the implementation, the
specification of the module is applicable to any FIPA-compliant
MAS architecture. A module basically encompasses
ontologies, functionality provided by agents via agent services, and
internal functionality. Throughout this paper, {m}KX
denotes a message m encrypted via a non-specified symmetric
encryption scheme with a secret key KX used for
encryption and decryption which is initially known only to
participant X. A key KXY is a key shared by participants
X and Y . A cryptographic hash function is used at
various points of the protocol, i.e. a function returning a hash
value h(x) for given data x that is both preimage-resistant
and collision-resistant (see footnote 1). We denote a set of hash values for
a data set X = {x1, .., xn} as H(X) = {h(x1), .., h(xn)},
whereas h(X) denotes a single hash value of a data set.
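The following small Python sketch (ours) mirrors this notation; footnote 1 below names AES and SHA-1, so we use SHA-1 from hashlib and Fernet, an AES-based symmetric scheme from the cryptography package, as a stand-in for the unspecified encryption mode.

```python
# Illustrative helpers for the notation h(x), H(X) and {m}_KX used in this section.
import hashlib
from cryptography.fernet import Fernet

def h(x: bytes) -> bytes:
    """Cryptographic hash h(x) of a single item (SHA-1, as in the implementation)."""
    return hashlib.sha1(x).digest()

def H(X: list) -> list:
    """Set of hash values H(X) = {h(x1), .., h(xn)}."""
    return [h(x) for x in X]

def new_key() -> bytes:
    """A fresh secret key K_X, initially known only to its creator X."""
    return Fernet.generate_key()

def enc(m: bytes, key: bytes) -> bytes:
    """{m}_K: message m encrypted under symmetric key K."""
    return Fernet(key).encrypt(m)

def dec(c: bytes, key: bytes) -> bytes:
    return Fernet(key).decrypt(c)
```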
4.1 Controller Module
As noted above, the ability to control the communication
of agents is generally not a feature of existing MAS
architectures (see footnote 2), but is at the same time a central
requirement of our
approach for privacy-preserving Information Filtering. The
required functionality cannot be realized based on regular
agent services or components, because an agent on a
platform is usually not allowed to interfere with the actions of
other agents in any way. Therefore, we add additional
infrastructure providing the required functionality to the MAS
architecture itself, resulting in an agent environment with
extended functionality and responsibilities.
Controlling the communication capabilities of an agent is
realized by restricting via rules, in a manner similar to a
firewall, but with the consent of the respective agent, its
incoming and outgoing communication to specific platforms
or agents on external platforms as well as other possible
communication channels, such as the file system. Consent
is required because otherwise the overall security would be
compromised, as attackers could arbitrarily block various
communication channels. Our approach does not require
controlling the communication between agents on the same
platform, and therefore this aspect is not addressed.
Consequently, all rules addressing communication capabilities
have to be enforced across entire platforms, because
otherwise a controlled agent could just use a non-controlled agent
on the same platform as a relay for communicating with
agents residing on external platforms. Various agent services
provide functionality for adding and revoking control of
platforms, including functionality required in complex scenarios
where controlled agents in turn control further platforms.

(Footnote 1) In the implementation, we have used the Advanced
Encryption Standard (AES) as the symmetric encryption scheme
and SHA-1 as the cryptographic hash function.
(Footnote 2) A recent survey on agent environments [24] concludes
that aspects related to agent environments are often neglected, and
does not indicate any existing work in this particular area.
The implementation of the actual control mechanism
depends on the actual MAS architecture. In our
implementation, we have utilized methods provided via the Java
Security Manager as part of the Java security model. Thus,
the supervisor agent is enabled to define custom security
policies, thereby granting or denying other agents access to
resources required for communication with other agents as
well as communication in general, such as files or sockets for
TCP/IP-based communication.
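The Java Security Manager mechanism itself is not reproduced here; the following Python sketch (ours, purely conceptual, not JIAC IV code) only illustrates the rule model described above: control is established with the consent of a platform, and every outgoing connection is then checked against all active rules.

```python
# Conceptual model of platform-wide communication control.
from dataclasses import dataclass, field

@dataclass
class ControlRule:
    controller: str                 # agent that imposed the rule
    allowed_peers: frozenset        # external platforms/agents still reachable

@dataclass
class ControlledPlatform:
    rules: list = field(default_factory=list)

    def establish_control(self, controller: str, allowed_peers: set) -> None:
        # Requires the platform's consent, as discussed above.
        self.rules.append(ControlRule(controller, frozenset(allowed_peers)))

    def revoke_control(self, controller: str) -> None:
        self.rules = [r for r in self.rules if r.controller != controller]

    def may_communicate(self, peer: str) -> bool:
        # Permitted only if every active rule allows the peer; the controlling
        # agent itself always stays reachable.
        return all(peer == r.controller or peer in r.allowed_peers
                   for r in self.rules)

# Example: in the second stage, the provider controls the TFE platform.
tfe_platform = ControlledPlatform()
tfe_platform.establish_control("provider", allowed_peers={"relay"})
assert tfe_platform.may_communicate("relay")
assert not tfe_platform.may_communicate("some-external-platform")
```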
4.2 Recommender Module
The Recommender Module is mainly responsible for
carrying out information filtering processes, according to the
protocol described in Table 1. The participating entities are
realized as agents, and the interactions as agent services. We
assume that mechanisms for secure agent communication are
available within the respective MAS architecture. Two
issues have to be addressed in this module: The relevant parts
of the provider profile have to be retrieved without
compromising the user's privacy, and the recommendations have to
be propagated in a privacy-preserving way.
Our solution is based on a threat model in which no main
abstract entity may safely assume any other abstract entity
to act in an honest manner: Each entity has to assume that
other entities may attempt to obtain private information,
either while following the specified protocol or even by
deviating from the protocol. According to [15], we classify the
former case as honest-but-curious behavior (as an example,
the TFE may propagate recommendations as specified, but
may additionally attempt to propagate private information),
and the latter case as malicious behavior (as an example, the
filter may attempt to propagate private information instead
of the recommendations).
4.2.1 Retrieving the Provider Profile
As outlined above, the relay agent relays data between
the TFE agent and the provider agent. These agents are
not allowed to communicate directly, because the TFE agent
cannot be assumed to act in an honest manner. Unlike the
user profile, which is usually rather small, the provider
profile is often too voluminous to be propagated as a whole
efficiently. A typical example is a user profile containing
ratings of about 100 movies, while the provider profile
contains some 10,000 movies. Retrieving only the relevant part
of the provider profile, however, is problematic because it
has to be done without leaking sensitive information about
the user profile. Therefore, the relay agent has to analyze all
queries on the provider profile, and reject potentially critical
queries, such as queries containing a set of user profile items.
Because the propagation of single unlinkable user profile
items is assumed to be uncritical, we extend the
information filtering protocol as follows: The relevant parts of the
provider profile are retrieved based on single anonymous
interactions between the relay and the provider. If the MAS
architecture used for the implementation does not provide
an infrastructure for anonymous agent communication, this
feature has to be provided explicitly: The most
straightforward way is to use additional relay agents deployed via
the main relay agent and used once for a single anonymous
interaction.

Table 1: The basic information filtering protocol with participants
U = user agent, P = provider agent, F = TFE agent, R = relay agent,
based on the abstract protocol shown in Figure 1. UP denotes the
user profile with UP = {up1, .., upn}, PP denotes the provider
profile, and REC denotes the set of recommendations with
REC = {rec1, .., recm}.

Phase.Step   Sender → Receiver   Message or Action
1.1          R → F               establish control
1.2          U → R               UP
1.3          R → F               UP
2.1          P → R, F            establish control
2.2          P → R               PP
2.3          R → F               PP
3.1          F → R               REC
3.2          R → P               REC
3.3          P → U               REC
3.4          R → F               terminate F
3.5          P → R               terminate R

Obviously, unlinkability is only achieved if
multiple instances of the protocol are executed simultaneously
between the provider and different users. Because agents
on controlled platforms are unable to communicate
anonymously with the respective controlling agent, control has to
be established after the anonymous interactions have been
completed. To prevent the uncontrolled relay agents from
propagating provider profile data, the respective data is
encrypted and the key is provided only after control has been
established. Therefore, the second phase of the protocol
described in Table 1 is replaced as described in Table 2.
Additionally, the relay agent may allow other interactions
as long as no user profile items are used within the queries.
In this case, the relay agent has to ensure that the provider
does not obtain any information exceeding the information
deducible via the recommendations themselves. The
clusterbased filtering technique described in Section 4.3 is an
example for a filtering technique operating in this manner.
4.2.2 Recommendation Propagation
The propagation of the recommendations is even more
problematic mainly because more participants are involved:
Recommendations have to be propagated from the TFE
agent via the relay and provider agent to the user agent. No
participant should be able to alter the recommendations or
use them for the propagation of private information.
Therefore, every participant in this chain has to obtain and verify
the recommendations in unencrypted form prior to the next
agent in the chain, i.e. the relay agent has to verify the
recommendations before the provider obtains them, and so on.
Therefore, the final phase of the protocol described in Table
1 is replaced as described in Table 3. It basically consists of
two parts (Step 3.1 to 3.4, and Step 3.5 to Step 3.8), each of
which provides a solution for a problem related to the prisoners'
problem [22], in which two participants (the prisoners)
intend to exchange a message via a third, untrusted
participant (the warden) who may read the message but must not
be able to alter it in an undetectable manner. There are
various solutions for protocols addressing the prisoners' problem.
The more obvious of these, however, such as protocols
based on the use of digital signatures, introduce additional
threats, e.g. via the possibility of additional subliminal
channels [22]. In order to minimize the risk of possible threats,
we have decided to use a protocol that only requires a
symmetric encryption scheme.

Table 2: The updated second stage of the information filtering
protocol, with definitions as above. PPq is the part of the provider
profile PP returned as the result of the query q.

Phase.Step                Sender → Receiver     Message or Action
repeat 2.1 to 2.3 ∀ up ∈ UP:
2.1                       F → R                 q(up) (a query based on up)
2.2                       R → P (anonymously)   q(up) (R remains anonymous)
2.3                       P → R (anonymously)   {PPq(up)}KP
2.4                       P → R, F              establish control
2.5                       P → R                 KP
2.6                       R → F                 PPq(UP)

Table 3: The updated final stage of the information filtering
protocol, with definitions as above.

Phase.Step                Sender → Receiver     Message or Action
3.1                       F → R                 REC, {H(REC)}KPF
3.2                       R → P                 h(KR), {{H(REC)}KPF}KR
3.3                       P → R                 KPF
3.4                       R → P                 KR
repeat 3.5 ∀ rec ∈ REC:
3.5                       R → P                 {rec}KURrec
repeat 3.6 ∀ rec ∈ REC:
3.6                       P → U                 h(KPrec), {{rec}KURrec}KPrec
repeat 3.7 to 3.8 ∀ rec ∈ REC:
3.7                       U → P                 KURrec
3.8                       P → U                 KPrec
3.9                       U → F                 terminate F
3.10                      P → U                 terminate U
The first part of the final phase is carried out as follows:
In order to prevent the relay from altering
recommendations, they are propagated by the filter together with an
encrypted hash in Step 3.1. Thus, the relay is able to verify
the recommendations before they are propagated further.
The relay, however, may suspect the data propagated as
the encrypted hash to contain private information instead
of the actual hash value. Therefore, the encrypted hash is
encrypted again and propagated together with a hash on
the respective key in Step 3.2. In Step 3.3, the key KPF
is revealed to the relay, allowing the relay to validate the
encrypted hash. In Step 3.4, the key KR is revealed to the
provider, allowing the provider to decrypt the data received
in Step 3.2 and thus to obtain H(REC). Propagating the
hash of the key KR prevents the relay from altering the
recommendations to some REC' after Step 3.3, which would
otherwise be undetectable because the relay could choose a
key KR' so that {{H(REC)}KPF}KR = {{H(REC')}KPF}KR'.
The encryption scheme used for encrypting the hash has to
be secure against known-plaintext attacks, because otherwise
the relay may be able to obtain KPF after Step 3.1 and
subsequently alter the recommendations in an undetectable
way. Additionally, the encryption scheme must not be
commutative for similar reasons.
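A self-contained sketch (ours) of Steps 3.1 to 3.4 illustrates the commit-then-reveal structure; Fernet again stands in for the AES-based scheme, and SHA-1 for h(·).

```python
# Steps 3.1-3.4: F commits to H(REC) under K_PF, R re-encrypts that commitment
# under K_R and binds itself to K_R via h(K_R) before either key is revealed.
import hashlib
from cryptography.fernet import Fernet

def h(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

REC = [b"recommendation-1", b"recommendation-2"]
H_REC = b"".join(h(r) for r in REC)

# Step 3.1: F -> R : REC, {H(REC)}_KPF
K_PF = Fernet.generate_key()
enc_hash = Fernet(K_PF).encrypt(H_REC)

# Step 3.2: R -> P : h(K_R), {{H(REC)}_KPF}_KR
K_R = Fernet.generate_key()
key_commitment, double_enc = h(K_R), Fernet(K_R).encrypt(enc_hash)

# Step 3.3: P -> R : K_PF; R can now validate the encrypted hash against REC.
assert Fernet(K_PF).decrypt(enc_hash) == H_REC

# Step 3.4: R -> P : K_R; P checks it against the commitment from Step 3.2
# (a substituted key K_R' would fail this check) and recovers H(REC).
assert h(K_R) == key_commitment
assert Fernet(K_PF).decrypt(Fernet(K_R).decrypt(double_enc)) == H_REC
```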
The remaining protocol steps are interactions between
relay, provider and user agent. The interactions of Step 3.5
to Step 3.8 ensure, via the same mechanisms used in Step
3.1 to 3.4, that the provider is able to analyze the
recommendations before the user obtains them, but at the same
time prevent the provider from altering the
recommendations. Additionally, the recommendations are not processed
at once, but rather one at a time, to prevent the provider
from withholding all recommendations.
Upon completion of the protocol, both user and provider
have obtained a set of recommendations. If the user wants
these recommendations to be unlinkable to himself, the user
agent has to carry out the entire protocol anonymously.
Again, the most straightforward way to achieve this is to use
additional relay agents deployed via the user agent which are
used once for a single information filtering process.
4.3 Exemplary Filtering Techniques
The filtering technique applied by the TFE agent cannot
be chosen freely: All collaboration-based approaches, such
as collaborative filtering techniques based on the profiles
of a set of users, are not applicable because the provider
profile does not contain user profile data (unless this data
has been collected externally). Instead, these approaches
are realized via the Matchmaker Module, which is outside
the scope of this paper. Learning-based approaches are not
applicable because the TFE agent cannot propagate any
acquired data to the filter, which effectively means that the
filter is incapable of learning. Filtering techniques that are
actually applicable are feature-based approaches, such as
content-based filtering (in which profile items are compared
via their attributes) and knowledge-based filtering (in which
domain-specific knowledge is applied in order to match user
and provider profile items). An overview of different classes
and hybrid combinations of filtering techniques is given in
[5]. We have implemented two generic content-based
filtering approaches that are applicable within our approach:
A direct content-based filtering technique based on the
class of item-based top-N recommendation algorithms [9] is
used in cases where the user profile contains items that are
also contained in the provider profile. In a preprocessing
stage, i.e. prior to the actual information filtering processes,
a model is generated containing the k most similar items for
each provider profile item. While computationally rather
complex, this approach is feasible because it has to be done
only once, and it is carried out in a privacy-preserving way
via interactions between the provider agent and a TFE
agent. The resulting model is stored by the provider agent
and can be seen as an additional part of the provider
profile. In the actual information filtering process, the k most
similar items are retrieved for each single user profile item
via queries on the model (as described in Section 4.2.1, this
is possible in a privacy-preserving way via anonymous
communication). Recommendations are generated by selecting
the n most frequent items from the result sets that are not
already contained within the user profile.
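A minimal sketch (ours, not the deployed implementation) of this item-based top-N scheme: the model stores, for every provider item, its k most similar items, and at filtering time the n most frequent neighbours of the user's profile items that the user does not already know are returned.

```python
# Item-based top-N filtering: precompute the k most similar items per provider
# item (here via cosine similarity), then recommend the n most frequent
# neighbours of the user's profile items.
from collections import Counter
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def build_model(item_vectors, k):
    """item_vectors: {item_id: feature vector}; returns {item_id: k most similar items}."""
    model = {}
    for i, vi in item_vectors.items():
        sims = sorted(((cosine(vi, vj), j) for j, vj in item_vectors.items() if j != i),
                      reverse=True)
        model[i] = [j for _, j in sims[:k]]
    return model

def recommend(model, user_items, n):
    counts = Counter()
    for item in user_items:                 # one anonymous query per profile item
        counts.update(model.get(item, []))
    return [item for item, _ in counts.most_common() if item not in user_items][:n]
```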
As an alternative approach applicable when the user
profile contains information in addition to provider profile
items, we provide a cluster-based approach in which provider
profile items are clustered in a preprocessing stage via an
agglomerative hierarchical clustering approach. Each cluster is
represented by a centroid item, and the cluster elements are
either sub-clusters or, on the lowest level, the items
themselves. In the information filtering stage, the relevant items
are retrieved by descending through the cluster hierarchy in
the following manner: The cluster items of the highest level
are retrieved independent of the user profile. By
comparing these items with the user profile data, the most relevant
sub-clusters are determined and retrieved in a subsequent
iteration. This process is repeated until the lowest level
is reached, which contains the items themselves as
recommendations. Throughout the process, user profile items are
never propagated to the provider as such. The
information deducible about the user profile does not exceed the
information deducible via the recommendations themselves
(because essentially only a chain of cluster centroids leading
to the recommendations is retrieved), and therefore it is not
regarded as privacy-critical.
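The hierarchical descent can be summarized in a few lines; the sketch below is ours, and the number of sub-clusters followed per level (beam) is an assumption, since the text does not fix it.

```python
# Descend a precomputed cluster hierarchy: at each level keep only the
# sub-clusters whose centroid items best match the user profile, until the
# lowest level (concrete items) is reached.  Only centroids are ever disclosed.
def descend(cluster, user_profile, score, beam=2):
    """cluster: {'centroid': ..., 'children': [...], 'items': [...]};
    score(user_profile, centroid) -> similarity."""
    if cluster.get("items"):                       # lowest level: the items themselves
        return list(cluster["items"])
    ranked = sorted(cluster["children"],
                    key=lambda c: score(user_profile, c["centroid"]),
                    reverse=True)
    recommendations = []
    for child in ranked[:beam]:
        recommendations.extend(descend(child, user_profile, score, beam))
    return recommendations
```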
4.4 Implementation
We have implemented the approach for privacy-preserving
IF based on JIAC IV [12], a FIPA-compliant MAS
architecture. JIAC IV integrates fundamental aspects of
autonomous agents regarding pro-activeness, intelligence,
communication capabilities and mobility by providing a scalable
component-based architecture. Additionally, JIAC IV offers
components realizing management and security
functionality, and provides a methodology for Agent-Oriented
Software Engineering. JIAC IV stands out among MAS
architectures as the only security-certified architecture, since it
has been certified by the German Federal Office for
Information Security according to the EAL3 of the Common Criteria
for Information Technology Security standard [14]. JIAC IV
offers several security features in the areas of access control
for agent services, secure communication between agents,
and low-level security based on Java security policies [21],
and thus provides all security-related functionality required
for our approach.
We have extended the JIAC IV architecture by adding the
mechanisms for communication control described in Section
4.1. Regarding the issue of malicious hosts, we currently
assume all providers of agent platforms to be trusted. We
are additionally developing a solution that is actually based
on a trusted computing infrastructure.
5. EVALUATION
For the evaluation of our approach, we have examined
whether and to which extent the requirements (mainly
regarding privacy, performance, and quality) are actually met.
Privacy aspects are directly addressed by the modules and
protocols described above and therefore not evaluated
further here. Performance is a critical issue, mainly because of
the overhead caused by creating additional agents and agent
platforms for controlling communication, and by the
additional interactions within the Recommender Module.
Overall, a single information filtering process takes about ten
times longer than a non-privacy-preserving information
filtering process leading to the same results, which is a
considerable overhead but still acceptable under certain conditions,
as described in the following section.
5.1 The Smart Event Assistant
As a proof of concept, and in order to evaluate
performance and quality under real-life conditions, we have
Figure 2: The Smart Event Assistant, a privacy-preserving
Recommender System supporting users in planning
entertainment-related activities.
applied our approach within the Smart Event Assistant, a
MAS-based Recommender System which integrates various
personalized services for entertainment planning in
different German cities, such as a restaurant finder and a movie
finder [25]. Additional services, such as a calendar, a
routing service, and news services, complement the information
services. An intelligent day planner integrates all
functionality by providing personalized recommendations for the
various information services, based on the user's preferences
and taking into account the location of the user as well as
the potential venues. All services are accessible via mobile
devices as well.3
Figure 2 shows a screenshot of the intelligent day planner's
result dialog. The Smart Event
Assistant is entirely realized as a MAS system providing, among
other functionality, various filter agents and different service
provider agents, which together with the user agents utilize
the functionality provided by our approach.
Recommendations are generated in two ways: A push
service delivers new recommendations to the user in regular
intervals (e.g. once per day) via email or SMS. Because the
user is not online during these interactions, they are less
critical with regard to performance and the protracted
duration of the information filtering process is acceptable in
this case. Recommendations generated for the intelligent
day planner, however, have to be delivered with very little
latency because the process is triggered by the user, who
expects to receive results promptly. In this scenario, the
overall performance is substantially improved by setting up
the relay agent and the TFE agent offline, i.e. prior to the
user's request, and by an efficient retrieval of the relevant
3 The Smart Event Assistant may be accessed online via
http://www.smarteventassistant.de.
Table 4: Complexity of typical privacy-preserving
(PP) vs. non-privacy-preserving (NPP) filtering
processes in the realized application. In the
non-privacy-preserving version, an agent retrieves the
profiles directly and propagates the result to a
provider agent.
scenario                                push              day planning
version                                 NPP      PP       NPP      PP
profile size (retrieved/total amount of items)
  user                                  25/25             25/25
  provider                              125/10,000        500/10,000
elapsed time in filtering process (in seconds)
  setup                                 n/a      2.2      n/a      offline
  database access                       0.2      0.5      0.4      0.4
  profile propagation                   n/a      0.8      n/a      0.3
  filtering algorithm                   0.2      0.2      0.2      0.2
  result propagation                    0.1      1.1      0.1      1.1
  complete time                         0.5      4.8      0.7      2.0
part of the provider profile: Because the user is only
interested in items, such as movies, available within a certain
time period and related to specific locations, such as
screenings at cinemas in a specific city, the relevant part of the
provider profile is usually small enough to be propagated
entirely. Because these additional parameters are not seen
as privacy-critical (as they are not based on the user
profile, but rather constitute a short-term information need),
the relevant part of the provider profile may be propagated
as a whole, with no need for complex interactions. Taken
together, these improvements result in a filtering process
that takes about three times as long as the respective
non-privacy-preserving filtering process, which we regard as an
acceptable trade-off for the increased level of privacy. Table
4 shows the results of the performance evaluation in more
detail. In these scenarios, a direct content-based filtering
technique similar to the one described in Section 4.3 is
applied. Because equivalent filtering techniques have been
applied successfully in regular Recommender Systems [9], there
are no negative consequences with regard to the quality of
the recommendations.
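As a quick sanity check of the reported figures, summing the per-phase timings in Table 4 reproduces the listed totals and the overhead factors quoted in the text. The short Python sketch below only re-adds the numbers taken directly from the table; the offline setup of the day-planning scenario is counted as zero online time.

# Totals and PP/NPP overhead factors derived from Table 4 (seconds).
phases = {
    "push":         {"NPP": [0.2, 0.2, 0.1],
                     "PP":  [2.2, 0.5, 0.8, 0.2, 1.1]},
    "day planning": {"NPP": [0.4, 0.2, 0.1],
                     "PP":  [0.4, 0.3, 0.2, 1.1]},
}
for scenario, versions in phases.items():
    npp, pp = sum(versions["NPP"]), sum(versions["PP"])
    print(f"{scenario}: NPP {npp:.1f}s, PP {pp:.1f}s, overhead x{pp / npp:.1f}")
# push: NPP 0.5s, PP 4.8s, overhead x9.6  -> "about ten times longer"
# day planning: NPP 0.7s, PP 2.0s, overhead x2.9 -> "about three times as long"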
5.2 Alternative Approaches
As described in Section 3.2, our solution is based on
trusted computing. There are more straightforward ways
to realize privacy-preserving IF, e.g. by utilizing a
centralized architecture in which the privacy-preserving
provider-side functionality is realized as trusted software based on
trusted computing. However, we consider these approaches
to be unsuitable because they are far less generic: Whenever
some part of the respective software is patched, upgraded or
replaced, the entire system has to be analyzed again in order
to determine its trustworthiness, a process that is
problematic in itself due to its complexity. In our solution, only a
comparatively small part of the overall system is based on
trusted computing. Because agent platforms can be utilized
for a large variety of tasks, and because we see trusted
computing as the most promising approach to realize secure and
trusted agent environments, it seems reasonable to assume
that these respective mechanisms will be generally available
in the future, independent of specific solutions such as the
one described here.
6. CONCLUSION & FURTHER WORK
We have developed an agent-based approach for
privacy-preserving Recommender Systems. By utilizing
fundamental features of agents such as autonomy, adaptability and
the ability to communicate, by extending the capabilities
of agent platform managers regarding control of agent
communication, by providing a privacy-preserving protocol for
information filtering processes, and by utilizing suitable
filtering techniques we have been able to realize an approach
which actually preserves privacy in Information Filtering
architectures in a multilateral way. As a proof of concept, we
have used the approach within an application supporting
users in planning entertainment-related activities.
We envision various areas of future work: To achieve
complete user privacy, the protocol should be extended in order
to keep the recommendations themselves private as well.
Generally, the feedback we have obtained from users of the
Smart Event Assistant indicates that most users are
indifferent to privacy in the context of entertainment-related
personal information. Therefore, we intend to utilize the
approach to realize a Recommender System in a more
privacy-sensitive domain, such as health or finance, which would
enable us to better evaluate user acceptance.
7. ACKNOWLEDGMENTS
We would like to thank our colleagues Andreas Rieger
and Nicolas Braun, who co-developed the Smart Event
Assistant. The Smart Event Assistant is based on a project
funded by the German Federal Ministry of Education and
Research under Grant No. 01AK037, and a project funded
by the German Federal Ministry of Economics and Labour
under Grant No. 01MD506.
8. REFERENCES
[1] R. Agrawal, J. Kiernan, R. Srikant, and Y. Xu.
Hippocratic databases. In 28th Int'l Conf. on Very
Large Databases (VLDB), Hong Kong, 2002.
[2] R. Agrawal and R. Srikant. Privacy-preserving data
mining. In Proc. of the ACM SIGMOD Conference on
Management of Data, pages 439-450. ACM Press,
May 2000.
[3] E. Aïmeur, G. Brassard, J. M. Fernandez, and F. S.
Mani Onana. Privacy-preserving demographic
filtering. In SAC '06: Proceedings of the 2006 ACM
symposium on Applied computing, pages 872-878, New
York, NY, USA, 2006. ACM Press.
[4] M. Bawa, R. Bayardo, Jr., and R. Agrawal.
Privacy-preserving indexing of documents on the
network. In Proc. of the 2003 VLDB, 2003.
[5] R. Burke. Hybrid recommender systems: Survey and
experiments. User Modeling and User-Adapted
Interaction, 12(4):331-370, 2002.
[6] J. Canny. Collaborative filtering with privacy. In
IEEE Symposium on Security and Privacy, pages
45-57, 2002.
[7] B. Chor, O. Goldreich, E. Kushilevitz, and M. Sudan.
Private information retrieval. In IEEE Symposium on
Foundations of Computer Science, pages 41-50, 1995.
[8] R. Cissée. An architecture for agent-based
privacy-preserving information filtering. In Proceedings
of the 6th International Workshop on Trust, Privacy,
Deception and Fraud in Agent Systems, 2003.
[9] M. Deshpande and G. Karypis. Item-based top-N
recommendation algorithms. ACM Trans. Inf. Syst.,
22(1):143-177, 2004.
[10] L. Foner. Political artifacts and personal privacy: The
yenta multi-agent distributed matchmaking system.
PhD thesis, MIT, 1999.
[11] Foundation for Intelligent Physical Agents. FIPA
Abstract Architecture Specification, Version L, 2002.
[12] S. Fricke, K. Bsufka, J. Keiser, T. Schmidt,
R. Sesseler, and S. Albayrak. Agent-based telematic
services and telecom applications. Communications of
the ACM, 44(4), April 2001.
[13] T. Garfinkel, M. Rosenblum, and D. Boneh. Flexible
OS support and applications for trusted computing. In
Proceedings of HotOS-IX, May 2003.
[14] T. Geissler and O. Kroll-Peters. Applying security
standards to multi agent systems. In AAMAS
Workshop: Safety & Security in Multiagent Systems, 2004.
[15] O. Goldreich, S. Micali, and A. Wigderson. How to
play any mental game. In Proc. of STOC '87, pages
218-229, New York, NY, USA, 1987. ACM Press.
[16] S. Jha, L. Kruger, and P. McDaniel. Privacy
preserving clustering. In ESORICS 2005, volume 3679
of LNCS. Springer, 2005.
[17] G. Karjoth, M. Schunter, and M. Waidner. The
platform for enterprise privacy practices:
Privacy-enabled management of customer data. In
PET 2002, volume 2482 of LNCS. Springer, 2003.
[18] H. Link, J. Saia, T. Lane, and R. A. LaViolette. The
impact of social networks on multi-agent recommender
systems. In Proc. of the Workshop on Cooperative
Multi-Agent Learning (ECML/PKDD "05), 2005.
[19] B. N. Miller, J. A. Konstan, and J. Riedl. PocketLens:
Toward a personal recommender system. ACM Trans.
Inf. Syst., 22(3):437-476, 2004.
[20] H. Polat and W. Du. SVD-based collaborative filtering
with privacy. In Proc. of SAC '05, pages 791-795,
New York, NY, USA, 2005. ACM Press.
[21] T. Schmidt. Advanced Security Infrastructure for
Multi-Agent-Applications in the Telematic Area. PhD
thesis, Technische Universität Berlin, 2002.
[22] G. J. Simmons. The prisoners' problem and the
subliminal channel. In D. Chaum, editor, Proc. of
Crypto '83, pages 51-67. Plenum Press, 1984.
[23] M. Teltzrow and A. Kobsa. Impacts of user privacy
preferences on personalized systems: a comparative
study. In Designing personalized user experiences in
eCommerce, pages 315-332. 2004.
[24] D. Weyns, H. Parunak, F. Michel, T. Holvoet, and
J. Ferber. Environments for multiagent systems:
State-of-the-art and research challenges. In
Environments for Multiagent Systems, volume 3477 of
LNCS. Springer, 2005.
[25] J. Wohltorf, R. Cissée, and A. Rieger. Berlintainment:
An agent-based context-aware entertainment planning
system. IEEE Communications Magazine, 43(6), 2005.
[26] M. Wooldridge and N. R. Jennings. Intelligent agents:
Theory and practice. Knowledge Engineering Review,
10(2):115-152, 1995.
[27] A. Yao. Protocols for secure computation. In Proc. of
IEEE FOCS '82, pages 160-164, 1982.
| privacy;secure multi-party computation;privacy-preserving recommender system;information filter;recommender system;multi-agent system;distributed artificial intelligence-multiagent system;trusted software;java security model;feature-based approach;learning-based approach;trust;information search;multi-agent system technology;retrieval-information filtering |
train_I-60 | On the Benefits of Cheating by Self-Interested Agents in Vehicular Networks | As more and more cars are equipped with GPS and Wi-Fi transmitters, it becomes easier to design systems that will allow cars to interact autonomously with each other, e.g., regarding traffic on the roads. Indeed, car manufacturers are already equipping their cars with such devices. Though these systems are currently proprietary, we envision a natural evolution where agent applications will be developed for vehicular systems, e.g., to improve car routing in dense urban areas. Nonetheless, this new technology and agent applications may lead to the emergence of self-interested car owners, who will care more about their own welfare than the social welfare of their peers. These car owners will try to manipulate their agents such that they transmit false data to their peers. Using a simulation environment, which models a real transportation network in a large city, we demonstrate the benefits achieved by self-interested agents if no counter-measures are implemented. | 1. INTRODUCTION
As technology advances, more and more cars are being
equipped with devices, which enable them to act as
autonomous agents. An important advancement in this
respect is the introduction of ad-hoc communication networks
(such as Wi-Fi), which enable the exchange of information
between cars, e.g., for locating road congestions [1] and
optimal routes [15] or improving traffic safety [2].
Vehicle-To-Vehicle (V2V) communication is already offered
on board by some car manufacturers, enabling the
collaboration between different cars on the road. For example, GM's
proprietary algorithm [6], called the threat assessment
algorithm, constantly calculates, in real time, other vehicles'
positions and speeds, and enables messaging other cars when
a collision is imminent. Also, Honda has begun testing its
system in which vehicles talk with each other and with the
highway system itself [7].
In this paper, we investigate the attraction of being a
selfish agent in vehicular networks. That is, we investigate the
benefits achieved by car owners, who tamper with on-board
devices and incorporate their own self-interested agents in
them, which act for their benefit. We build on the notion
of Gossip Networks, introduced by Shavitt and Shay [15], in
which the agents can obtain road congestion information by
gossiping with peer agents using ad-hoc communication.
We recognize two typical behaviors that the self-interested
agents could embark upon, in the context of vehicular
networks. In the first behavior, described in Section 4, the
objective of the self-interested agents is to maximize their
own utility, expressed by their average journey duration on
the road. This situation can be modeled in real life by car
owners, whose aim is to reach their destination as fast as
possible, and would like to have their way free of other cars. To
this end they will let their agents cheat the other agents, by
injecting false information into the network. This is achieved
by reporting heavy traffic values for the roads on their route
to other agents in the network in the hope of making the
other agents believe that the route is jammed, and causing
them to choose a different route.
The second type of behavior, described in Section 5, is
modeled by the self-interested agents" objective to cause
disorder in the network, more than they are interested in
maximizing their own utility. This kind of behavior could
be generated, for example, by vandalism or terrorists, who
aim to cause as much mayhem in the network as possible.
We note that the introduction of self-interested agents to
the network, would most probably motivate other agents to
try and detect these agents in order to minimize their effect.
This is similar, though in a different context, to the problem
introduced by Lamport et al. [8] as the Byzantine Generals
Problem. However, the introduction of mechanisms to deal
with self-interested agents is costly and time consuming. In
this paper we focus mainly on the attractiveness of selfish
behavior by these agents, while we also provide some insights
into the possibility of detecting self-interested agents and
minimizing their effect.
To demonstrate the benefits achieved by self-interested
agents, we have used a simulation environment, which
models the transportation network in a central part of a large
real city. The simulation environment is further described in
Section 3. Our simulations provide insights to the benefits
of self-interested agents cheating. Our findings can motivate
future research in this field in order to minimize the effect
of selfish-agents.
The rest of this paper is organized as follows. In Section 2
we review related work in the field of self-interested agents
and V2V communications. We continue and formally
describe our environment and simulation settings in Section 3.
Sections 4 and 5 describe the different behaviors of the
self-interested agents and our findings. Finally, we conclude the
paper with open questions and future research directions.
2. RELATED WORK
In their seminal paper, Lamport et al. [8] describe the
Byzantine Generals problem, in which processors need to
handle malfunctioning components that give conflicting
information to different parts of the system. They also present
a model in which not all agents are connected, and thus an
agent cannot send a message to all the other agents. Dolev et
al. [5] have built on this problem and analyzed the number
of faulty agents that can be tolerated in order to eventually
reach the right conclusion about true data. Similar work
is presented by Minsky et al. [11], who discuss techniques
for constructing gossip protocols that are resilient to up to
t malicious host failures. As opposed to the above works,
our work focuses on vehicular networks, in which the agents
are constantly roaming the network and exchanging data.
Also, the domain of transportation networks introduces
dynamic data, as the load of the roads is subject to change. In
addition, the system in transportation networks has a
feedback mechanism, since the load in the roads depends on the
reports and the movement of the agents themselves.
Malkhi et al. [10] present a gossip algorithm for
propagating information in a network of processors, in the presence
of malicious parties. Their algorithm prevents the spreading
of spurious gossip and diffuses genuine data. This is done
in time, which is logarithmic in the number of processes
and linear in the number of corrupt parties. Nevertheless,
their work assumes that the network is static and also that
the agents are static (they discuss a network of processors).
This is not true for transportation networks. For example,
in our model, agents might gossip about heavy traffic load
of a specific road, which is currently jammed, yet this
information might be false several minutes later, leaving the
agents to speculate whether the spreading agents are indeed
malicious or not. In addition, as the agents are constantly
moving, each agent cannot choose with whom he interacts
and exchanges data.
In the context of analyzing the data and deciding whether
the data is true or not, researchers have focused on
distributed reputation systems or decision mechanisms to decide
whether or not to share data.
Yu and Singh [18] build a social network of agents'
reputations. Every agent keeps a list of its neighbors, which can
be changed over time, and computes the trustworthiness of
other agents by updating the current values of testimonies
obtained from reliable referral chains. After a bad
experience with another agent every agent decreases the rating of
the "bad" agent and propagates this bad experience
throughout the network so that other agents can update their
ratings accordingly. This approach might be implemented in
our domain to allow gossip agents to identify self-interested
agents and thus minimize their effect. However, the
implementation of such a mechanism is an expensive addition
to the infrastructure of autonomous agents in
transportation networks. This is mainly due to the dynamic nature of
the list of neighbors in transportation networks. Thus, not
only does it require maintaining the neighbors" list, since the
neighbors change frequently, but it is also harder to build a
good reputation system.
Leckie et al. [9] focus on the issue of when to share
information between the agents in the network. Their domain
involves monitoring distributed sensors. Each agent
monitors a subset of the sensors and evaluates a hypothesis based
on the local measurements of its sensors. If the agent
believes that a hypothesis is sufficiently likely, he exchanges this
information with the other agents. In their domain, the
goal of all the agents is to reach a global consensus about
the likelihood of the hypothesis. In our domain, however, as
the agents constantly move, they have many samples, which
they exchange with each other. Also, the data might also
vary (e.g., a road might be reported as jammed, but a few
minutes later it could be free), thus making it harder to
decide whether to trust the agent, who sent the data.
Moreover, the agent might lie only about a subset of its samples,
thus making it even harder to detect his cheating.
Some work has been done in the context of gossip networks
or transportation networks regarding the spreading of data
and its dissemination.
Datta et al. [4] focus on information dissemination in
mobile ad-hoc networks (MANET). They propose an
autonomous gossiping algorithm for an infrastructure-less
mobile ad-hoc networking environment. Their autonomous
gossiping algorithm uses a greedy mechanism to spread data
items in the network. The data items are spread to
immediate neighbors that are interested in the information, and
avoid ones that are not interested. The decision which node
is interested in the information is made by the data item
itself, using heuristics. However, their work concentrates on
the movement of the data itself, and not on the agents who
propagate the data. This is different from our scenario in
which each agent maintains the data it has gathered, while
the agent itself roams the road and is responsible (and has
the capabilities) for spreading the data to other agents in
the network.
Das et al. [3] propose a cooperative strategy for content
delivery in vehicular networks. In their domain, peers
download a file from a mesh and exchange pieces of the file among
themselves. We, on the other hand, are interested in
vehicular networks in which there is no rule forcing the agents to
cooperate among themselves.
Shibata et al. [16] propose a method for cars to
cooperatively and autonomously collect traffic jam statistics to
estimate arrival time to destinations for each car. The
communication is based on IEEE 802.11, without using a fixed
infrastructure on the ground. While we use the same
domain, we focus on a different problem. Shibata et al. [16]
mainly focus on efficiently broadcasting the data between
agents (e.g., avoid duplicates and communication overhead),
as we focus on the case where agents are not cooperative in
nature, and on how selfish agents affect other agents and the
network load.
Wang et al. [17] also assert, in the context of wireless
networks, that individual agents are likely to do what is most
beneficial for their owners, and will act selfishly. They design
a protocol for communication in networks in which all agents
are selfish. Their protocol motivates every agent to
maximize its profit only when it behaves truthfully (a mechanism
of incentive compatibility). However, the domain of wireless
networks is quite different from the domain of
transportation networks. In the wireless network, the wireless terminal
is required to contribute its local resources to transmit data.
Thus, Wang et al. [17] use a payment mechanism, which
attaches costs to terminals when transmitting data, and thus
enables them to maximize their utility when transmitting
data, instead of acting selfishly. Unlike this, in the context
of transportation networks, constructing such a mechanism
is not quite a straightforward task, as self-interested agents
and regular gossip agents might incur the same cost when
transmitting data. The difference between the two types of
agents only exists regarding the credibility of the data they
exchange.
In the next section, we will describe our transportation
network model and gossiping between the agents. We will
also describe the different agents in our system.
3. MODEL AND SIMULATIONS
We first describe the formal transportation network model,
and then we describe the simulations designs.
3.1 Formal Model
Following Shavitt and Shay [15] and Parshani [13], the
transportation network is represented by a directed graph
G(V, E), where V is the set of vertices representing
junctions, and E is the set of edges, representing roads. An edge
e ∈ E is associated with a weight w > 0, which specifies
the time it takes to traverse the road associated with that
edge. The roads' weights vary in time according to the
network (traffic) load. Each car, which is associated with an
autonomous agent, is given a pair of origin and destination
points (vertices). A journey is defined as the (not
necessarily simple) path taken by an agent between the origin vertex
and the destination vertex. We assume that there is always
a path between a source and a destination. A journey length
is defined as the sum of all weights of the edges constituting
this path. Every agent has to travel between its origin and
destination points and aims to minimize its journey length.
Initially, agents are ignorant about the state of the roads.
Regular agents are only capable of gathering information
about the roads as they traverse them. However, we assume
that some agents have means of inter-vehicle
communication (e.g., IEEE 802.11) with a given communication range,
which enables them to communicate with other agents with
the same device. Those agents are referred to as gossip
agents. Since the communication range is limited, the
exchange of information using gossiping is done in one of two
ways: (a) between gossip agents passing one another, or (b)
between gossip agents located at the same junction. We
assume that each agent stores the most recent information it
has received or gathered about the edges in the network.
A subset of the gossip agents are those agents who are
self-interested and manipulate the devices for their own benefit.
We will refer to these agents as self-interested agents. A
detailed description of their behavior is given in Sections 4
and 5.
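For illustration, a minimal Python sketch of this model follows. The function and variable names are illustrative assumptions, not the simulator used in the paper: a journey length can be computed with Dijkstra's algorithm over the weighted directed graph, and gossiped road reports can be merged by keeping, for every edge, only the most recent report, as described above.

import heapq

def shortest_path_length(graph, source, target):
    """Dijkstra over graph = {u: {v: weight}}; returns the journey length,
    i.e. the sum of the edge weights along the shortest path."""
    dist, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

def merge_reports(own, received):
    """Keep, for every edge, the most recent (timestamp, weight) report,
    since road weights change over time."""
    for edge, (ts, w) in received.items():
        if edge not in own or own[edge][0] < ts:
            own[edge] = (ts, w)
    return own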
3.2 Simulation Design
Building on [13], the network in our simulations replicates
a central part of a large city, and consists of 50 junctions
and 150 roads, which are approximately the number of main
streets in the city. Each simulation consists of 6 iterations.
The basic time unit of the iteration is a step, which
is equivalent to about 30 seconds. Each iteration simulates six hours
of movements. The average number of cars passing through
the network during the iteration is about 70,000 and the
average number of cars in the network at a specific time unit
is about 3,500 cars. In each iteration the same agents are
used with the same origin and destination points, whereas
the data collected in earlier iterations is preserved in the
future iterations (referred to as the history of the agent).
This allows us to simulate, to some extent, a daily routine in the
transportation network (e.g., a working week).
Each of the experiments that we describe below is run
with 5 different traffic scenarios. Each such traffic scenario
differs from one another by the initial load of the roads and
the designated routes of the agents (cars) in the network.
For each such scenario 5 simulations are run, creating a total
of 25 simulations for each experiment.
It has been shown by Parshani et al. [13, 14] that the
information propagation in the network is very efficient when
the percentage of gossiping agents is 10% or more. Yet, due
to congestion caused by too many cars rushing to what is
reported as the less congested part of the network, 20-30%
of gossiping agents leads to the most efficient routing results
in their experiments. Thus, in our simulation, we focus only
on simulations in which the percentage of gossip agents is
20%.
The simulations were done with different percentages of
self-interested agents. To gain statistical significance we ran
each simulation with changes in the set of the gossip agents,
and the set of the self-interested agents.
In order to obtain a comparable scale, the results were
normalized. The normalized values were calculated by
comparing each agent's result to his results when the same
scenario was run with no self-interested agents. This was done
for all of the iterations. Using the normalized values enabled
us to see how worse (or better) each agent would perform
compared to the basic setting. For example, if an average
journey length of a certain agent in iteration 1 with no
self-interested agent was 50, and the length was 60 in the same
scenario and iteration in which self-interested agents were
involved, then the normalized value for that agent would be
60/50 = 1.2.
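A small sketch of this normalization step (the data structures are hypothetical; the 60/50 example above yields 1.2):

def normalized_values(with_liars, baseline):
    """Both arguments map (agent_id, iteration) -> journey length;
    the baseline comes from the same scenario with no self-interested agents."""
    return {key: with_liars[key] / baseline[key]
            for key in with_liars if key in baseline}

# e.g. {("a17", 1): 60} against a baseline of {("a17", 1): 50} yields 1.2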
More details regarding the simulations are described in
Sections 4 and 5.
4. SPREADING LIES, MAXIMIZING
UTILITY
In the first set of experiments we investigated the benefits
achieved by the self-interested agents, whose aim was to
minimize their own journey length. The self-interested agents
adopted a cheating approach, in which they sent false data
to their peers.
In this section we first describe the simulations with the
self-interested agents. Then, we model the scenario as a
game with two types of agents, and prove that the
equilibrium result can only be achieved when there is no efficient
exchange of gossiping information in the network.
4.1 Modeling the Self-Interested Agents' Behavior
While the gossip agents gather data and send it to other
agents, the self-interested agents" behavior is modeled as
follows:
1. Calculate the shortest path from origin to destination.
2. Communicate the following data to other agents:
(a) If the road is not in the agent's route - send the
true data about it (e.g., data about roads it has
received from other agents)
(b) For all roads in the agent"s route, which the agent
has not yet traversed, send a random high weight.
Basically, the self-interested agent acts the same as the
gossip agent. It collects data regarding the weight of the roads
(either by traversing the road or by getting the data from
other agents) and sends the data it has collected to other
agents. However, the self-interested agent acts differently
when the road is in its route. Since the agent's goal is to
reach its destination as fast as possible, the agent will falsely
report that all the roads in its route are heavily congested.
This is in order to free the path for itself, by making other
agents recalculate their paths, this time without including
roads on the self-interested agent"s route. To this end, for
all the roads in its route, which the agent has not yet passed,
the agent generates a random weight, which is above the
average weight of the roads in the network. It then associates
these new weights with the roads in its route and sends them
to the other agents.
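For illustration, a minimal Python sketch of this reporting rule is given below. The function and parameter names are ours, not from the simulator used in the paper; in particular, max_weight is an assumed upper bound on road traversal times (comparable to the Tmax bound introduced in Section 4.3).

import random

def build_report(known_weights, remaining_route, avg_weight, max_weight):
    """Self-interested reporting rule: lie about the roads still ahead on
    the agent's own route, report everything else truthfully."""
    report = {}
    for edge, weight in known_weights.items():
        if edge in remaining_route:
            # lie: a random weight above the network-wide average,
            # pretending the road is heavily congested
            report[edge] = random.uniform(avg_weight, max_weight)
        else:
            # roads off the agent's route are reported truthfully
            report[edge] = weight
    return report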
While an agent can also divert cars from its route by
falsely reporting congested roads parallel to its route as
free, this behavior is not very likely since other agents,
attempting to use the roads, will find the mistake within a
short time and spread the true congestion on the road. On
the other hand, if an agent manages to persuade other agents
not to use a road, it will be harder for them to detect that
the said roads are not congested.
In addition, to avoid being influenced by its own lies and
other lies spreading in the network, all self-interested agents
will ignore data received about roads with heavy traffic (note
that data about roads that are not heavily congested will not
be ignored).1
In the next subsection we describe the simulation results,
involving the self-interested agents.
4.2 Simulation Results
To test the benefits of cheating by the self-interested agents
we ran several experiments. In the first set of experiments,
we created a scenario, in which a small group of self-interested
agents spread lies on the same route, and tested its
effect on the journey length of all the agents in the network.
1 In other simulations we have run, in which there had been
several real congestions in the network, we indeed saw that
even when the roads were jammed, the self-interested agents
were less affected if they ignored all reported heavy traffic,
since by doing so they also discarded all lies roaming the network.
Table 1: Normalized journey length values,
self-interested agents with the same route
Iteration Number | Self-Interested Agents | Gossip - SR | Gossip - Others | Regular Agents
1 1.38 1.27 1.06 1.06
2 0.95 1.56 1.18 1.14
3 1.00 1.86 1.28 1.17
4 1.06 2.93 1.35 1.16
5 1.13 2.00 1.40 1.17
6 1.08 2.02 1.43 1.18
Thus, several cars, which had the same origin and
destination points, were designated as self-interested agents. In this
simulation, we selected only 6 agents to be part of the group
of the self-interested agents, as we wanted to investigate the
effect achieved by only a small number of agents.
In each simulation in this experiment, 6 different agents
were randomly chosen to be part of the group of self-interested
agents, as described above. In addition, one road, on the
route of these agents, was randomly selected to be partially
blocked, letting only one car go through that road at each
time step. About 8,000 agents were randomly selected as
regular gossip agents, and the other 32,000 agents were
designated as regular agents.
We analyzed the average journey length of the self-interested
agents as opposed to the average journey length of other
regular gossip agents traveling along the same route. Table
1 summarizes the normalized results for the self-interested
agents, the gossip agents (those having the same origin and
destination points as the self-interested agents, denoted
Gossip - SR, and all other gossip agents, denoted Gossip -
Others) and the regular agents, as a function of the iteration
number.
We can see from the results that the first time the
self-interested agents traveled the route while spreading the false
data about the roads did not help them (using the paired
t-test we show that those agents had significantly lower
journey lengths in the scenario in which they did not spread any
lies, with p < 0.01). This is mainly due to the fact that
the lies do not bypass the self-interested agent and reach
other cars that are ahead of the self-interested car on the
same route. Thus, spreading the lies in the first iteration
does not help the self-interested agent to free the route he
is about to travel, in the first iteration.
Only when the self-interested agents had repeated their
journey in the next iteration (iteration 2) did it help them
significantly (p = 0.04). The reason for this is that other
gossip agents received this data and used it to recalculate their
shortest path, thus avoiding entrance to the roads, for which
the self-interested agents had spread false information about
congestion. It is also interesting to note the large value
attained by the self-interested agents in the first iteration.
This is mainly due to several self-interested agents, who
entered the jammed road. This situation occurred since the
self-interested agents ignored all heavy traffic data, and thus
ignored the fact that the road was jammed. As they started
spreading lies about this road, more cars shifted from this
route, thus making the road free for the future iterations.
However, we also recall that the self-interested agents
ignore all information about the heavy traffic roads. Thus,
Table 2: Normalized journey length values,
spreading lies for a beneficiary agent
Iteration Number | Beneficiary Agent | Gossip - SR | Gossip - Others | Regular Agents
1 1.10 1.05 0.94 1.11
2 1.09 1.14 0.99 1.14
3 1.04 1.19 1.02 1.14
4 1.03 1.26 1.03 1.14
5 1.05 1.32 1.05 1.12
6 0.92 1.40 1.06 1.11
when the network becomes congested, more self-interested
cars are affected, since they might enter jammed roads,
which they would otherwise not have entered. This can be
seen, for example, in iterations 4-6, in which the normalized
value of the self-interested agents increased above 1.00.
Using the paired t-test to compare these values with the values
achieved by these agents when no lies are used, we see that
there is no significant difference between the two scenarios.
As opposed to the gossip agents, we can see how little
effect the self-interested agents have on the regular agents.
Compared to the gossip agents on the same route, whose journeys
were as much as 193% longer when self-interested agents were
introduced, the average journey length for the regular agents
increased by only about 15%. This result
is even lower than the effect on other gossip agents in the
entire network.
Since we noticed that cheating by the self-interested agents
does not benefit them in the first iteration, we devised
another set of experiments. In the second set of experiments,
the self-interested agents have the objective to help another
agent, who is supposed to enter the network some time
after the self-interested agent entered. We refer to the latter
agent as the beneficiary agent. Just like a self-interested
agent, the beneficiary agent also ignores all data regarding
heavy traffic. In real-life this can be modeled, for
example, by a husband, who would like to help his wife find a
faster route to her destination. Table 2 summarizes the
normalized values for the different agents. As in the first set
of experiments, 5 simulations were run for each scenario,
with a total of 25 simulations. In each of these simulation
one agent was randomly selected as a self-interested agent,
and then another agent, with the same origin as the
self-interested agent, was randomly selected as the beneficiary
agent. The other 8,000 and 32,000 agents were designated
as regular gossip agents and regular agents, respectively.
We can see that as the number of iterations advances, the
normalized value for the beneficiary agent decreases. In this
scenario, just like the previous one, in the first iterations,
not only does the beneficiary agent not avoid the jammed
roads, since he ignores all heavy traffic, he also does not
benefit from the lies spread by the self-interested agent. This
is due to the fact that the lies are not yet incorporated by
other gossip agents. Thus, if we compare the average journey
length in the first iteration when lies are spread and when
there are no lies, the average is significantly lower when there
are no lies (p < 0.03). On the other hand, if we compare
the average journey length in all of the iterations, there is
no significant difference between the two settings. Still, in
most of the iterations, the average journey length of the
beneficiary agent is longer than in the case when no lies are
spread.
We can also see the impact on the other agents in the
system. While the gossip agents, which are not on the
route of the beneficiary agent, virtually are not affected by
the self-interested agent, those on the route and the
regular agents are affected and have higher normalized values.
That is, even with just one self-interested car, we can see
that both the gossip agents that follow the same route as
the lies spread by the self-interested agents, and other
regular agents, increase their journey length by more than 14%.
In our third set of experiments we examined a setting in
which there was an increasing number of agents, and the
agents did not necessarily have the same origin and
destination points. To model this we randomly selected
self-interested agents, whose objective was to minimize their
average journey length, assuming the cars were repeating
their journeys (that is, more than one iteration was made).
As opposed to the first set of experiments, in this set the
self-interested agents were selected randomly, and we did
not enforce the constraint that they will all have the same
origin and destination points.
As in the previous sets of experiments we ran 5
different simulations per scenario. In each simulation 11 runs
were made, each run with different numbers of self-interested
agents: 0 (no self-interested agents), 1, 2, 4, 8, and 16. Each
agent adopted the behavior modeled in Section 4.1. Figure
1 shows the normalized value achieved by the self-interested
agents as a function of their number. The figure shows these
values for iterations 2-6. The first iteration is not shown
intentionally, as we assume repeated journeys. Also, as we have
seen in the previous sets of experiments, and for the reasons
explained there, the self-interested agents do not gain much
from their behavior in the first iteration.
Figure 1: Self-interested agents' normalized values as a function
of the number of self-interested agents (0-16), shown separately
for iterations 2-6.
Using these simulations we examined up to what number of
randomly selected self-interested agents the agents still benefit
from their selfish behavior. We can see that up to 8 self-interested
agents, the average normalized value is below 1. That is,
they benefit from their malicious behavior. In the case of
one self-interested agent there is a significant difference
between the average journey length of when the agent spread
lies and when no lies are spread (p < 0.001), while when
there are 2, 4, 8 and 16 self-interested agents there is no
significant difference. Yet, as the number of self-interested
agents increases, the normalized value also increases. In
such cases, the normalized value is larger than 1, and the
self-interested agents' journey length becomes significantly
higher than their journey length in the setting where there are
no self-interested agents in the system.
In the next subsection we analyze the scenario as a game
and show that when in equilibrium the exchange of gossiping
between the agents becomes inefficient.
4.3 When Gossiping is Inefficient
We continued and modeled our scenario as a game, in
order to find the equilibrium. There are two possible types for
the agents: (a) regular gossip agents, and (b) self-interested
agents. Each of these agents is a representative of its group,
and thus all agents in the same group have similar behavior.
We note that the advantage of using gossiping in
transportation networks is to allow the agents to detect anomalies
in the network (e.g., traffic jams) and to quickly adapt to
them by recalculating their routes [14]. We also assume that
the objective of the self-interested agents is to minimize their
own journey length, thus they spread lies on their routes, as
described in Section 4.1. We also assume that sophisticated
methods for identifying the self-interested agents or
managing reputation are not used. This is mainly due to the
complexity of incorporating and maintaining such mechanisms,
as well as due to the dynamics of the network, in which
interactions between different agents are frequent, agents may
leave the network, and data about the road might change as
time progresses (e.g., a road might be reported by a regular
gossip agent as free at a given time, yet it may currently be
jammed due to heavy traffic on the road).
Let Tavg be the average time it takes to traverse an edge
in the transportation network (that is, the average load of
an edge). Let Tmax be the maximum time it takes to
traverse an edge. We will investigate the game, in which the
self-interested and the regular gossip agents can choose the
following actions. The self-interested agents can choose how
much to lie, that is, they can choose to spread how long (not
necessarily the true duration) it takes to traverse certain
roads. Since the objective of the self-interested agents is to
spread messages as though some roads are jammed, the
traversal time they report is obviously larger than the average
time. We denote the time the self-interested agents spread
as Ts, such that Tavg ≤ Ts ≤ Tmax. Motivated by the
results of the simulations we have described above, we saw that
the agents are less affected if they discard the heavy traffic
values. Thus, the regular gossip cars, attempting to
mitigate the effect of the liars, can choose a strategy to ignore
abnormal congestion values above a certain threshold, Tg.
Obviously, Tavg ≤ Tg ≤ Tmax. In order to prevent the
gossip agents from detecting the lies and just discarding those
values, the self-interested agents send lies in a given range,
[Ts, Tmax], with an inverse geometric distribution, that is,
the higher the T value, the higher its frequency.
Now we construct the utility functions for each type of
agents, which is defined by the values of Ts and Tg. If the
self-interested agents spread traversal times higher than or
equal to the regular gossip cars" threshold, they will not
benefit from those lies. Thus, the utility value of the
self-interested agents in this case is 0. On the other hand, if the
self-interested agents spread traversal time which is lower
than the threshold, they will gain a positive utility value.
From the regular gossip agents' point of view, if they accept
messages from the self-interested agents, then they
incorporate the lies in their calculation, thus they will lose utility
points. On the other hand, if they discard the false values
the self-interested agents send, that is, they do not
incorporate the lies, they will gain utility values. Formally, we use
us to denote the utility of the self-interested agents and ug
to denote the utility of the regular gossip agents. We also
denote the strategy profile in the game as {Ts, Tg}. The
utility functions are defined as:

us(Ts, Tg) = 0               if Ts ≥ Tg
             Ts − Tavg + 1   if Ts < Tg          (1)

ug(Ts, Tg) = Tg − Tavg       if Ts ≥ Tg
             Ts − Tg         if Ts < Tg          (2)
We are interested in finding the Nash equilibrium. We
recall from [12] that a Nash equilibrium is a strategy
profile in which no player has anything to gain by deviating from
his strategy, given that the other agent follows his strategy
profile. Formally, let (S, u) denote the game, where S is
the set of strategy profiles and u is the set of utility
functions. When each agent i ∈ {regular gossip, self-interested}
chooses a strategy Ti, resulting in a strategy profile T =
(Ts, Tg), then agent i obtains a utility of ui(T). A strategy
profile T* ∈ S is a Nash equilibrium if no deviation in
strategy by any single agent is profitable, that is, if for all i,
ui(T*) ≥ ui(Ti, T*−i). That is, (Ts, Tg) is a Nash equilibrium
if the self-interested agents have no other value Ts' such that
us(Ts', Tg) > us(Ts, Tg), and similarly for the gossip agents.
We now have the following theorem.
Theorem 4.1. (Tavg, Tavg) is the only Nash equilibrium.
Proof. First we will show that (Tavg, Tavg) is a Nash
equilibrium. Assume, by contradiction, that the gossip agents
choose another value Tg' > Tavg. Thus, ug(Tavg, Tg') =
Tavg − Tg' < 0. On the other hand, ug(Tavg, Tavg) = 0.
Thus, the regular gossip agents have no incentive to deviate
from this strategy. The self-interested agents also have no
incentive to deviate from this strategy. By contradiction,
again assume that the self-interested agents choose another
value Ts' > Tavg. Thus, us(Ts', Tavg) = 0, while
us(Tavg, Tavg) = 0.
We will now show that the above solution is unique. We
will show that any other tuple (Ts, Tg), such that Tavg <
Tg ≤ Tmax and Tavg < Ts ≤ Tmax, is not a Nash equilibrium.
We have three cases. In the first, Tavg < Tg < Ts ≤ Tmax.
Thus, us(Ts, Tg) = 0 and ug(Ts, Tg) = Tg − Tavg. In this
case, the regular gossip agents have an incentive to deviate
and choose another strategy Tg + 1, since by doing so they
increase their own utility: ug(Ts, Tg + 1) = Tg + 1 − Tavg.
In the second case we have Tavg < Ts < Tg ≤ Tmax. Thus,
ug(Ts, Tg) = Ts − Tg < 0. Also, the regular gossip agents
have an incentive to deviate and choose another strategy
Tg − 1, in which their utility value is higher: ug(Ts, Tg − 1) =
Ts − Tg + 1.
In the last case we have Tavg < Ts = Tg ≤ Tmax. Thus,
us(Ts, Tg) = Ts − Tg = 0. In this case, the self-interested
agents have an incentive to deviate and choose another
strategy Tg − 1, in which their utility value is higher:
us(Tg − 1, Tg) = Tg − 1 − Tavg + 1 = Tg − Tavg > 0.
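For illustration, the theorem can also be checked numerically. The following Python sketch enumerates integer strategy profiles for illustrative values Tavg = 1 and Tmax = 5 (these values are not from the paper) and confirms that (Tavg, Tavg) is the only profile from which neither side can profitably deviate under Equations (1) and (2).

# Brute-force check of Theorem 4.1 on a small integer grid.
T_AVG, T_MAX = 1, 5
R = range(T_AVG, T_MAX + 1)

def u_s(ts, tg):  # self-interested agents, Equation (1)
    return 0 if ts >= tg else ts - T_AVG + 1

def u_g(ts, tg):  # regular gossip agents, Equation (2)
    return tg - T_AVG if ts >= tg else ts - tg

equilibria = [
    (ts, tg) for ts in R for tg in R
    if all(u_s(other, tg) <= u_s(ts, tg) for other in R)   # no profitable Ts deviation
    and all(u_g(ts, other) <= u_g(ts, tg) for other in R)  # no profitable Tg deviation
]
print(equilibria)  # -> [(1, 1)], i.e. only (Tavg, Tavg)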
Table 3: Normalized journey length values for the
first iteration
Self-Interested Agents Number | Self-Interested Agents | Gossip Agents | Regular Agents
1 0.98 1.01 1.05
2 1.09 1.02 1.05
4 1.07 1.02 1.05
8 1.06 1.04 1.05
16 1.03 1.08 1.06
32 1.07 1.17 1.08
50 1.12 1.28 1.1
64 1.14 1.4 1.13
80 1.15 1.5 1.14
100 1.17 1.63 1.16
Table 4: Normalized journey length values for all
iterations
Self-Interested Agents Number | Self-Interested Agents | Gossip Agents | Regular Agents
1 0.98 1.02 1.06
2 1.0 1.04 1.07
4 1.0 1.08 1.07
8 1.01 1.33 1.11
16 1.02 1.89 1.17
32 1.06 2.46 1.25
50 1.13 2.24 1.29
64 1.21 2.2 1.32
80 1.21 2.13 1.27
100 1.26 2.11 1.27
The above theorem proves that the equilibrium point is
reached only when the self-interested agents report traversal
times equal to the average time, while the regular gossip
agents discard all data regarding roads associated with the
average time or higher. Thus, at this equilibrium point the exchange of
gossiping information between agents is inefficient, as the gossip
agents are unable to detect any anomalies in the network.
In the next section we describe another scenario for the
self-interested agents, in which they are not concerned with
their own utility, but rather interested in maximizing the
average journey length of other gossip agents.
5. SPREADING LIES, CAUSING CHAOS
Another possible behavior that can be adopted by
self-interested agents is characterized by their goal to cause
disorder in the network. This can be achieved, for example, by
maximizing the average journey length of all agents, even at
the cost of increasing their own journey length.
To understand the vulnerability of the gossip-based
transportation support system, we ran 5 different simulations for
each scenario. In each simulation different agents were
randomly chosen (using a uniform distribution) to act as
gossip agents, among them self-interested agents were chosen.
Each self-interested agent behaved in the same manner as
described in Section 4.1.
Every simulation consisted of 11 runs with each run
comprising different numbers of self-interested agents: 0 (no
self-interested agents), 1, 2, 4, 8, 16, 32, 50, 64, 80 and 100.
Also, in each run the number of self-interested agents was
increased incrementally. For example, the run with 50
self-interested agents consisted of all the self-interested agents
that were used in the run with 32 self-interested agents, but
with an additional 18 self-interested agents.
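One simple way to construct such incrementally growing agent sets is to draw a single random permutation and take prefixes of it, as in the following sketch (the function name, the seed, and the use of Python's random module are illustrative assumptions, not the procedure used in the paper):

import random

def nested_selfish_sets(gossip_agents,
                        sizes=(1, 2, 4, 8, 16, 32, 50, 64, 80, 100),
                        seed=0):
    """Return {size: set of self-interested agents}, where each set contains
    the previous one (requires len(gossip_agents) >= max(sizes))."""
    rng = random.Random(seed)
    shuffled = rng.sample(list(gossip_agents), max(sizes))
    # e.g. the run with 50 agents contains all 32 agents of the previous run
    return {size: set(shuffled[:size]) for size in sizes}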
Tables 3 and 4 summarize the normalized journey length
for the self-interested agents, the regular gossip agents and
the regular (non-gossip) agents. Table 3 summarizes the
data for the first iteration and Table 4 summarizes the data
for the average of all iterations. Figure 2 demonstrates
the changes in the normalized values for the regular gossip
agents and the regular agents, as a function of the iteration
number. Similar to the results in our first set of experiments,
described in Section 4.2, we can see that randomly selected
self-interested agents who follow different randomly selected
routes do not benefit from their malicious behavior (that is,
their average journey length does not decrease). However,
when only one self-interested agent is involved, it does
benefit from the malicious behavior, even in the first iteration.
The results also indicate that the regular gossip agents are
more sensitive to malicious behavior than regular agents:
the average journey length for the gossip agents increases
significantly (e.g., with 32 self-interested agents the average
journey length for the gossip agents was 146% higher than
in the setting with no self-interested agents at all, as
opposed to an increase of only 25% for the regular agents). In
contrast, these results also indicate that the self-interested
agents do not succeed in causing a significant load in the
network by their malicious behavior.
Figure 2: Gossip and regular agents' normalized values as a
function of the iteration number, shown for 32 and for 100
self-interested agents.
Since the goal of the self-interested agents in this case is
to cause disorder in the network rather than use the lies for
their own benefit, the question arises as to why the
self-interested agents should send lies about their own routes
only. Furthermore, we hypothesized that if they all send lies
about the same major roads, the damage they might inflict on
the entire network would be larger than if each of them had
sent lies about its own route. To examine this
hypothesis, we designed another set of experiments. In this
set of experiments, all the self-interested agents spread lies
about the same 13 main roads in the network. However, the
results show a considerably smaller impact on other gossip and
regular agents in the network.
Table 5: Normalized journey length values for all
iterations, for a network with congested roads.
Self-Interested Agents Number | Self-Interested Agents | Gossip Agents | Regular Agents
1 1.04 1.02 1.22
2 1.06 1.04 1.22
4 1.04 1.06 1.23
8 1.07 1.15 1.26
16 1.09 1.55 1.39
32 1.12 2.25 1.56
50 1.24 2.25 1.60
64 1.28 2.47 1.63
80 1.50 2.41 1.64
100 1.69 2.61 1.75
The average normalized value for
the gossip agents in these simulations was only about 1.07,
as opposed to 1.7 in the original scenario. When analyzing
the results we saw that although the false data was spread,
it did not cause other gossip cars to change their route. The
main reason was that the lies were spread on roads that were
not on the route of the self-interested agents. Thus, it took
the data longer to reach agents on the main roads, and when
the agents reached the relevant roads this data was too old
to be incorporated in the other agents' calculations.
We also examined the impact of sending lies in order to
cause chaos when there are already congestions in the
network. To this end, we simulated a network in which 13 main
roads are jammed. The behavior of the self-interested agents
is as described in Section 4.1, and the self-interested agents
spread lies about their own route. The simulation results,
detailed in Table 5, show that there is a greater incentive
for the self-interested agents to cheat when the network is
already congested, as their cheating causes more damage
to the other agents in the network. For example, whereas
the average journey length of the regular agents increased
only by about 15% in the original scenario, in which the
network was not congested, in this scenario the average journey
length of the agents increased by about 60%.
6. CONCLUSIONS
In this paper we investigated the benefits achieved by
self-interested agents in vehicular networks. Using
simulations we investigated two behaviors that might be taken
by self-interested agents: (a) trying to minimize their
journey length, and (b) trying to cause chaos in the network.
Our simulations indicate that for both behaviors the
self-interested agents have only limited success in achieving their
goal, even if no counter-measures are taken. This is in
contrast to the greater impact inflicted by self-interested agents
in other domains (e.g., E-Commerce). Some reasons for this
are the special characteristics of vehicular networks and their
dynamic nature. While the self-interested agents spread lies,
they cannot choose the agents with whom they will
interact. Also, by the time their lies reach other agents, the lies
might become irrelevant, as more recent data has reached
the same agents.
Motivated by the simulation results, future research in
this field will focus on modeling different behaviors of the
self-interested agents, which might cause more damage to
the network. Another research direction would be to find
ways of minimizing the effect of selfish agents by using
distributed reputation or other measures.
7. REFERENCES
[1] A. Bejan and R. Lawrence. Peer-to-peer cooperative
driving. In Proceedings of ISCIS, pages 259-264,
Orlando, USA, October 2002.
[2] I. Chisalita and N. Shahmehri. A novel architecture for
supporting vehicular communication. In Proceedings of
VTC, pages 1002-1006, Canada, September 2002.
[3] S. Das, A. Nandan, and G. Pau. Spawn: A swarming
protocol for vehicular ad-hoc wireless networks. In
Proceedings of VANET, pages 93-94, 2004.
[4] A. Datta, S. Quarteroni, and K. Aberer. Autonomous
gossiping: A self-organizing epidemic algorithm for
selective information dissemination in mobile ad-hoc
networks. In Proceedings of IC-SNW, pages 126-143,
Maison des Polytechniciens, Paris, France, June 2004.
[5] D. Dolev, R. Reischuk, and H. R. Strong. Early
stopping in byzantine agreement. JACM,
37(4):720-741, 1990.
[6] GM. Threat assessment algorithm.
http://www.nhtsa.dot.gov/people/injury/research/pub/
acas/acas-fieldtest/, 2000.
[7] Honda.
http://world.honda.com/news/2005/c050902.html.
[8] Lamport, Shostak, and Pease. The byzantine generals
problem. In Advances in Ultra-Dependable Distributed
Systems, N. Suri, C. J. Walter, and M. M. Hugue
(Eds.). IEEE Computer Society Press, 1982.
[9] C. Leckie and R. Kotagiri. Policies for sharing
distributed probabilistic beliefs. In Proceedings of
ACSC, pages 285-290, Adelaide, Australia, 2003.
[10] D. Malkhi, E. Pavlov, and Y. Sella. Gossip with
malicious parties. Technical report: 2003-9, School of
Computer Science and Engineering - The Hebrew
University of Jerusalem, Israel, March 2003.
[11] Y. M. Minsky and F. B. Schneider. Tolerating
malicious gossip. Distributed Computing, 16(1):49-68,
February 2003.
[12] M. J. Osborne and A. Rubinstein. A Course In Game
Theory. MIT Press, Cambridge MA, 1994.
[13] R. Parshani. Routing in gossip networks. Master's
thesis, Department of Computer Science, Bar-Ilan
University, Ramat-Gan, Israel, October 2004.
[14] R. Parshani, S. Kraus, and Y. Shavitt. A study of
gossiping in transportation networks. Submitted for
publication, 2006.
[15] Y. Shavitt and A. Shay. Optimal routing in gossip
networks. IEEE Transactions on Vehicular
Technology, 54(4):1473-1487, July 2005.
[16] N. Shibata, T. Terauchi, T. Kitani, K. Yasumoto,
M. Ito, and T. Higashino. A method for sharing traffic
jam information using inter-vehicle communication. In
Proceedings of V2VCOM, USA, 2006.
[17] W. Wang, X.-Y. Li, and Y. Wang. Truthful multicast
routing in selfish wireless networks. In Proceedings of
MobiCom, pages 245-259, USA, 2004.
[18] B. Yu and M. P. Singh. A social mechanism of
reputation management in electronic communities. In
Proceedings of CIA, 2000.
334 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) | artificial social system;chaos;vehicular network;intelligent agent;journey length;social network;self-interested agent;selfinterested agent;agent-base deploy application |
train_I-61 | Distributed Agent-Based Air Traffic Flow Management | Air traffic flow management is one of the fundamental challenges facing the Federal Aviation Administration (FAA) today. The FAA estimates that in 2005 alone, there were over 322,000 hours of delays at a cost to the industry in excess of three billion dollars. Finding reliable and adaptive solutions to the flow management problem is of paramount importance if the Next Generation Air Transportation Systems are to achieve the stated goal of accommodating three times the current traffic volume. This problem is particularly complex as it requires the integration and/or coordination of many factors including: new data (e.g., changing weather info), potentially conflicting priorities (e.g., different airlines), limited resources (e.g., air traffic controllers) and very heavy traffic volume (e.g., over 40,000 flights over the US airspace). In this paper we use FACET - an air traffic flow simulator developed at NASA and used extensively by the FAA and industry - to test a multi-agent algorithm for traffic flow management. An agent is associated with a fix (a specific location in 2D space) and its action consists of setting the separation required among the airplanes going though that fix. Agents use reinforcement learning to set this separation and their actions speed up or slow down traffic to manage congestion. Our FACET based results show that agents receiving personalized rewards reduce congestion by up to 45% over agents receiving a global reward and by up to 67% over a current industry approach (Monte Carlo estimation). | 1. INTRODUCTION
The efficient, safe and reliable management of our ever
increasing air traffic is one of the fundamental challenges
facing the aerospace industry today. On a typical day, more
than 40,000 commercial flights operate within the US airspace
[14]. In order to efficiently and safely route this air traffic,
current traffic flow control relies on a centralized,
hierarchical routing strategy that performs flow projections ranging
from one to six hours. As a consequence, the system is
slow to respond to developing weather or airport conditions,
leading potentially minor local delays to cascade into large
regional congestion. In 2005, weather, routing decisions
and airport conditions caused 437,667 delays, accounting for
322,272 hours of delays. The total cost of these delays was
estimated to exceed three billion dollars by industry [7].
Furthermore, as the traffic flow increases, the current
procedures increase the load on the system, the airports, and
the air traffic controllers (more aircraft per region)
without providing any of them with means to shape the traffic
patterns beyond minor reroutes. The Next Generation Air
Transportation Systems (NGATS) initiative aims to address
these issues, accounting not only for a threefold increase in
traffic, but also for the increasing heterogeneity of aircraft
and decreasing restrictions on flight paths. Unlike many
other flow problems where the increasing traffic is to some
extent absorbed by improved hardware (e.g., more servers
with larger memories and faster CPUs for internet routing)
the air traffic domain needs to find mainly algorithmic
solutions, as the infrastructure (e.g., number of the airports) will
not change significantly to impact the flow problem. There
is therefore a strong need to explore new, distributed and
adaptive solutions to the air flow control problem.
An adaptive, multi-agent approach is an ideal fit to this
naturally distributed problem where the complex interaction
among the aircraft, airports and traffic controllers renders a
pre-determined centralized solution severely suboptimal at
the first deviation from the expected plan. Though a truly
distributed and adaptive solution (e.g., free flight where
aircraft can choose almost any path) offers the most potential
in terms of optimizing flow, it also provides the most
radical departure from the current system. As a consequence, a
shift to such a system presents tremendous difficulties both
in terms of implementation (e.g., scheduling and airport
capacity) and political fallout (e.g., impact on air traffic
controllers). In this paper, we focus on agent based system that
can be implemented readily. In this approach, we assign an
agent to a fix, a specific location in 2D. Because aircraft
flight plans consist of a sequence of fixes, this
representation allows localized fixes (or agents) to have direct impact
on the flow of air traffic (we discuss how flight plans with few
fixes can be handled in more detail in Section 2). In this approach, the agents'
actions are to set the separation that approaching aircraft
are required to keep. This simple agent-action pair allows
the agents to slow down or speed up local traffic and allows
agents to have a significant impact on the overall air traffic
flow. Agents learn the most appropriate separation for their
location using a reinforcement learning (RL) algorithm [15].
In a reinforcement learning approach, the selection of the
agent reward has a large impact on the performance of the
system. In this work, we explore four different agent reward
functions, and compare them to simulating various changes
to the system and selecting the best solution (i.e.,
equivalent to a Monte Carlo search). The first explored reward
consisted of the system reward. The second reward was a
personalized agent reward based on collectives [3, 17, 18].
The last two rewards were personalized rewards based on
estimations to lower the computational burden of the
reward computation. All three personalized rewards aim to
align agent rewards with the system reward and ensure that
the rewards remain sensitive to the agents" actions.
Previous work in this domain fell into one of two distinct
categories: The first principles based modeling approaches
used by domain experts [5, 8, 10, 13] and the algorithmic
approaches explored by the learning and/or agents
community [6, 9, 12]. Though our approach comes from the second
category, we aim to bridge the gap by using FACET to test
our algorithms, a simulator introduced and widely used (i.e.,
over 40 organizations and 5000 users) by work in the first
category [4, 11].
The main contribution of this paper is to present a
distributed adaptive air traffic flow management algorithm that
can be readily implemented and test that algorithm using
FACET. In Section 2, we describe the air traffic flow problem
and the simulation tool, FACET. In Section 3, we present
the agent-based approach, focusing on the selection of the
agents and their action space along with the agents" learning
algorithms and reward structures. In Section 4 we present
results in domains with one and two congestions, explore
different trade-offs of the system objective function, discuss
the scaling properties of the different agent rewards and
discuss the computational cost of achieving certain levels of
performance. Finally, in Section 5, we discuss the
implications of these results and map out the work required
to enable the FAA to reach its stated goal of increasing the
traffic volume by threefold.
2. AIR TRAFFIC FLOW MANAGEMENT
With over 40,000 flights operating within the United States
airspace on an average day, the management of traffic flow
is a complex and demanding problem. Not only are there
concerns for the efficiency of the system, but also for
fairness (e.g., different airlines), adaptability (e.g., developing
weather patterns), reliability and safety (e.g., airport
management). In order to address such issues, the management
of this traffic flow occurs over four hierarchical levels:
1. Separation assurance (2-30 minute decisions);
2. Regional flow (20 minutes to 2 hours);
3. National flow (1-8 hours); and
4. Dynamic airspace configuration (6 hours to 1 year).
Because of the strict guidelines and safety concerns
surrounding aircraft separation, we will not address that control
level in this paper. Similarly, because of the business and
political impact of dynamic airspace configuration, we will
not address the outermost flow control level either. Instead,
we will focus on the regional and national flow management
problems, restricting our impact to decisions with time
horizons between twenty minutes and eight hours. The proposed
algorithm will fit between long term planning by the FAA
and the very short term decisions by air traffic controllers.
The continental US airspace consists of 20 regional centers
(handling 200-300 flights on a given day) and 830 sectors
(handling 10-40 flights). The flow control problem has to
address the integration of policies across these sectors and
centers, account for the complexity of the system (e.g., over
5200 public use airports and 16,000 air traffic controllers)
and handle changes to the policies caused by weather
patterns. Two of the fundamental problems in addressing the
flow problem are: (i) modeling and simulating such a large
complex system as the fidelity required to provide reliable
results is difficult to achieve; and (ii) establishing the method
by which the flow management is evaluated, as directly
minimizing the total delay may lead to inequities towards
particular regions or commercial entities. Below, we discuss
how we addressed both issues: we present FACET, a widely
used simulation tool, and we discuss our system
evaluation function.
Figure 1: FACET screenshot displaying traffic
routes and air flow statistics.
2.1 FACET
FACET (Future ATM Concepts Evaluation Tool), a physics-based
model of the US airspace, was developed to accurately
model the complex air traffic flow problem [4]. It is based on
propagating the trajectories of proposed flights forward in
time. FACET can be used to either simulate and display air
traffic (a 24 hour slice with 60,000 flights takes 15 minutes to
simulate on a 3 GHz, 1 GB RAM computer) or provide rapid
statistics on recorded data (4D trajectories for 10,000 flights
including sectors, airports, and fix statistics in 10 seconds
on the same computer) [11]. FACET is extensively used by
the FAA, NASA and industry (over 40 organizations and
5000 users) [11].
FACET simulates air traffic based on flight plans and
through a graphical user interface allows the user to analyze
congestion patterns of different sectors and centers (Figure
1). FACET also allows the user to change the flow patterns
of the aircraft through a number of mechanisms, including
metering aircraft through fixes. The user can then observe
the effects of these changes to congestion. In this paper,
agents use FACET directly through batch mode, where
agents send scripts to FACET asking it to simulate air
traffic based on metering orders imposed by the agents. The
agents then produce their rewards based on the feedback received
from FACET about the impact of these meterings.
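The shape of this batch-mode loop can be sketched as follows; the `run_facet_batch` callback, the agent method names, and the count format are illustrative assumptions of ours, not FACET's actual interface.

```python
# Minimal sketch of the agent/FACET batch-mode interaction described above.
# `run_facet_batch` is a HYPOTHETICAL caller-supplied function that writes a
# metering script from the chosen MIT values, runs FACET in batch mode, and
# parses the simulated aircraft counts; it stands in for the real tool here.

def run_episode(agents, run_facet_batch):
    """One learning episode over all fix agents."""
    # 1. Each agent picks a Miles-in-Trail value (its action) for its fix.
    mit_values = {fix_id: agent.select_action() for fix_id, agent in agents.items()}
    # 2. FACET (treated as a black box) simulates the traffic under these orders.
    sector_counts = run_facet_batch(mit_values)   # {(sector, time): aircraft count}
    # 3. Each agent converts the counts into its reward (G, D, or an estimate)
    #    and updates its table.
    for fix_id, agent in agents.items():
        reward = agent.compute_reward(sector_counts)
        agent.update(mit_values[fix_id], reward)
    return sector_counts
```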
2.2 System Evaluation
The system performance evaluation function we select
focuses on delay and congestion but does not account for
fairness impact on different commercial entities. Instead it
focuses on the amount of congestion in a particular sector and
on the amount of measured air traffic delay. The linear
combination of these two terms gives the full system evaluation
function, G(z) as a function of the full system state z. More
precisely, we have:
G(z) = −((1 − α)B(z) + αC(z)) , (1)
where B(z) is the total delay penalty for all aircraft in the
system, and C(z) is the total congestion penalty. The
relative importance of these two penalties is determined by the
value of α, and we explore various trade-offs based on α in
Section 4.
The total delay, B, is a sum of delays over a set of sectors
S and is given by:
B(z) = Σ_{s∈S} B_s(z) ,    (2)
where
B_s(z) = Σ_t Θ(t − τ_s) k_{t,s} (t − τ_s) ,    (3)
where k_{t,s} is the number of aircraft in sector s at time t,
τ_s is a predetermined time, and Θ(·) is the
step function that equals 1 when its argument is greater or
equal to zero, and has a value of zero otherwise. Intuitively,
Bs(z) provides the total number of aircraft that remain in
a sector s past a predetermined time τs, and scales their
contribution to count by the amount by which they are late.
In this manner Bs(z) provides a delay factor that not only
accounts for all aircraft that are late, but also provides a
scale to measure their lateness. This definition is based
on the assumption that most aircraft should have reached
the sector by time τs and that aircraft arriving after this
time are late. In this paper the value of τs is determined by
assessing aircraft counts in the sector in the absence of any
intervention or any deviation from predicted paths.
Similarly, the total congestion penalty is a sum over the
congestion penalties over the sectors of observation, S:
C(z) = Σ_{s∈S} C_s(z) ,    (4)
where
C_s(z) = a Σ_t Θ(k_{t,s} − c_s) e^{b(k_{t,s} − c_s)} ,    (5)
where a and b are normalizing constants, and cs is the
capacity of sector s as defined by the FAA. Intuitively, Cs(z)
penalizes a system state where the number of aircraft in a
sector exceeds the FAAs official sector capacity. Each sector
capacity is computed using various metrics which include the
number of air traffic controllers available. The exponential
penalty is intended to provide strong feedback to return the
number of aircraft in a sector to below the FAA-mandated
capacities.
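As a concrete illustration, the sketch below evaluates Equations 2-5 and combines them as in Equation 1 over a table of aircraft counts; the toy counts and capacities at the end are made up for the example, while a = 50, b = 0.3 and α = 0.5 mirror the values reported in Section 4.

```python
import math

def delay_penalty(counts, tau):
    """B(z): sum over sectors/times of Theta(t - tau_s) * k_{t,s} * (t - tau_s)  (Eqs. 2-3)."""
    return sum(k * (t - tau[s]) for (s, t), k in counts.items() if t >= tau[s])

def congestion_penalty(counts, capacity, a=50.0, b=0.3):
    """C(z): a * sum of Theta(k_{t,s} - c_s) * exp(b * (k_{t,s} - c_s))  (Eqs. 4-5)."""
    return a * sum(math.exp(b * (k - capacity[s]))
                   for (s, t), k in counts.items() if k >= capacity[s])

def system_evaluation(counts, tau, capacity, alpha=0.5):
    """G(z) = -((1 - alpha) * B(z) + alpha * C(z))  (Eq. 1)."""
    return -((1 - alpha) * delay_penalty(counts, tau)
             + alpha * congestion_penalty(counts, capacity))

# Illustrative toy input: {(sector, time in minutes): aircraft count}.
counts = {("s1", 170): 12, ("s1", 210): 15, ("s2", 180): 9}
print(system_evaluation(counts, tau={"s1": 200, "s2": 175}, capacity={"s1": 10, "s2": 10}))
```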
3. AGENT BASED AIR TRAFFIC FLOW
The multi agent approach to air traffic flow management
we present is predicated on adaptive agents taking
independent actions that maximize the system evaluation function
discussed above. To that end, there are four critical
decisions that need to be made: agent selection, agent action
set selection, agent learning algorithm selection and agent
reward structure selection.
3.1 Agent Selection
Selecting the aircraft as agents is perhaps the most
obvious choice for defining an agent. That selection has the
advantage that agent actions can be intuitive (e.g., change
of flight plan, increase or decrease speed and altitude) and
offer a high level of granularity, in that each agent can have
its own policy. However, there are several problems with
that approach. First, there are in excess of 40,000 aircraft
in a given day, leading to a massively large multi-agent
system. Second, as the agents would not be able to sample their
state space sufficiently, learning would be prohibitively slow.
As an alternative, we assign agents to individual ground
locations throughout the airspace called fixes. Each agent is
then responsible for any aircraft going through its fix. Fixes
offer many advantages as agents:
1. Their number can vary depending on need. The
system can have as many agents as required for a given
situation (e.g., agents coming live around an area
with developing weather conditions).
2. Because fixes are stationary, collecting data and
matching behavior to reward is easier.
3. Because aircraft flight plans consist of fixes, agent will
have the ability to affect traffic flow patterns.
4. They can be deployed within the current air traffic
routing procedures, and can be used as tools to help air
traffic controllers rather than compete with or replace
them.
Figure 2 shows a schematic of this agent based system.
Agents surrounding a congestion or weather condition affect
the flow of traffic to reduce the burden on particular regions.
3.2 Agent Actions
The second issue that needs to be addressed, is
determining the action set of the agents. Again, an obvious choice
may be for fixes to bid on aircraft, affecting their flight
plans. Though appealing from a free flight perspective, that
approach makes the flight plans too unreliable and
significantly complicates the scheduling problem (e.g., arrival at
airports and the subsequent gate assignment process).
Instead, we set the actions of an agent to determining
the separation (distance between aircraft) that aircraft have
to maintain when going through the agent's fix. This is
known as setting the Miles in Trail or MIT. When an agent
sets the MIT value to d, aircraft going towards its fix are
instructed to line up and keep d miles of separation (though
aircraft will always keep a safe distance from each other
regardless of the value of d). When there are many aircraft
going through a fix, the effect of issuing higher MIT values
is to slow down the rate of aircraft that go through the fix.
By increasing the value of d, an agent can limit the amount
of air traffic downstream of its fix, reducing congestion at
the expense of increasing the delays upstream.
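As a rough back-of-the-envelope check (our own approximation, not a figure from the paper), the MIT value d directly caps the flow rate through a fix: if aircraft cross the fix at ground speed v and keep d miles in trail, at most about v/d aircraft can pass per hour.

```python
def max_fix_throughput(ground_speed_mph: float, mit_miles: float) -> float:
    """Approximate upper bound on aircraft per hour through a fix when a
    Miles-in-Trail separation of `mit_miles` is enforced (simple kinematics)."""
    return ground_speed_mph / mit_miles

# Illustrative numbers only: at 480 mph and a 20-mile MIT, roughly 24 aircraft/hour.
print(max_fix_throughput(480.0, 20.0))
```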
Figure 2: Schematic of agent architecture. The
agents corresponding to fixes surrounding a
possible congestion become live and start setting new
separation times.
3.3 Agent Learning
The objective of each agent is to learn the best values of
d that will lead to the best system performance, G. In this
paper we assume that each agent will have a reward
function and will aim to maximize its reward using its own
reinforcement learner [15] (though alternatives such as
evolving neuro-controllers are also effective [1]). For complex
delayed-reward problems, relatively sophisticated
reinforcement learning systems such as temporal difference may have
to be used. However, due to our agent selection and agent
action set, the air traffic congestion domain modeled in this
paper only needs to utilize immediate rewards. As a
consequence, simple table-based immediate reward reinforcement
learning is used. Our reinforcement learner is equivalent to
an ε-greedy Q-learner with a discount rate of 0 [15]. At every
episode an agent takes an action and then receives a reward
evaluating that action. After taking action a and receiving
reward R an agent updates its Q table (which contains its
estimate of the value for taking that action [15]) as follows:
Q'(a) = (1 − l) Q(a) + l · R ,    (6)
where l is the learning rate. At every time step the agent
chooses the action with the highest table value with
probability 1 − ε and chooses a random action with probability
ε. In the experiments described in this paper, α is equal
to 0.5 and ε is equal to 0.25. The parameters were chosen
experimentally, though system performance was not overly
sensitive to these parameters.
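A minimal version of this table-based learner is sketched below; the candidate MIT values and the default parameter values are illustrative choices, since the exact discretization is not specified here.

```python
import random

class FixAgent:
    """Immediate-reward, epsilon-greedy learner over a discrete set of MIT values,
    i.e. a Q-learner with discount rate 0 updated as in Equation 6."""

    def __init__(self, mit_values=(0, 10, 20, 30), learning_rate=0.5, epsilon=0.25):
        self.q = {d: 0.0 for d in mit_values}   # one Q-table entry per MIT value
        self.l = learning_rate
        self.epsilon = epsilon

    def select_action(self):
        if random.random() < self.epsilon:              # explore
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)              # exploit

    def update(self, action, reward):
        # Q'(a) = (1 - l) * Q(a) + l * R   (Equation 6)
        self.q[action] = (1 - self.l) * self.q[action] + self.l * reward
```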
3.4 Agent Reward Structure
The final issue that needs to be addressed is selecting the
reward structure for the learning agents. The first and most
direct approach is to let each agent receive the system
performance as its reward. However, in many domains such
a reward structure leads to slow learning. We will
therefore also set up a second set of reward structures based on
agent-specific rewards. Given that agents aim to maximize
their own rewards, a critical task is to create good agent
rewards, or rewards that when pursued by the agents lead
to good overall system performance. In this work we focus
on difference rewards which aim to provide a reward that is
both sensitive to that agent's actions and aligned with the
overall system reward [2, 17, 18].
3.4.1 Difference Rewards
Consider difference rewards of the form [2, 17, 18]:
Di ≡ G(z) − G(z − zi + ci) , (7)
where zi is the action of agent i. All the components of
z that are affected by agent i are replaced with the fixed
constant c_i (this notation uses zero padding and vector
addition rather than concatenation to form full state vectors
from partial state vectors; the vector z_i would be z_i e_i in
standard vector notation, where e_i is a vector with a value
of 1 in the i-th component and zero everywhere else).
In many situations it is possible to use a ci that is
equivalent to taking agent i out of the system. Intuitively this
causes the second term of the difference reward to
evaluate the performance of the system without i and therefore
D evaluates the agent's contribution to the system
performance. There are two advantages to using D: First, because
the second term removes a significant portion of the impact
of other agents in the system, it provides an agent with
a cleaner signal than G. This benefit has been dubbed
learnability (agents have an easier time learning) in
previous work [2, 17]. Second, because the second term does not
depend on the actions of agent i, any action by agent i that
improves D, also improves G. This term which measures the
amount of alignment between two rewards has been dubbed
factoredness in previous work [2, 17].
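A direct, if expensive, way to compute D_i treats the system evaluation as a black box over the agents' contributions, as in the sketch below; the dictionary representation of z and the choice of c_i are illustrative assumptions.

```python
def difference_reward(G, z, i, c_i):
    """D_i = G(z) - G(z with agent i's component replaced by c_i)   (Equation 7).

    G   : black-box system evaluation over a dict {agent: contribution}
    z   : every agent's contribution to the system state
    i   : the agent being evaluated
    c_i : fixed replacement contribution (e.g. agent i removed, or an average)
    """
    z_counterfactual = dict(z)
    z_counterfactual[i] = c_i
    return G(z) - G(z_counterfactual)
```

In the air traffic setting, each exact counterfactual evaluation costs one extra simulation per agent, which is what motivates the estimates introduced next.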
3.4.2 Estimates of Difference Rewards
Though providing a good compromise between aiming
for system performance and removing the impact of other
agents from an agent's reward, one issue that may plague D
is computational cost. Because it relies on the computation
of the counterfactual term G(z − zi + ci) (i.e., the system
performance without agent i) it may be difficult or
impossible to compute, particularly when the exact mathematical
form of G is not known. Let us focus on G functions in the
following form:
G(z) = Gf (f(z)), (8)
where Gf () is non-linear with a known functional form and,
f(z) = Σ_i f_i(z_i) ,    (9)
where each fi is an unknown non-linear function. We
assume that we can sample values from f(z), enabling us to
compute G, but that we cannot sample from each fi(zi).
The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 345
In addition, we assume that Gf is much easier to compute
than f(z), or that we may not be able to even compute
f(z) directly and must sample it from a black box
computation. This form of G matches our system evaluation
in the air traffic domain. When we arrange agents so that
each aircraft is typically only affected by a single agent, each
agent's impact on the counts of the number of aircraft in a
sector, kt,s, will be mostly independent of the other agents.
These values of kt,s are the f(z)s in our formulation and
the penalty functions form Gf . Note that given aircraft
counts, the penalty functions (Gf ) can be easily computed
in microseconds, while aircraft counts (f) can only be
computed by running FACET taking on the order of seconds.
To compute our counterfactual G(z − zi + ci) we need to
compute:
G_f(f(z − z_i + c_i)) = G_f( Σ_{j≠i} f_j(z_j) + f_i(c_i) )    (10)
                      = G_f( f(z) − f_i(z_i) + f_i(c_i) ) .    (11)
Unfortunately, we cannot compute this directly as the values
of fi(zi) are unknown. However, if agents take actions
independently (it does not observe how other agents act before
taking its own action) we can take advantage of the linear
form of f(z) in the fis with the following equality:
E(f−i(z−i)|zi) = E(f−i(z−i)|ci) (12)
where E(f−i(z−i)|zi) is the expected value of all of the fs
other than fi given the value of zi and E(f−i(z−i)|ci) is the
expected value of all of the fs other than fi given the value
of zi is changed to ci. We can then estimate f(z − zi + ci):
f(z) − fi(zi) + fi(ci) = f(z) − fi(zi) + fi(ci)
+ E(f−i(z−i)|ci) − E(f−i(z−i)|zi)
= f(z) − E(fi(zi)|zi) + E(fi(ci)|ci)
+ E(f−i(z−i)|ci) − E(f−i(z−i)|zi)
= f(z) − E(f(z)|zi) + E(f(z)|ci) .
Therefore we can evaluate Di = G(z) − G(z − zi + ci) as:
D_i^{est1} = G_f(f(z)) − G_f( f(z) − E(f(z)|z_i) + E(f(z)|c_i) ) ,    (13)
leaving us with the task of estimating the values of E(f(z)|zi)
and E(f(z)|c_i). These estimates can be computed by
keeping a table of averages where we average the values of the
observed f(z) for each value of zi that we have seen. This
estimate should improve as the number of samples increases. To
improve our estimates, we can set ci = E(z) and if we make
the mean squared approximation of f(E(z)) ≈ E(f(z)) then
we can estimate G(z) − G(z − zi + ci) as:
D_i^{est2} = G_f(f(z)) − G_f( f(z) − E(f(z)|z_i) + E(f(z)) ) .    (14)
This formulation has the advantage in that we have more
samples at our disposal to estimate E(f(z)) than we do to
estimate E(f(z)|c_i).
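The estimates therefore only require running averages of the observed f(z); a minimal sketch, assuming a scalar f for simplicity (vectors work the same way component-wise) and that every conditioning action has been sampled at least once, is:

```python
from collections import defaultdict

class DifferenceRewardEstimator:
    """Running averages of f(z), overall and per action of agent i, used to form
    the estimates of Equations 13 and 14."""

    def __init__(self):
        self.sum_f, self.n = 0.0, 0
        self.sum_f_given_a = defaultdict(float)
        self.n_given_a = defaultdict(int)

    def record(self, action, f_value):
        self.sum_f += f_value
        self.n += 1
        self.sum_f_given_a[action] += f_value
        self.n_given_a[action] += 1

    def estimate(self, G_f, f_value, action, c_i=None):
        """With c_i given:  D^est1 = G_f(f) - G_f(f - E[f|z_i] + E[f|c_i])  (Eq. 13).
           With c_i = None: D^est2 = G_f(f) - G_f(f - E[f|z_i] + E[f])      (Eq. 14)."""
        e_f_given_action = self.sum_f_given_a[action] / self.n_given_a[action]
        if c_i is None:
            baseline = self.sum_f / self.n                             # E[f(z)]
        else:
            baseline = self.sum_f_given_a[c_i] / self.n_given_a[c_i]   # E[f(z)|c_i]
        return G_f(f_value) - G_f(f_value - e_f_given_action + baseline)
```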
4. SIMULATION RESULTS
In this paper we test the performance of our agent based
air traffic optimization method on a series of simulations
using the FACET air traffic simulator. In all experiments
we test the performance of five different methods. The first
method is Monte Carlo estimation, where random policies
are created, with the best policy being chosen. The other
four methods are agent based methods where the agents are
maximizing one of the following rewards:
1. The system reward, G(z), as defined in Equation 1.
2. The difference reward, D_i(z), assuming that agents can calculate counterfactuals.
3. An estimate of the difference reward, D_i^{est1}(z), where agents estimate the counterfactual using E(f(z)|z_i) and E(f(z)|c_i).
4. An estimate of the difference reward, D_i^{est2}(z), where agents estimate the counterfactual using E(f(z)|z_i) and E(f(z)).
These methods are first tested on an air traffic domain with
300 aircraft, where 200 of the aircraft are going through
a single point of congestion over a four hour simulation.
Agents are responsible for reducing congestion at this single
point, while trying to minimize delay. The methods are then
tested on a more difficult problem, where a second point of
congestion is added with the 100 remaining aircraft going
through this second point of congestion.
In all experiments the goal of the system is to maximize
the system performance given by G(z) with the parameters,
a = 50, b = 0.3, τ_s1 equal to 200 minutes and τ_s2 equal to
175 minutes. These values of τ are obtained by examining
the time at which most of the aircraft leave the sectors, when
no congestion control is being performed. Except where
noted, the trade-off between congestion and lateness, α is
set to 0.5. In all experiments to make the agent results
comparable to the Monte Carlo estimation, the best policies
chosen by the agents are used in the results. All results are
an average of thirty independent trials with the differences
in the mean (σ/√n) shown as error bars, though in most
cases the error bars are too small to see.
Figure 3: Performance on single congestion
problem, with 300 Aircraft, 20 Agents and α = .5.
4.1 Single Congestion
In the first experiment we test the performance of the five
methods when there is a single point of congestion, with
twenty agents. This point of congestion is created by setting
up a series of flight plans that cause the number of aircraft in
the sector of interest to be significantly more than the
number allowed by the FAA. The results displayed in Figures
3 and 4 show the performance of all five algorithms on two
different system evaluations. In both cases, the agent based
methods significantly outperform the Monte Carlo method.
This result is not surprising since the agent based methods
intelligently explore their space, whereas the Monte Carlo
method explores the space randomly.
Figure 4: Performance on single congestion
problem, with 300 Aircraft, 20 Agents and α = .75.
Among the agent based methods, agents using difference
rewards perform better than agents using the system
reward. Again this is not surprising, since with twenty agents,
an agent directly trying to maximize the system reward has
difficulty determining the effect of its actions on its own
reward. Even if an agent takes an action that reduces
congestion and lateness, other agents at the same time may
take actions that increase congestion and lateness, causing
the agent to wrongly believe that its action was poor. In
contrast agents using the difference reward have more
influence over the value of their own reward, therefore when an
agent takes a good action, the value of this action is more
likely to be reflected in its reward.
This experiment also shows that estimating the difference
reward is not only possible, but also quite effective, when
the true value of the difference reward cannot be computed.
While agents using the estimates do not achieve results as high
as agents using the true difference reward, they still
perform significantly better than agents using the system
reward. Note, however, that the benefit of the estimated
difference rewards is only present later in learning. Earlier
in learning, the estimates are poor, and agents using the
estimated difference rewards perform no better than agents
using the system reward.
4.2 Two Congestions
In the second experiment we test the performance of the
five methods on a more difficult problem with two points of
congestion. On this problem the first region of congestion is
the same as in the previous problem, and the second region
of congestion is added in a different part of the country.
The second congestion is less severe than the first one, so
agents have to form different policies depending on which point
of congestion they are influencing.
Figure 5: Performance on two congestion problem,
with 300 Aircraft, 20 Agents and α = .5.
Figure 6: Performance on two congestion problem,
with 300 Aircraft, 50 Agents and α = .5.
The results displayed in Figure 5 show that the relative
performance of the five methods is similar to the single
congestion case. Again agent based methods perform better
than the Monte Carlo method and the agents using
difference rewards perform better than agents using the system
reward. To verify that the performance improvement of our
methods is maintained when there are a different number of
agents, we perform additional experiments with 50 agents.
The results displayed in Figure 6 show that indeed the
relative performances of the methods are comparable when the
number of agents is increased to 50. Figure 7 shows scaling
results and demonstrates that the conclusions hold over a
wide range of numbers of agents. Agents using D^{est2}
perform slightly better than agents using D^{est1} in all cases
except for 50 agents. This slight advantage stems from D^{est2}
providing the agents with a cleaner signal, since its estimate
uses more data points.
4.3 Penalty Tradeoffs
The system evaluation function used in the experiments is
G(z) = −((1 − α)B(z) + αC(z)), which comprises penalties
for both congestion and lateness. This evaluation function
Figure 7: Impact of number of agents on system
performance. Two congestion problem, with 300
Aircraft and α = .5.
forces the agents to tradeoff these relative penalties
depending on the value of α. With high α the optimization focuses
on reducing congestion, while with low α the system focuses
on reducing lateness. To verify that the results obtained
above are not specific to a particular value of α, we repeat
the experiment with 20 agents for α = .75. Figure 8 shows
that qualitatively the relative performance of the algorithms
remain the same.
Next, we perform a series of experiments where α ranges
from 0.0 to 1.0 . Figure 9 shows the results which lead to
three interesting observations:
• First, there is a zero congestion penalty solution. This
solution has agents enforce large MIT values to block
all air traffic, which appears viable when the system
evaluation does not account for delays. All algorithms
find this solution, though it is of little interest in
practice due to the large delays it would cause.
• Second, if the two penalties were independent, an
optimal solution would be a line from the two end points.
Therefore, unless D is far from being optimal, the two
penalties are not independent. Note that for α = 0.5
the difference between D and this hypothetical line is
as large as it is anywhere else, making α = 0.5 a
reasonable choice for testing the algorithms in a difficult
setting.
• Third, Monte Carlo and G are particularly poor at
handling multiple objectives. For both algorithms, the
performance degrades significantly for mid-ranges of α.
4.4 Computational Cost
The results in the previous section show the performance
of the different algorithms after a specific number of episodes.
Those results show that D is significantly superior to the
other algorithms. One question that arises, though, is what
computational overhead D puts on the system, and what
results would be obtained if the additional computational
expense of D is made available to the other algorithms.
The computation cost of the system evaluation, G
(Equation 1) is almost entirely dependent on the computation of
Figure 8: Performance on two congestion problem,
with 300 Aircraft, 20 Agents and α = .75.
Figure 9: Tradeoff Between Objectives on two
congestion problem, with 300 Aircraft and 20 Agents.
Note that Monte Carlo and G are particularly bad
at handling multiple objectives.
the airplane counts for the sectors kt,s, which need to be
computed using FACET. Except when D is used, the
values of k are computed once per episode. However, to
compute the counterfactual term in D, if FACET is treated as
a black box, each agent would have to compute their own
values of k for their counterfactual resulting in n + 1
computations of k per episode. While it may be possible to
streamline the computation of D with some knowledge of
the internals of FACET, given the complexity of the FACET
simulation, it is not unreasonable in this case to treat it as
a black box.
Table 1 shows the performance of the algorithms after
2100 G computations for each of the algorithms for the
simulations presented in Figure 5 where there were 20 agents,
2 congestions and α = .5. All the algorithms except the
fully computed D reach 2100 k computations at time step
2100. D however computes k once for the system, and then
once for each agent, leading to 21 computations per time
step. It therefore reaches 2100 computations at time step
100. We also show the results of the full D computation
at t=2100, which needs 44,100 computations of k, denoted D^{44K}.
Table 1: System Performance for 20 Agents, 2 congestions and α = .5, after 2100 G evaluations (except for D^{44K}, which has 44100 G evaluations at t=2100).
Reward     G        σ/√n    time
D^{est2}   -232.5   7.55    2100
D^{est1}   -234.4   6.83    2100
D          -277.0   7.8      100
D^{44K}    -219.9   4.48    2100
G          -412.6   13.6    2100
MC         -639.0   16.4    2100
Although D^{44K} provides the best result by a slight margin,
it is achieved at a considerable computational cost. Indeed,
the performance of the two D estimates is remarkable in this
case as they were obtained with about twenty times fewer
computations of k. Furthermore, the two D estimates
significantly outperform the full D computation for a given
number of computations of k and validate the assumptions
made in Section 3.4.2. This shows that for this domain, in
practice it is more fruitful to perform more learning steps
with an approximated D than fewer learning steps with the full
D computation, when FACET is treated as a black box.
5. DISCUSSION
The efficient, safe and reliable management of air traffic
flow is a complex problem, requiring solutions that integrate
control policies with time horizons ranging from minutes
up to a year. The main contribution of this paper is to
present a distributed adaptive air traffic flow management
algorithm that can be readily implemented and to test that
algorithm using FACET, a simulation tool widely used by
the FAA, NASA and the industry. Our method is based on
agents representing fixes and having each agent determine
the separation between aircraft approaching its fix. It offers
the significant benefit of not requiring radical changes to
the current air flow management structure and is therefore
readily deployable. The agents use reinforcement learning to
learn control policies and we explore different agent reward
functions and different ways of estimating those functions.
We are currently extending this work in three directions.
First, we are exploring new methods of estimating agent
rewards, to further speed up the simulations. Second, we are
investigating deployment strategies and looking for
modifications that would have a larger impact. One such
modification is to extend the definition of agents from fixes to
sectors, giving agents more opportunity to control the
traffic flow, and allow them to be more efficient in eliminating
congestion. Finally, in cooperation with domain experts,
we are investigating different system evaluation functions,
above and beyond the delay and congestion dependent G
presented in this paper.
Acknowledgments: The authors thank Banavar
Sridhar for his invaluable help in describing both current air
traffic flow management and NGATS, and Shon Grabbe for
his detailed tutorials on FACET.
6. REFERENCES
[1] A. Agogino and K. Tumer. Efficient evaluation
functions for multi-rover systems. In The Genetic and
Evolutionary Computation Conference, pages 1-12,
Seattle, WA, June 2004.
[2] A. Agogino and K. Tumer. Multi agent reward
analysis for learning in noisy domains. In Proceedings
of the Fourth International Joint Conference on
Autonomous Agents and Multi-Agent Systems,
Utrecht, Netherlands, July 2005.
[3] A. K. Agogino and K. Tumer. Handling communication
restrictions and team formation in congestion games.
Journal of Autonomous Agents and Multi-Agent Systems,
13(1):97-115, 2006.
[4] K. D. Bilimoria, B. Sridhar, G. B. Chatterji, K. S.
Shethand, and S. R. Grabbe. FACET: Future ATM
concepts evaluation tool. Air Traffic Control
Quarterly, 9(1), 2001.
[5] Karl D. Bilimoria. A geometric optimization approach
to aircraft conflict resolution. In AIAA Guidance,
Navigation, and Control Conf, Denver, CO, 2000.
[6] Martin S. Eby and Wallace E. Kelly III. Free flight
separation assurance using distributed algorithms. In
Proc of Aerospace Conf, 1999, Aspen, CO, 1999.
[7] FAA OPSNET data Jan-Dec 2005. US Department of
Transportation website.
[8] S. Grabbe and B. Sridhar. Central east pacific flight
routing. In AIAA Guidance, Navigation, and Control
Conference and Exhibit, Keystone, CO, 2006.
[9] Jared C. Hill, F. Ryan Johnson, James K. Archibald,
Richard L. Frost, and Wynn C. Stirling. A cooperative
multi-agent approach to free flight. In AAMAS "05:
Proceedings of the fourth international joint conference
on Autonomous agents and multiagent systems, pages
1083-1090, New York, NY, USA, 2005. ACM Press.
[10] P. K. Menon, G. D. Sweriduk, and B. Sridhar.
Optimal strategies for free flight air traffic conflict
resolution. Journal of Guidance, Control, and
Dynamics, 22(2):202-211, 1999.
[11] 2006 NASA Software of the Year Award Nomination.
FACET: Future ATM concepts evaluation tool. Case
no. ARC-14653-1, 2006.
[12] M. Pechoucek, D. Sislak, D. Pavlicek, and M. Uller.
Autonomous agents for air-traffic deconfliction. In
Proc of the Fifth Int Jt Conf on Autonomous Agents
and Multi-Agent Systems, Hakodate, Japan, May 2006.
[13] B. Sridhar and S. Grabbe. Benefits of direct-to in
national airspace system. In AIAA Guidance,
Navigation, and Control Conf, Denver, CO, 2000.
[14] B. Sridhar, T. Soni, K. Sheth, and G. B. Chatterji.
Aggregate flow model for air-traffic management.
Journal of Guidance, Control, and Dynamics,
29(4):992-997, 2006.
[15] R. S. Sutton and A. G. Barto. Reinforcement
Learning: An Introduction. MIT Press, Cambridge,
MA, 1998.
[16] C. Tomlin, G. Pappas, and S. Sastry. Conflict
resolution for air traffic management. IEEE Tran on
Automatic Control, 43(4):509-521, 1998.
[17] K. Tumer and D. Wolpert, editors. Collectives and the
Design of Complex Systems. Springer, New York,
2004.
[18] D. H. Wolpert and K. Tumer. Optimal payoff
functions for members of collectives. Advances in
Complex Systems, 4(2/3):265-279, 2001.
The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 349 | reinforcement learning;air traffic control;optimization;new method of estimating agent reward;reinforcement learn;traffic flow;congestion;deployment strategy;multiagent system;future atm concept evaluation tool |
train_I-62 | A Q-decomposition and Bounded RTDP Approach to Resource Allocation | This paper contributes to the effective solution of stochastic resource allocation problems, which are known to be NP-Complete. To address this complex resource management problem, a Q-decomposition approach is proposed for the case where the resources are already shared among the agents, but the actions made by an agent may influence the reward obtained by at least one other agent. Q-decomposition coordinates these reward-separated agents and thus reduces the set of states and actions to consider. On the other hand, when the resources are available to all agents, no Q-decomposition is possible and we use heuristic search. In particular, bounded Real-time Dynamic Programming (bounded rtdp) is used. Bounded rtdp concentrates the planning on significant states only and prunes the action space. The pruning is accomplished by proposing tight upper and lower bounds on the value function. | 1. INTRODUCTION
This paper aims to contribute to the solution of complex stochastic
resource allocation problems. In general, resource
allocation problems are known to be NP-Complete [12]. In such
problems, a scheduling process suggests the action (i.e.
resources to allocate) to undertake to accomplish certain tasks,
according to the perfectly observable state of the
environment. When executing an action to realize a set of tasks,
the stochastic nature of the problem induces probabilities
on the next visited state. In general, the number of states
is the combination of all possible specific states of each task
and available resources. In this case, the number of
possible actions in a state is the combination of each individual
possible resource assignment to the tasks. The very high
number of states and actions in this type of problem makes
it very complex.
There can be many types of resource allocation problems.
Firstly, if the resources are already shared among the agents,
and the actions made by an agent do not influence the
state of another agent, the globally optimal policy can be
computed by planning separately for each agent. A second
type of resource allocation problem is where the resources
are already shared among the agents, but the actions made
by an agent may influence the reward obtained by at least
another agent. To solve this problem efficiently, we adapt
the Q-decomposition approach proposed by Russell and Zimdars [9]. In our
Q-decomposition approach, a planning agent manages each
task and all agents have to share the limited resources. The
planning process starts with the initial state s0. In s0, each
agent computes its respective Q-value. Then, the
planning agents are coordinated through an arbitrator to find
the highest global Q-value by adding the respective possible
Q-values of each agent. When implemented with heuristic
search, since the number of states and actions to consider
when computing the optimal policy is exponentially reduced
compared to other known approaches, Q-decomposition
allows us to formulate the first optimal decomposed heuristic
search algorithm in a stochastic environment.
On the other hand, when the resources are available to
all agents, no Q-decomposition is possible. A common
way of addressing this large stochastic problem is by
using Markov Decision Processes (mdps), and in particular
real-time search where many algorithms have been
developed recently. For instance Real-Time Dynamic
Programming (rtdp) [1], lrtdp [4], hdp [3], and lao [5] are all
state-of-the-art heuristic search approaches in a stochastic
environment. Because of its anytime quality, an interesting
approach is rtdp, introduced by Barto et al. [1], which
updates states along trajectories from an initial state s0 to a goal
state sg. rtdp is used in this paper to efficiently solve a
constrained resource allocation problem.
rtdp is much more effective if the action space can be
pruned of sub-optimal actions. To do this, McMahan et
al. [6], Smith and Simmons [11], and Singh and Cohn [10]
proposed solving a stochastic problem using an rtdp-type
heuristic search with upper and lower bounds on the value
of states. McMahan et al. [6] and Smith and Simmons [11]
suggested, in particular, an efficient trajectory of state
updates to further speed up the convergence, when given upper
and lower bounds. This efficient trajectory of state updates
can be combined with the approach proposed here, since this
paper focuses on the definition of tight bounds and on efficient
state updates for a constrained resource allocation problem.
On the other hand, the approach by Singh and Cohn is
suitable for our case, and is extended in this paper using, in
particular, the concept of marginal revenue [7] to elaborate
tight bounds. This paper proposes new algorithms to define
upper and lower bounds in the context of a rtdp heuristic
search approach. Our marginal revenue bounds are
compared theoretically and empirically to the bounds proposed
by Singh and Cohn. Also, even though the algorithm used to
obtain the optimal policy is rtdp, our bounds can be used
with any other algorithm to solve an mdp. The only
condition on the use of our bounds is to be in the context of
stochastic constrained resource allocation. The problem is
now modelled.
2. PROBLEM FORMULATION
A simple resource allocation problem is one where there
are the following two tasks to realize: ta1 = {wash the
dishes}, and ta2 = {clean the floor}. These two tasks are
either in the realized state or the not-realized state. To realize the
tasks, two types of resources are assumed: res1 = {brush}
and res2 = {detergent}. A computer has to compute the
optimal allocation of these resources to cleaner robots to realize
their tasks. In this problem, a state represents a
conjunction of the particular state of each task, and the available
resources. The resources may be constrained by the amount
that may be used simultaneously (local constraint), and in
total (global constraint). Furthermore, the higher is the
number of resources allocated to realize a task, the higher is
the expectation of realizing the task. For this reason, when
the specific states of the tasks change, or when the number
of available resources changes, the value of this state may
change.
When executing an action a in state s, the specific states
of the tasks change stochastically, and the remaining
resources are determined by subtracting the resources used by
action a from the resources available in s, if the resource
is consumable. Indeed, our model may consider
consumable and non-consumable resource types. A consumable
resource type is one where the amount of available resource
is decreased when it is used. On the other hand, a
nonconsumable resource type is one where the amount of
available resource is unchanged when it is used. For example, a
brush is a non-consumable resource, while the detergent is
a consumable resource.
2.1 Resource Allocation as a MDPs
In our problem, the transition function and the reward
function are both known. A Markov Decision Process (mdp)
framework is used to model our stochastic resource
allocation problem. mdps have been widely adopted by researchers
today to model a stochastic process. This is due to the fact
that mdps provide a well-studied and simple, yet very
expressive model of the world. An mdp in the context of a
resource allocation problem with limited resources is defined
as a tuple Res, T a, S, A, P, W, R, , where:
• Res = {res_1, ..., res_|Res|} is a finite set of resource
types available for a planning process. Each resource
type may have a local resource constraint Lres on
the number that may be used in a single step, and
a global resource constraint Gres on the number that
may be used in total. The global constraint only
applies for consumable resource types (Resc) and the
local constraints always apply to consumable and
nonconsumable resource types.
• T a is a finite set of tasks with ta ∈ T a to be
accomplished.
• S is a finite set of states with s ∈ S. A state s is
a tuple ⟨Ta, res_1, ..., res_|Resc|⟩, which is the
characteristic of each unaccomplished task ta ∈ T a in the
environment, and the available consumable resources.
sta is the specific state of task ta. Also, S contains a
non empty set sg ⊆ S of goal states. A goal state is a
sink state where an agent stays forever.
• A is a finite set of actions (or assignments). The
actions a ∈ A(s) applicable in a state are the
combination of all resource assignments that may be executed,
according to the state s. In particular, a is simply an
allocation of resources to the current tasks, and ata is
the resource allocation to task ta. The possible actions
are limited by Lres and Gres.
• Transition probabilities Pa(s |s) for s ∈ S and a ∈
A(s).
• W = [wta] is the relative weight (criticality) of each
task.
• State rewards R = [r_s] : r_s = Σ_{ta∈Ta} r_{s_ta}, with r_{s_ta} ← s_ta × w_ta. The
relative reward of the state of a task, r_{s_ta}, is the product
of a real number s_ta and the weight factor w_ta. For
our problem, a reward of 1 × w_ta is given when the
state of a task (s_ta) is in an achieved state, and 0 in
all other cases.
• A discount (preference) factor γ, which is a real
number between 0 and 1.
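To ground the tuple above, a state, an action and the state reward could be represented as in the sketch below; the task and resource names echo the toy example at the start of this section, and the structure is an illustration of the model rather than the paper's implementation.

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet, Tuple

@dataclass(frozen=True)
class State:
    # specific state of each task, e.g. (("wash_dishes", "not_realized"), ...)
    task_states: Tuple[Tuple[str, str], ...]
    # remaining consumable resources, e.g. (("detergent", 2),)
    consumables_left: Tuple[Tuple[str, int], ...]

@dataclass(frozen=True)
class Action:
    # resources allocated to each task this step, subject to L_res and G_res
    allocation: Tuple[Tuple[str, FrozenSet[str]], ...]

def state_reward(state: State, weights: Dict[str, float]) -> float:
    """r_s = sum over tasks of s_ta * w_ta, with s_ta = 1 for a realized task, 0 otherwise."""
    return sum(weights[task] for task, status in state.task_states if status == "realized")

# Toy usage with the dishes/floor example from this section (weights are illustrative).
s0 = State(task_states=(("wash_dishes", "not_realized"), ("clean_floor", "realized")),
           consumables_left=(("detergent", 2),))
print(state_reward(s0, weights={"wash_dishes": 1.0, "clean_floor": 0.5}))
```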
A solution of an mdp is a policy π mapping states s into
actions a ∈ A(s). In particular, πta(s) is the action (i.e.
resources to allocate) that should be executed on task ta,
considering the global state s. In this case, an optimal
policy is one that maximizes the expected total reward for
accomplishing all tasks. The optimal value of a state, V (s), is
given by:
V (s) = R(s) + max
a∈A(s)
γ
s ∈S
Pa(s |s)V (s ) (1)
where the remaining consumable resources in state s are
Resc \ res(a), where res(a) are the consumable resources
used by action a. Indeed, since an action a is a resource
assignment, Resc \ res(a) is the new set of available resources
after the execution of action a. Furthermore, one may
compute the Q-Values Q(a, s) of each state action pair using the
following equation:
Q(a, s) = R(s) + γ Σ_{s'∈S} P_a(s'|s) max_{a'∈A(s')} Q(a', s') ,    (2)
where the optimal value of a state is V(s) = max_{a∈A(s)} Q(a, s).
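A generic Bellman backup for this formulation, with the transition model stored as a dictionary and the resource feasibility of actions (L_res, G_res) assumed to be enforced when A(s) is built, can be sketched as:

```python
def bellman_backup(V, state, applicable_actions, transitions, R, gamma):
    """One application of Equation 1:
       V(s) = R(s) + max_{a in A(s)} gamma * sum_{s'} P_a(s'|s) * V(s').

    V                   : dict {state: current value estimate}
    applicable_actions  : callable returning A(s), already filtered for resource limits
    transitions[(s, a)] : list of (next_state, probability) pairs
    R                   : callable returning the state reward R(s)
    """
    best_value, best_action = float("-inf"), None
    for a in applicable_actions(state):
        q = R(state) + gamma * sum(p * V.get(s_next, 0.0)
                                   for s_next, p in transitions[(state, a)])
        if q > best_value:
            best_value, best_action = q, a
    return best_value, best_action
```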
The policy is subject to the local resource constraints
res(π(s)) ≤ L_res ∀ s ∈ S and ∀ res ∈ Res. The global
constraint is defined according to all system trajectories
tra ∈ T RA. A system trajectory tra is a possible sequence
of state-action pairs, until a goal state is reached under the
optimal policy π. For example, state s is entered, which may
transit to s or to s , according to action a. The two
possible system trajectories are (s, a), (s ) and (s, a), (s ) .
The global resource constraint is res(tra) ≤ Gres∀ tra ∈
T RA ,and ∀ res ∈ Resc where res(tra) is a function which
returns the resources used by trajectory tra. Since the
available consumable resources are represented in the state space,
this condition is verified by itself. In other words, the model
is Markovian, as the history does not have to be considered in the
state space. Furthermore, the time is not considered in the
model description, but it may also include a time horizon
by using a finite horizon mdp. Since resource allocation in
a stochastic environment is NP-Complete, heuristics should
be employed. Q-decomposition, which decomposes a
planning problem among many agents to reduce the computational
complexity associated with the state and/or action spaces, is
now introduced.
2.2 Q-decomposition for Resource Allocation
There can be many types of resource allocation problems.
Firstly, if the resources are already shared among the agents,
and the actions made by an agent does not influence the
state of another agent, the globally optimal policy can be
computed by planning separately for each agent.
A second type of resource allocation problem is where
the resources are already shared among the agents, but the
actions made by an agent may influence the reward obtained
by at least another agent. For instance, a group of agents
which manages the oil consumed by a country falls into
this category. These agents desire to maximize their specific
reward by consuming the right amount of oil. However, all
the agents are penalized when an agent consumes oil because
of the pollution it generates. Another example of this type
comes from our problem of interest, explained in Section
3, which is a naval platform which must counter incoming
missiles (i.e. tasks) by using its resources (i.e. weapons,
movements). In some scenarios, it may happens that the
missiles can be classified in two types: Those requiring a
set of resources Res1 and those requiring a set of resources
Res2. This can happen depending on the type of missiles,
their range, and so on. In this case, two agents can plan for
both sets of tasks to determine the policy. However, there
are interactions between the resources of Res1 and Res2, so
that certain combinations of resources cannot be assigned. In
particular, if an agent i allocates resources Res_i to the first
set of tasks Ta_i, and agent i' allocates resources Res_i' to the
second set of tasks Ta_i', the resulting policy may include
actions which cannot be executed together.
To resolve these conflicts, we use the Q-decomposition approach
proposed by Russell and Zimdars [9] in the context of
reinforcement learning. The primary assumption underlying
Q-decomposition is that the overall reward function R can be
additively decomposed into separate rewards Ri for each
distinct agent i ∈ Ag, where |Ag| is the number of agents. That
is, R = Σ_{i ∈ Ag} R_i. It requires each agent to compute a value,
from its perspective, for every action. To coordinate with
each other, each agent i reports its action values Qi(ai, si)
for each state si ∈ Si to an arbitrator at each learning
iteration. The arbitrator then chooses an action maximizing
the sum of the agent Q-values for each global state s ∈ S.
The next time state s is updated, an agent i considers the
value as its respective contribution, or Q-value, to the global
maximal Q-value. That is, Q_i(a_i, s_i) is the value of a state such that it maximizes max_{a ∈ A(s)} Σ_{i ∈ Ag} Q_i(a_i, s_i). The fact that the agents use a determined Q-value as the value of a state is an extension of the Sarsa on-policy algorithm [8] to Q-decomposition. Russell and Zimdars called this approach local Sarsa. In this way, an ideal compromise can be found for the agents to reach a global optimum. Indeed, rather than allowing each agent to choose the successor action, each agent i uses the action a_i' executed by the arbitrator in the successor state s_i':

Q_i(a_i, s_i) = R_i(s_i) + γ Σ_{s_i' ∈ S_i} P_{a_i}(s_i'|s_i) Q_i(a_i', s_i')   (3)

where the remaining consumable resources in state s_i' are Resc_i \ res_i(a_i) for a resource allocation problem. Russell
and Zimdars [9] demonstrated that local Sarsa converges
to the optimum. Also, in some cases, this form of agent
decomposition allows the local Q-functions to be expressed
by a much reduced state and action space.
For our resource allocation problem described briefly in
this section, Q-decomposition can be applied to generate an
optimal solution. Indeed, an optimal Bellman backup can
be applied in a state as in Algorithm 1. In Line 5 of the
Qdec-backup function, each agent managing a task
computes its respective Q-value. Here, Q_i*(a_i, s') determines the optimal Q-value of agent i in state s'. An agent i uses as the value of a possible state transition s' the Q-value for this agent which determines the maximal global Q-value for state s', as in the original Q-decomposition approach. In brief, for each visited state s ∈ S, each agent computes its respective Q-values with respect to the global state s. So
the state space is the joint state space of all agents. Some
of the gain in complexity from using Q-decomposition resides in the Σ_{s_i' ∈ S_i} P_{a_i}(s_i'|s) part of the equation. An agent considers
as a possible state transition only the possible states of the
set of tasks it manages. Since the number of states is
exponential with the number of tasks, using Q-decomposition
should reduce the planning time significantly. Furthermore,
the action space of the agents takes into account only their
available resources which is much less complex than a
standard action space, which is the combination of all possible
resource allocation in a state for all agents.
Then, the arbitrator functionalities are in Lines 8 to 20.
The global Q-value is the sum of the Q-values produced by
each agent managing each task as shown in Line 11,
considering the global action a. In this case, when an action of an
agent i cannot be executed simultaneously with an action
of another agent i', the global action is simply discarded from the action space A(s). Line 14 simply assigns the current value with respect to the highest global Q-value, as in a standard Bellman backup. Then, the optimal policy and Q-value of each agent are updated in Lines 16 and 17 to the sub-action a_i and specific Q-value Q_i(a_i, s) of each agent
for action a.
Algorithm 1 The Q-decomposition Bellman Backup.
1: Function Qdec-backup(s)
2: V (s) ← 0
3: for all i ∈ Ag do
4: for all ai ∈ Ai(s) do
5: Q_i(a_i, s) ← R_i(s) + γ Σ_{s_i' ∈ S_i} P_{a_i}(s_i'|s) Q_i*(a_i', s')
{where Q_i*(a_i', s') = h_i(s') when s' is not yet visited, and s' has Resc_i \ res_i(a_i) remaining consumable resources for each agent i}
6: end for
7: end for
8: for all a ∈ A(s) do
9: Q(a, s) ← 0
10: for all i ∈ Ag do
11: Q(a, s) ← Q(a, s) + Qi(ai, s)
12: end for
13: if Q(a, s) > V (s) then
14: V (s) ← Q(a, s)
15: for all i ∈ Ag do
16: πi(s) ← ai
17: Qi (ai, s) ← Qi(ai, s)
18: end for
19: end if
20: end for
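For illustration only, here is a compact Python sketch of the arbitrator logic of Algorithm 1; the helpers agent_q_value, joint_actions, and compatible are hypothetical stand-ins for the per-agent Q-value computation of Lines 3-7 and for the resource-compatibility test, not part of the paper's implementation.

def qdec_backup(s, agents, joint_actions, agent_q_value, compatible):
    # One Q-decomposition Bellman backup at global state s.
    # agent_q_value(i, a_i, s) -> Q_i(a_i, s) computed locally by agent i;
    # compatible(a) -> False when the sub-actions in joint action a conflict over resources.
    V, policy, best_q = 0.0, {}, {}
    for a in joint_actions(s):             # a maps each agent i to its sub-action a_i
        if not compatible(a):
            continue                       # conflicting joint actions are simply discarded
        q = sum(agent_q_value(i, a[i], s) for i in agents)   # Line 11: global Q-value
        if q > V:                          # Lines 13-18: keep the best joint action
            V = q
            policy = {i: a[i] for i in agents}
            best_q = {i: agent_q_value(i, a[i], s) for i in agents}
    return V, policy, best_q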
A standard Bellman backup has a complexity of O(|A| ×
|SAg|), where |SAg| is the number of joint states for all agents
excluding the resources, and |A| is the number of joint
actions. On the other hand, the Q-decomposition Bellman
backup has a complexity of O((|Ag| × |A_i| × |S_i|) + (|A| × |Ag|)), where |S_i| is the number of states for an agent i, excluding the resources, and |A_i| is the number of actions for an agent i. Since |S_Ag| is combinatorial with the number of tasks, |S_i| ≪ |S_Ag|. Also, |A| is combinatorial with the number of resource types. If the resources are already shared among the agents, the number of resource types for each agent will usually be lower than the set of all available resource types for all agents. In these circumstances, |A_i| ≪ |A|. In a standard Bellman backup, |A| is multiplied
by |SAg|, which is much more complex than multiplying |A|
by |Ag| with the Q-decomposition Bellman backup. Thus,
the Q-decomposition Bellman backup is much less complex
than a standard Bellman backup. Furthermore, the
communication cost between the agents and the arbitrator is
null since this approach does not consider a geographically
separated problem.
However, when the resources are available to all agents,
no Q-decomposition is possible. In this case, Bounded
Real-Time Dynamic Programming (bounded-rtdp) focuses the search on relevant states and prunes the action space A by using lower and upper bounds on the value of states. bounded-rtdp is now introduced.
2.3 Bounded-RTDP
Bonet and Geffner [4] proposed lrtdp as an improvement
to rtdp [1]. lrtdp is a simple dynamic programming
algorithm that involves a sequence of trial runs, each starting in
the initial state s0 and ending in a goal or a solved state.
Each lrtdp trial is the result of simulating the policy π while
updating the values V (s) using a Bellman backup (Equation
2) over the states s that are visited. h(s) is a heuristic which defines an initial value for state s. This heuristic has to be admissible: the value given by the heuristic has to overestimate (or underestimate) the optimal value V(s) when the objective function is maximized (or minimized). For
example, an admissible heuristic for a stochastic shortest
path problem is the solution of a deterministic shortest path
problem. Indeed, since the problem is stochastic, the
optimal value is lower than for the deterministic version. It has
been proven that lrtdp, given an admissible initial heuristic on the value of states, cannot be trapped in loops, and eventually yields optimal values [4]. The convergence is accomplished by means of a labeling procedure called checkSolved(s, ε). This procedure tries to label as solved each
traversed state in the current trajectory. When the initial
state is labelled as solved, the algorithm has converged.
In this section, a bounded version of rtdp
(boundedrtdp) is presented in Algorithm 2 to prune the action space
of sub-optimal actions. This pruning speeds up the convergence of lrtdp. bounded-rtdp is similar to rtdp except that there are two distinct initial heuristics for unvisited states s ∈ S: h_L(s) and h_U(s). Also, the checkSolved(s, ε) procedure can be omitted because the bounds can provide
the labeling of a state as solved. On the one hand, hL(s)
defines a lower bound on the value of s such that the optimal
value of s is higher than hL(s). For its part, hU (s) defines an
upper bound on the value of s such that the optimal value
of s is lower than hU (s).
The values of the bounds are computed in Lines 3 and
4 of the bounded-backup function. Computing these two
Q-values is done simultaneously since the state transitions are
the same for both Q-values. Only the values of the state
transitions change. Thus, having to compute two Q-values
instead of one does not augment the complexity of the
approach. In fact, Smith and Simmons [11] state that the
additional time to compute a Bellman backup for two bounds,
instead of one, is no more than 10%, which is also what we
obtained. In particular, L(s) is the lower bound of state s,
while U(s) is the upper bound of state s. Similarly, QL(a, s)
is the Q-value of the lower bound of action a in state s, while
QU (a, s) is the Q-value of the upper bound of action a in
state s. Using these two bounds allows significantly reducing the action space A. Indeed, in Lines 5 and 6 of the bounded-backup function, if Q_U(a, s) ≤ L(s) then action a may be pruned from the action space of s. In Line 13 of this function, a state can be labeled as solved if the difference between the lower and upper bounds is lower than ε. When the execution goes back to the bounded-rtdp
function, the next state in Line 10 has a fixed number of
consumable resources available Resc, determined in Line 9.
In brief, pickNextState(Resc) selects a non-solved state s reachable under the current policy which has the highest Bellman error (|U(s) − L(s)|). Finally, in Lines 12 to 15, a backup is made in a backward fashion on all visited states of a trajectory, once this trajectory has been completed. This strategy has been proven to be efficient [11] [6].
As discussed by Singh and Cohn [10], this type of
algorithm has a number of desirable anytime characteristics: if
an action has to be picked in state s before the algorithm
has converged (while multiple competitive actions remain),
the action with the highest lower bound is picked. Since
the upper bound for state s is known, it may be estimated
Algorithm 2 The bounded-rtdp algorithm. Adapted
from [4] and [10].
1: Function bounded-rtdp(S)
2: returns a value function V
3: repeat
4: s ← s0
5: visited ← null
6: repeat
7: visited.push(s)
8: bounded-backup(s)
9: Resc ← Resc \ res(π(s))
10: s ← s.pickNextState(Resc)
11: until s is a goal
12: while visited ≠ null do
13: s ← visited.pop()
14: bounded-backup(s)
15: end while
16: until s0 is solved or |A(s)| = 1 ∀ s ∈ S reachable from
s0
17: return V
Algorithm 3 The bounded Bellman backup.
1: Function bounded-backup(s)
2: for all a ∈ A(s) do
3: Q_U(a, s) ← R(s) + γ Σ_{s' ∈ S} P_a(s'|s) U(s')
4: Q_L(a, s) ← R(s) + γ Σ_{s' ∈ S} P_a(s'|s) L(s')
{where L(s') ← h_L(s') and U(s') ← h_U(s') when s' is not yet visited and s' has Resc \ res(a) remaining consumable resources}
5: if Q_U(a, s) ≤ L(s) then
6: A(s) ← A(s) \ {a}
7: end if
8: end for
9: L(s) ← max_{a ∈ A(s)} Q_L(a, s)
10: U(s) ← max_{a ∈ A(s)} Q_U(a, s)
11: π(s) ← arg max_{a ∈ A(s)} Q_L(a, s)
12: if |U(s) − L(s)| < ε then
13: s ← solved
14: end if
how far the lower bound is from the optimal. If the
difference between the lower and upper bound is too high, one can choose to use another greedy algorithm of one's choice, which outputs a fast and near-optimal solution.
Furthermore, if a new task dynamically arrives in the environment,
it can be accommodated by redefining the lower and
upper bounds which exist at the time of its arrival. Singh
and Cohn [10] proved that an algorithm that uses
admissible lower and upper bounds to prune the action space is
assured of converging to an optimal solution.
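For illustration, the following Python sketch mirrors the bounded Bellman backup of Algorithm 3 (two bounds plus action pruning). The transition function and the pre-initialized bound dictionaries are hypothetical placeholders, and this is a sketch of the idea rather than the authors' implementation.

EPSILON, GAMMA = 0.01, 1.0   # illustrative tolerance and discount

def bounded_backup(s, A, R, transitions, L, U, solved):
    # Update L(s), U(s), prune dominated actions, and label s solved if the gap < EPSILON.
    # A[s] is the current action set of s; L and U are dicts of lower/upper bounds,
    # assumed pre-initialized with admissible heuristics h_L, h_U for unvisited states.
    q_low, q_up = {}, {}
    for a in list(A[s]):
        q_up[a] = R(s) + GAMMA * sum(p * U[s2] for s2, p in transitions(s, a))
        q_low[a] = R(s) + GAMMA * sum(p * L[s2] for s2, p in transitions(s, a))
        if q_up[a] <= L[s]:            # Lines 5-6: a can never beat the current lower bound
            A[s].remove(a)
            q_up.pop(a)
            q_low.pop(a)
    if A[s]:
        L[s] = max(q_low[a] for a in A[s])                    # Line 9
        U[s] = max(q_up[a] for a in A[s])                     # Line 10
        policy_action = max(A[s], key=lambda a: q_low[a])     # Line 11
    else:
        policy_action = None
    if abs(U[s] - L[s]) < EPSILON:                            # Lines 12-13
        solved.add(s)
    return policy_action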
The next sections describe two separate methods to define
hL(s) and hU (s). First of all, the method of Singh and Cohn
[10] is briefly described. Then, our own method proposes
tighter bounds, thus allowing a more effective pruning of
the action space.
2.4 Singh and Cohn's Bounds
Singh and Cohn [10] defined lower and upper bounds to
prune the action space. Their approach is pretty
straightforward. First of all, a value function is computed for each task to realize, using a standard rtdp approach. Then,
using these task-value functions, a lower bound hL, and
upper bound h_U can be defined. In particular, h_L(s) = max_{ta ∈ Ta} V_{ta}(s_{ta}), and h_U(s) = Σ_{ta ∈ Ta} V_{ta}(s_{ta}). For readability,
the upper bound by Singh and Cohn is named SinghU, and
the lower bound is named SinghL. The admissibility of these
bounds has been proven by Singh and Cohn, such that, the
upper bound always overestimates the optimal value of each
state, while the lower bound always underestimates the
optimal value of each state. To determine the optimal policy
π, Singh and Cohn implemented an algorithm very similar
to bounded-rtdp, which uses the bounds to initialize L(s)
and U(s). The only difference between bounded-rtdp, and
the rtdp version of Singh and Cohn is in the stopping
criteria. Singh and Cohn proposed that the algorithm terminates
when only one competitive action remains for each state, or
when the range of all competitive actions for any state is bounded by an indifference parameter ε. bounded-rtdp labels states for which |U(s) − L(s)| < ε as solved, and the convergence is reached when s0 is solved or when only one competitive action remains for each state. This stopping criterion is more effective since it is similar to the one used
by Smith and Simmons [11] and McMahan et al. brtdp [6].
In this paper, the bounds defined by Singh and Cohn and
implemented using bounded-rtdp define the Singh-rtdp
approach. The next sections propose to tighten the bounds
of Singh-rtdp to permit a more effective pruning of the
action space.
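As a minimal illustration of the SinghL and SinghU bounds, the sketch below combines independently computed task value functions; task_values is a hypothetical dict of per-task value functions V_ta and is not part of the original code.

def singh_bounds(task_values, global_state):
    # SinghL / SinghU bounds from per-task value functions.
    # task_values: dict mapping task ta -> dict {local state s_ta: V_ta(s_ta)}.
    # global_state: dict mapping task ta -> its current local state s_ta.
    # Returns (h_L, h_U) with h_L = max_ta V_ta(s_ta) and h_U = sum_ta V_ta(s_ta).
    per_task = [task_values[ta][global_state[ta]] for ta in global_state]
    return max(per_task), sum(per_task)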
2.5 Reducing the Upper Bound
SinghU includes actions which may not be possible to
execute because of resource constraints, which overestimates
the upper bound. To consider only possible actions, our
upper bound, named maxU, is introduced:

h_U(s) = max_{a ∈ A(s)} Σ_{ta ∈ Ta} Q_{ta}(a_{ta}, s_{ta})   (4)
where Qta(ata, sta) is the Q-value of task ta for state sta,
and action ata computed using a standard lrtdp approach.
Theorem 2.1. The upper bound defined by Equation 4 is
admissible.
Proof: The local resource constraints are satisfied because
the upper bound is computed using all global possible
actions a. However, hU (s) still overestimates V (s) because
the global resource constraint is not enforced. Indeed, each
task may use all consumable resources for its own purpose.
Doing this produces a higher value for each task, than the
one obtained when planning for all tasks globally with the
shared limited resources.
Computing the maxU bound in a state has a
complexity of O(|A| × |T a|), and O(|T a|) for SinghU. A standard
Bellman backup has a complexity of O(|A| × |S|). Since
|A| × |Ta| ≪ |A| × |S|, the computation time to determine the
upper bound of a state, which is done one time for each
visited state, is much less than the computation time required
to compute a standard Bellman backup for a state, which is
usually done many times for each visited state. Thus, the
computation time of the upper bound is negligible.
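The maxU bound of Equation 4 can be sketched in a few lines of Python; task_q, sub_action, global_actions, and local_state are hypothetical helpers (per-task Q-functions, the projection of a global action onto a task, the resource-feasible global actions of s, and the task-local view of s), used purely for illustration.

def max_u_bound(s, global_actions, sub_action, task_q, tasks, local_state):
    # h_U(s) = max over feasible global actions a of sum_ta Q_ta(a_ta, s_ta).
    # Only globally feasible actions are enumerated, which is what tightens
    # this bound relative to SinghU.
    best = float("-inf")
    for a in global_actions(s):
        total = sum(task_q[ta][(sub_action(a, ta), local_state(s, ta))] for ta in tasks)
        best = max(best, total)
    return best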
2.6 Increasing the Lower Bound
The idea to increase SinghL is to allocate the resources
a priori among the tasks. When each task has its own set
of resources, each task may be solved independently. The
lower bound of state s is h_L(s) = Σ_{ta ∈ Ta} Low_{ta}(s_{ta}), where
Lowta(sta) is a value function for each task ta ∈ T a, such
that the resources have been allocated a priori. The
allocation a priori of the resources is made using marginal revenue,
which is a widely used concept in microeconomics [7], and
has recently been used for coordination of a Decentralized
mdp [2]. In brief, marginal revenue is the extra revenue that
an additional unit of product will bring to a firm. Thus,
for a stochastic resource allocation problem, the marginal
revenue of a resource is the additional expected value it
involves. The marginal revenue of a resource res for a task ta
in a state s_{ta} is defined as follows:

mr_{ta}(s_{ta}) = max_{a_{ta} ∈ A(s_{ta})} Q_{ta}(a_{ta}, s_{ta}) − max_{a_{ta} ∈ A(s_{ta})} Q_{ta}(a_{ta} | res ∉ a_{ta}, s_{ta})   (5)
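A minimal Python sketch of the marginal revenue of Equation 5 follows; the per-task Q-function task_q and the predicate uses_resource are hypothetical stand-ins for the task models, and the zero fallback when every action needs res is an assumption made for the example.

def marginal_revenue(task_q, actions, s_ta, res, uses_resource):
    # Equation (5): value of the best action minus the best action that does not use res.
    # task_q[(a, s_ta)] and uses_resource(a, res) are hypothetical task-level helpers.
    best_with = max(task_q[(a, s_ta)] for a in actions)
    without = [task_q[(a, s_ta)] for a in actions if not uses_resource(a, res)]
    best_without = max(without) if without else 0.0   # assumption: fall back to 0 if every action needs res
    return best_with - best_without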
The concept of marginal revenue of a resource is used in
Algorithm 4 to allocate the resources a priori among the
tasks, which enables defining the lower bound value of a
state. In Line 4 of the algorithm, a value function is
computed for all tasks in the environment using a standard
lrtdp [4] approach. These value functions, which are also
used for the upper bound, are computed considering that
each task may use all available resources. The Line 5
initializes the valueta variable. This variable is the estimated
value of each task ta ∈ T a. In the beginning of the
algorithm, no resources are allocated to a specific task, thus the
valueta variable is initialized to 0 for all ta ∈ T a. Then, in
Line 9, a resource type res (consumable or non-consumable)
is selected to be allocated. Here, a domain expert may
separate all available resources in many types or parts to be
allocated. The resources are allocated in order of their specialization. In other words, the more efficient a resource is on a small group of tasks, the earlier it is allocated. Allocating the resources in this order improves the quality of the resulting lower bound. Line 12 computes the
marginal revenue of a consumable resource res for each task
ta ∈ T a. For a non-consumable resource, since the resource
is not considered in the state space, all other reachable states
from sta consider that the resource res is still usable. The
approach here is to sum the difference between the real value of a state and the maximal Q-value of this state when resource res cannot be used, for all states in a trajectory given by the policy of task ta. This heuristic proved to obtain good results,
but other ones may be tried, for example Monte-Carlo
simulation. In Line 21, the marginal revenue is updated in
function of the resources already allocated to each task. R(s_{g_{ta}}) is the reward for realizing task ta. Thus, (V_{ta}(s_{ta}) − value_{ta}) / R(s_{g_{ta}}) is the residual expected value that remains to be achieved, knowing the current allocation to task ta, and normalized by the reward of realizing the task. The marginal revenue is
multiplied by this term to indicate that the higher the residual value of a task, the higher its marginal revenue is going to be. Then, a task ta is selected in Line 23 with
the highest marginal revenue, adjusted with residual value.
In Line 24, the resource type res is allocated to the group
of resources Res_{ta} of task ta. Afterwards, Line 29 recom-
Algorithm 4 The marginal revenue lower bound algorithm.
1: Function revenue-bound(S)
2: returns a lower bound LowT a
3: for all ta ∈ T a do
4: Vta ←lrtdp(Sta)
5: valueta ← 0
6: end for
7: s ← s0
8: repeat
9: res ← Select a resource type res ∈ Res
10: for all ta ∈ T a do
11: if res is consumable then
12: mrta(sta) ← Vta(sta) − Vta(sta(Res \ res))
13: else
14: mrta(sta) ← 0
15: repeat
16: mr_{ta}(s_{ta}) ← mr_{ta}(s_{ta}) + V_{ta}(s_{ta}) − max_{a_{ta} ∈ A(s_{ta}) | res ∉ a_{ta}} Q_{ta}(a_{ta}, s_{ta})
17: sta ← sta.pickNextState(Resc)
18: until sta is a goal
19: s ← s0
20: end if
21: mrrv_{ta}(s_{ta}) ← mr_{ta}(s_{ta}) × (V_{ta}(s_{ta}) − value_{ta}) / R(s_{g_{ta}})
22: end for
23: ta ← Task ta ∈ T a which maximize mrrvta(sta)
24: Res_{ta} ← Res_{ta} ∪ {res}
25: temp ← ∅
26: if res is consumable then
27: temp ← res
28: end if
29: value_{ta} ← value_{ta} + ((V_{ta}(s_{ta}) − value_{ta}) × max_{a_{ta} ∈ A(s_{ta}, res)} Q_{ta}(a_{ta}, s_{ta}(temp)) / V_{ta}(s_{ta}))
30: until all resource types res ∈ Res are assigned
31: for all ta ∈ T a do
32: Lowta ←lrtdp(Sta, Resta)
33: end for
34: return LowT a
putes valueta. The first part of the equation to compute
valueta represents the expected residual value for task ta.
This term is multiplied by max_{a_{ta} ∈ A(s_{ta})} Q_{ta}(a_{ta}, s_{ta}(res)) / V_{ta}(s_{ta}), which is the ratio of the efficiency of resource type res. In other
words, valueta is assigned to valueta + (the residual value
× the value ratio of resource type res). For a consumable
resource, the Q-value considers only resource res in the state
space, while for a non-consumable resource, no resources are
available.
All resource types are allocated in this manner until Res is
empty. All consumable and non-consumable resource types
are allocated to each task. When all resources are allocated,
the lower bound components Lowta of each task are
computed in Line 32. When the global solution is computed, the lower bound is as follows:

h_L(s) = max(SinghL, max_{a ∈ A(s)} Σ_{ta ∈ Ta} Low_{ta}(s_{ta}))   (6)
We use the maximum of the SinghL bound and the sum
of the lower bound components Low_{ta}; thus marginal-revenue ≥ SinghL. In particular, the SinghL bound may be higher when a small number of tasks remain, since the components Low_{ta} are computed considering s0; for example, if in a subsequent state only one task remains, the SinghL bound will be higher than any of the Low_{ta} components.
The main difference of complexity between SinghL and
revenue-bound is in Line 32 where a value for each task
has to be computed with the shared resources. However, since the resources are shared, the state space and action space are greatly reduced for each task, greatly reducing the computation compared to the value functions computed in Line 4, which is done for both SinghL and revenue-bound.
Theorem 2.2. The lower bound of Equation 6 is
admissible.
Proof: Lowta(sta) is computed with the resource being
shared. Summing the Low_{ta}(s_{ta}) value functions for each ta ∈ Ta does not violate the local and global resource
constraints. Indeed, as the resources are shared, the tasks
cannot overuse them. Thus, hL(s) is a realizable policy, and an
admissible lower bound.
3. DISCUSSION AND EXPERIMENTS
The domain of the experiments is a naval platform which
must counter incoming missiles (i.e. tasks) by using its
resources (i.e. weapons, movements). For the experiments,
100 random resource allocation problems were generated for each approach and each possible number of tasks. In our
problem, |Sta| = 4, thus each task can be in four distinct
states. There are two types of states; firstly, states where
actions modify the transition probabilities; and then, there
are goal states. The state transitions are all stochastic
because when a missile is in a given state, it may always transit
in many possible states. In particular, each resource type
has a probability to counter a missile between 45% and 65%
depending on the state of the task. When a missile is not
countered, it transits to another state, which may be
preferred or not to the current state, where the most preferred
state for a task is when it is countered. The effectiveness
of each resource is modified randomly by ±15% at the start
of a scenario. There are also local and global resource
constraints on the amount that may be used. For the local
constraints, at most 1 resource of each type can be
allocated to execute tasks in a specific state. This constraint is
also present on a real naval platform because of sensor and
launcher constraints and engagement policies. Furthermore,
for consumable resources, the total amount of available
consumable resource is between 1 and 2 for each type. The
global constraint is generated randomly at the start of a
scenario for each consumable resource type. The number of
resource types has been fixed to 5, where there are 3 consumable resource types and 2 non-consumable resource types.
For this problem a standard lrtdp approach has been
implemented. A simple heuristic has been used where the
value of an unvisited state is assigned as the value of a goal
state such that all tasks are achieved. This way, the value
of each unvisited state is assured to overestimate its real
value since the value of achieving a task ta is the highest
the planner may get for ta. Since this heuristic is pretty
straightforward, the advantages of using better heuristics are
more evident. Nevertheless, even if the lrtdp approach uses
a simple heuristic, still a huge part of the state space is not
visited when computing the optimal policy. The approaches
described in this paper are compared in Figures 1 and 2.
Let us summarize these approaches here:
• Qdec-lrtdp: The backups are computed using the
Qdec-backup function (Algorithm 1), but in a lrtdp
context. In particular the updates made in the
checkSolved function are also made using the Qdec-backup function.
• lrtdp-up: The upper bound of maxU is used for
lrtdp.
• Singh-rtdp: The SinghL and SinghU bounds are
used for bounded-rtdp.
• mr-rtdp: The revenue-bound and maxU bounds
are used for bounded-rtdp.
To implement Qdec-lrtdp, we divided the set of tasks
in two equal parts. The set of tasks Ta_i, managed by agent i, can be accomplished with the set of resources Res_i, while the second set of tasks Ta_i', managed by agent Ag_i', can be accomplished with the set of resources Res_i'. Res_i had one consumable resource type and one non-consumable resource type, while Res_i' had two consumable resource types and one non-consumable resource type. When the number of tasks is odd, one more task was assigned to Ta_i'. There are constraints between the groups of resources Res_i and Res_i' such that some assignments are not possible. These
constraints are managed by the arbitrator as described in
Section 2.2. Q-decomposition diminishes the planning
time significantly in our problem settings, and seems a very
efficient approach when a group of agents have to allocate
resources which are only available to themselves, but the
actions made by an agent may influence the reward obtained
by at least another agent.
To compute the lower bound of revenue-bound, all
available resources have to be separated in many types or
parts to be allocated. For our problem, we allocated each
resource of each type in order of its specialization, as described for the revenue-bound function.
In terms of experiments, notice that the lrtdp and lrtdp-up approaches for resource allocation, which do not prune the action space, are much more complex. For instance, it
took an average of 1512 seconds to plan for the lrtdp-up
approach with six tasks (see Figure 1). The Singh-rtdp
approach diminished the planning time by using a lower
and upper bound to prune the action space. mr-rtdp
further reduce the planning time by providing very tight
initial bounds. In particular, Singh-rtdp needed 231 seconds
on average to solve problems with six tasks, and mr-rtdp
required 76 seconds. Indeed, the time reduction is quite
significant compared to lrtdp-up, which demonstrates the
efficiency of using bounds to prune the action space.
Furthermore, we implemented mr-rtdp with the SinghU
bound, and this was slightly less efficient than with the
maxU bound. We also implemented mr-rtdp with the
SinghL bound, and this was slightly more efficient than
Singh-rtdp. From these results, we conclude that the
difference of efficiency between mr-rtdp and Singh-rtdp is
more attributable to the marginal-revenue lower bound
than to the maxU upper bound. Indeed, when the number of tasks to execute is high, the lower bound of Singh-rtdp takes the value of a single task. On the other hand, the
lower bound of mr-rtdp takes into account the value of all
Figure 1: Efficiency of Q-decomposition LRTDP and LRTDP (planning time in seconds, log scale, versus the number of tasks, for LRTDP and QDEC-LRTDP).
Figure 2: Efficiency of MR-RTDP compared to SINGH-RTDP (planning time in seconds, log scale, versus the number of tasks, for LRTDP, LRTDP-up, Singh-RTDP, and MR-RTDP).
tasks by using a heuristic to distribute the resources.
Indeed, an optimal allocation is one where the resources are
distributed in the best way to all tasks, and our lower bound
heuristically does that.
4. CONCLUSION
The experiments have shown that Q-decomposition seems
a very efficient approach when a group of agents have to
allocate resources which are only available to themselves,
but the actions made by an agent may influence the reward
obtained by at least another agent.
On the other hand, when the available resources are
shared, no Q-decomposition is possible and we proposed
tight bounds for heuristic search. In this case, the
planning time of bounded-rtdp, which prunes the action space,
is significantly lower than for lrtdp. Furthermore, the marginal revenue bound proposed in this paper compares favorably to the Singh and Cohn [10] approach. bounded-rtdp with our proposed bounds may apply to a wide range of stochastic environments. The only condition for the use of our bounds is that each task possesses consumable and/or
non-consumable limited resources.
An interesting research avenue would be to experiment with our bounds in other heuristic search algorithms. For
instance, frtdp [11], and brtdp [6] are both efficient
heuristic search algorithms. In particular, both these approaches
proposed efficient state trajectory updates when given upper and lower bounds. Our tight bounds would enable both frtdp and brtdp to reduce the number of backups to perform before convergence. Finally, the bounded-rtdp
function prunes the action space when QU (a, s) ≤ L(s), as
Singh and Cohn [10] suggested. frtdp and brtdp could
also prune the action space in these circumstances to
further reduce their planning time.
5. REFERENCES
[1] A. Barto, S. Bradtke, and S. Singh. Learning to act
using real-time dynamic programming. Artificial
Intelligence, 72(1):81-138, 1995.
[2] A. Beynier and A. I. Mouaddib. An iterative algorithm
for solving constrained decentralized markov decision
processes. In Proceeding of the Twenty-First National
Conference on Artificial Intelligence (AAAI-06), 2006.
[3] B. Bonet and H. Geffner. Faster heuristic search
algorithms for planning with uncertainty and full
feedback. In Proceedings of the Eighteenth
International Joint Conference on Artificial
Intelligence (IJCAI-03), August 2003.
[4] B. Bonet and H. Geffner. Labeled RTDP:
Improving the convergence of real-time dynamic
programming. In Proceeding of the Thirteenth
International Conference on Automated Planning &
Scheduling (ICAPS-03), pages 12-21, Trento, Italy,
2003.
[5] E. A. Hansen and S. Zilberstein. LAO*: A heuristic
search algorithm that finds solutions with loops.
Artificial Intelligence, 129(1-2):35-62, 2001.
[6] H. B. McMahan, M. Likhachev, and G. J. Gordon.
Bounded real-time dynamic programming: rtdp with
monotone upper bounds and performance guarantees.
In ICML '05: Proceedings of the Twenty-Second
International Conference on Machine learning, pages
569-576, New York, NY, USA, 2005. ACM Press.
[7] R. S. Pindyck and D. L. Rubinfeld. Microeconomics.
Prentice Hall, 2000.
[8] G. A. Rummery and M. Niranjan. On-line Q-learning
using connectionist systems. Technical report
CUED/FINFENG/TR 166, Cambridge University
Engineering Department, 1994.
[9] S. J. Russell and A. Zimdars. Q-decomposition for
reinforcement learning agents. In ICML, pages
656-663, 2003.
[10] S. Singh and D. Cohn. How to dynamically merge
markov decision processes. In Advances in Neural
Information Processing Systems, volume 10, pages
1057-1063, Cambridge, MA, USA, 1998. MIT Press.
[11] T. Smith and R. Simmons. Focused real-time dynamic
programming for mdps: Squeezing more out of a
heuristic. In Proceedings of the Twenty-First National
Conference on Artificial Intelligence (AAAI), Boston,
USA, 2006.
[12] W. Zhang. Modeling and solving a resource allocation
problem with soft constraint techniques. Technical
report: wucs-2002-13, Washington University,
Saint-Louis, Missouri, 2002.
The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1219 | planning agent;real-time dynamic programming;heuristic search;reward separated agent;stochastic environment;marginal revenue;markov decision process;complex stochastic resource allocation problem;marginal revenue bound;resource management;resource allocation;q-decomposition |
train_I-63 | Combinatorial Resource Scheduling for Multiagent MDPs | Optimal resource scheduling in multiagent systems is a computationally challenging task, particularly when the values of resources are not additive. We consider the combinatorial problem of scheduling the usage of multiple resources among agents that operate in stochastic environments, modeled as Markov decision processes (MDPs). In recent years, efficient resource-allocation algorithms have been developed for agents with resource values induced by MDPs. However, this prior work has focused on static resource-allocation problems where resources are distributed once and then utilized in infinite-horizon MDPs. We extend those existing models to the problem of combinatorial resource scheduling, where agents persist only for finite periods between their (predefined) arrival and departure times, requiring resources only for those time periods. We provide a computationally efficient procedure for computing globally optimal resource assignments to agents over time. We illustrate and empirically analyze the method in the context of a stochastic jobscheduling domain. | 1. INTRODUCTION
The tasks of optimal resource allocation and scheduling
are ubiquitous in multiagent systems, but solving such
optimization problems can be computationally difficult, due to
a number of factors. In particular, when the value of a set of
resources to an agent is not additive (as is often the case with
resources that are substitutes or complements), the utility
function might have to be defined on an exponentially large
space of resource bundles, which very quickly becomes
computationally intractable. Further, even when each agent has
a utility function that is nonzero only on a small subset of
the possible resource bundles, obtaining optimal allocation
is still computationally prohibitive, as the problem becomes
NP-complete [14].
Such computational issues have recently spawned several
threads of work in using compact models of agents"
preferences. One idea is to use any structure present in utility
functions to represent them compactly, via, for example,
logical formulas [15, 10, 4, 3]. An alternative is to directly model
the mechanisms that define the agents" utility functions and
perform resource allocation directly with these models [9]. A
way of accomplishing this is to model the processes by which
an agent might utilize the resources and define the utility
function as the payoff of these processes. In particular, if
an agent uses resources to act in a stochastic environment,
its utility function can be naturally modeled with a Markov
decision process, whose action set is parameterized by the
available resources. This representation can then be used to
construct very efficient resource-allocation algorithms that
lead to an exponential speedup over a straightforward
optimization problem with flat representations of combinatorial
preferences [6, 7, 8].
However, this existing work on resource allocation with
preferences induced by resource-parameterized MDPs makes
an assumption that the resources are only allocated once and
are then utilized by the agents independently within their
infinite-horizon MDPs. This assumption that no reallocation
of resources is possible can be limiting in domains where
agents arrive and depart dynamically.
In this paper, we extend the work on resource allocation
under MDP-induced preferences to discrete-time scheduling
problems, where agents are present in the system for finite
time intervals and can only use resources within these
intervals. In particular, agents arrive and depart at arbitrary
(predefined) times and within these intervals use resources
to execute tasks in finite-horizon MDPs. We address the
problem of globally optimal resource scheduling, where the
objective is to find an allocation of resources to the agents
across time that maximizes the sum of the expected rewards
that they obtain.
In this context, our main contribution is a
mixed-integer programming formulation of the scheduling problem that
chooses globally optimal resource assignments, starting times,
and execution horizons for all agents (within their
arrival-
departure intervals). We analyze and empirically compare
two flavors of the scheduling problem: one, where agents
have static resource assignments within their finite-horizon
MDPs, and another, where resources can be dynamically
reallocated between agents at every time step.
In the rest of the paper, we first lay down the necessary
groundwork in Section 2 and then introduce our model and
formal problem statement in Section 3. In Section 4.2, we
describe our main result, the optimization program for
globally optimal resource scheduling. Following the discussion of
our experimental results on a job-scheduling problem in
Section 5, we conclude in Section 6 with a discussion of possible
extensions and generalizations of our method.
2. BACKGROUND
Similarly to the model used in previous work on
resource allocation with MDP-induced preferences [6, 7], we define
the value of a set of resources to an agent as the value of the
best MDP policy that is realizable, given those resources.
However, since the focus of our work is on scheduling
problems, and a large part of the optimization problem is to
decide how resources are allocated in time among agents
with finite arrival and departure times, we model the agents"
planning problems as finite-horizon MDPs, in contrast to
previous work that used infinite-horizon discounted MDPs.
In the rest of this section, we first introduce some
necessary background on finite-horizon MDPs and present a
linear-programming formulation that serves as the basis for
our solution algorithm developed in Section 4. We also
outline the standard methods for combinatorial resource
scheduling with flat resource values, which serve as a comparison
benchmark for the new model developed here.
2.1 Markov Decision Processes
A stationary, finite-domain, discrete-time MDP (see, for
example, [13] for a thorough and detailed development) can
be described as S, A, p, r , where: S is a finite set of
system states; A is a finite set of actions that are available to
the agent; p is a stationary stochastic transition function,
where p(σ|s, a) is the probability of transitioning to state σ
upon executing action a in state s; r is a stationary reward
function, where r(s, a) specifies the reward obtained upon
executing action a in state s.
Given such an MDP, a decision problem under a finite
horizon T is to choose an optimal action at every time step
to maximize the expected value of the total reward accrued
during the agent"s (finite) lifetime. The agent"s optimal
policy is then a function of current state s and the time until
the horizon. An optimal policy for such a problem is to act
greedily with respect to the optimal value function, defined
recursively by the following system of finite-time Bellman
equations [2]:
v(s, t) = max_a [ r(s, a) + Σ_σ p(σ|s, a) v(σ, t + 1) ],  ∀s ∈ S, t ∈ [1, T − 1];
v(s, T) = 0,  ∀s ∈ S;
where v(s, t) is the optimal value of being in state s at time
t ∈ [1, T].
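For concreteness, a minimal Python sketch of this backward induction follows; the dictionary encoding of the MDP (p, r) is an illustrative assumption rather than the paper's data structures.

def finite_horizon_values(S, A, p, r, T):
    # Backward induction for the finite-horizon Bellman equations.
    # p[(s, a)] -> list of (next_state, prob); r[(s, a)] -> immediate reward.
    # Returns v[(s, t)] for t = 1..T and a greedy policy pi[(s, t)] for t < T.
    v = {(s, T): 0.0 for s in S}
    pi = {}
    for t in range(T - 1, 0, -1):          # t = T-1, ..., 1
        for s in S:
            best_a, best_val = None, float("-inf")
            for a in A:
                val = r[(s, a)] + sum(prob * v[(s2, t + 1)] for s2, prob in p[(s, a)])
                if val > best_val:
                    best_a, best_val = a, val
            v[(s, t)], pi[(s, t)] = best_val, best_a
    return v, pi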
This optimal value function can be easily computed using
dynamic programming, leading to the following optimal
policy π, where π(s, a, t) is the probability of executing action
a in state s at time t:
π(s, a, t) = 1 if a = argmax_a [ r(s, a) + Σ_σ p(σ|s, a) v(σ, t + 1) ], and 0 otherwise.
The above is the most common way of computing the
optimal value function (and therefore an optimal policy) for
a finite-horizon MDP. However, we can also formulate the
problem as the following linear program (similarly to the
dual LP for infinite-horizon discounted MDPs [13, 6, 7]):
max Σ_s Σ_a r(s, a) Σ_t x(s, a, t)
subject to:
Σ_a x(σ, a, t + 1) = Σ_{s,a} p(σ|s, a) x(s, a, t),  ∀σ, t ∈ [1, T − 1];
Σ_a x(s, a, 1) = α(s),  ∀s ∈ S;   (1)
where α(s) is the initial distribution over the state space, and
x is the (non-stationary) occupation measure (x(s, a, t) ∈
[0, 1] is the total expected number of times action a is
executed in state s at time t). An optimal (non-stationary)
policy is obtained from the occupation measure as follows:
π(s, a, t) = x(s, a, t) / Σ_a x(s, a, t),  ∀s ∈ S, t ∈ [1, T].   (2)
Note that the standard unconstrained finite-horizon MDP,
as described above, always has a uniformly-optimal
solution (optimal for any initial distribution α(s)). Therefore,
an optimal policy can be obtained by using an arbitrary
constant α(s) > 0 (in particular, α(s) = 1 will result in
x(s, a, t) = π(s, a, t)).
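The occupation-measure LP (1) and the policy extraction (2) can be sketched with the PuLP modeling library as below; this is an illustrative, self-contained toy formulation (the two-state MDP and all variable names are invented for the example), not the authors' code.

import pulp

# Toy finite-horizon MDP: transitions p[(s,a)] = [(s2, prob)], rewards r[(s,a)].
S, A, T = ["s0", "s1"], ["a0", "a1"], 3
p = {(s, a): [("s0", 0.5), ("s1", 0.5)] for s in S for a in A}
r = {("s0", "a0"): 1.0, ("s0", "a1"): 0.0, ("s1", "a0"): 0.0, ("s1", "a1"): 2.0}
alpha = {"s0": 1.0, "s1": 0.0}

prob = pulp.LpProblem("finite_horizon_dual_lp", pulp.LpMaximize)
x = {(s, a, t): pulp.LpVariable(f"x_{s}_{a}_{t}", lowBound=0)
     for s in S for a in A for t in range(1, T + 1)}

prob += pulp.lpSum(r[(s, a)] * x[(s, a, t)] for s in S for a in A for t in range(1, T + 1))
for t in range(1, T):                      # conservation of flow
    for sigma in S:
        prob += (pulp.lpSum(x[(sigma, a, t + 1)] for a in A) ==
                 pulp.lpSum(prb * x[(s, a, t)] for s in S for a in A
                            for s2, prb in p[(s, a)] if s2 == sigma))
for s in S:                                # initial distribution
    prob += pulp.lpSum(x[(s, a, 1)] for a in A) == alpha[s]
prob.solve()

# Policy extraction, Equation (2): pi(s,a,t) = x(s,a,t) / sum_a x(s,a,t)
for (s, a, t), var in sorted(x.items()):
    total = sum(x[(s, b, t)].value() for b in A)
    if total and total > 1e-9:
        print(s, a, t, round(var.value() / total, 3))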
However, for MDPs with resource constraints (as defined
below in Section 3), uniformly-optimal policies do not in
general exist. In such cases, α becomes a part of the
problem input, and a resulting policy is only optimal for that
particular α. This result is well known for infinite-horizon
MDPs with various types of constraints [1, 6], and it also
holds for our finite-horizon model, which can be easily
established via a line of reasoning completely analogous to the
arguments in [6].
2.2 Combinatorial Resource Scheduling
A straightforward approach to resource scheduling for a
set of agents M, whose values for the resources are induced
by stochastic planning problems (in our case, finite-horizon
MDPs) would be to have each agent enumerate all possible
resource assignments over time and, for each one, compute
its value by solving the corresponding MDP. Then, each
agent would provide valuations for each possible resource
bundle over time to a centralized coordinator, who would
compute the optimal resource assignments across time based
on these valuations.
When resources can be allocated at different times to
different agents, each agent must submit valuations for
every combination of possible time horizons. Let each agent
m ∈ M execute its MDP within the arrival-departure time
interval τ ∈ [τ_m^a, τ_m^d]. Hence, agent m will execute an MDP with time horizon no greater than Tm = τ_m^d − τ_m^a + 1. Let bτ be
the global time horizon for the problem, before which all of
the agents" MDPs must finish. We assume τd
m < bτ, ∀m ∈ M.
For the scheduling problem where agents have static
resource requirements within their finite-horizon MDPs, the
agents provide a valuation for each resource bundle for each
possible time horizon (from [1, Tm]) that they may use. Let
Ω be the set of resources to be allocated among the agents.
An agent will get at most one resource bundle for one of the
time horizons. Let the variable ψ ∈ Ψm enumerate all
possible pairs of resource bundles and time horizons for agent
m, so there are 2^{|Ω|} × Tm values for ψ (the space of bundles
is exponential in the number of resource types |Ω|).
The agent m must provide a value v_m^ψ for each ψ, and the coordinator will allocate at most one ψ (resource, time horizon) pair to each agent. This allocation is expressed as an indicator variable z_m^ψ ∈ {0, 1} that shows whether ψ is
assigned to agent m. For time τ and resource ω, the function
nm(ψ, τ, ω) ∈ {0, 1} indicates whether the bundle in ψ uses
resource ω at time τ (we make the assumption that agents
have binary resource requirements). This allocation problem
is NP-complete, even when considering only a single time
step, and its difficulty increases significantly with multiple
time steps because of the increasing number of values of ψ.
The problem of finding an optimal allocation that satisfies
the global constraint that the amount of each resource ω
allocated to all agents does not exceed the available amount
bϕ(ω) can be expressed as the following integer program:
max Σ_{m ∈ M} Σ_{ψ ∈ Ψm} z_m^ψ v_m^ψ
subject to:
Σ_{ψ ∈ Ψm} z_m^ψ ≤ 1,  ∀m ∈ M;
Σ_{m ∈ M} Σ_{ψ ∈ Ψm} z_m^ψ n_m(ψ, τ, ω) ≤ bϕ(ω),  ∀τ ∈ [1, bτ], ∀ω ∈ Ω;   (3)
The first constraint in equation 3 says that no agent can
receive more than one bundle, and the second constraint
ensures that the total assignment of resource ω does not, at
any time, exceed the resource bound.
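To illustrate the flat formulation (3), here is a small PuLP sketch of the one-bundle-per-agent assignment problem; the agents, bundles, values, and resource usage below are toy data invented for the example, not data from the paper.

import pulp

agents = ["m1", "m2"]
resources = ["w1", "w2"]
horizon = range(1, 4)                     # global time steps 1..3
capacity = {"w1": 1, "w2": 1}
# psi enumerates (bundle, time-horizon) options per agent; value v and usage n are toy numbers.
options = {
    "m1": {"psiA": {"value": 5.0, "uses": {(1, "w1"): 1, (2, "w1"): 1}}},
    "m2": {"psiB": {"value": 4.0, "uses": {(1, "w1"): 1, (1, "w2"): 1}},
           "psiC": {"value": 3.0, "uses": {(3, "w2"): 1}}},
}

prob = pulp.LpProblem("flat_combinatorial_scheduling", pulp.LpMaximize)
z = {(m, psi): pulp.LpVariable(f"z_{m}_{psi}", cat="Binary")
     for m in agents for psi in options[m]}

prob += pulp.lpSum(options[m][psi]["value"] * z[(m, psi)] for m, psi in z)
for m in agents:                           # at most one (bundle, horizon) pair per agent
    prob += pulp.lpSum(z[(m, psi)] for psi in options[m]) <= 1
for t in horizon:                          # resource capacity at every time step
    for w in resources:
        prob += pulp.lpSum(options[m][psi]["uses"].get((t, w), 0) * z[(m, psi)]
                           for m, psi in z) <= capacity[w]
prob.solve()
print({key: var.value() for key, var in z.items()})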
For the scheduling problem where the agents are able to
dynamically reallocate resources, each agent must specify
a value for every combination of bundles and time steps
within its time horizon. Let the variable ψ ∈ Ψm in this case
enumerate all possible resource bundles for which at most
one bundle may be assigned to agent m at each time step.
Therefore, in this case there are Σ_{t ∈ [1,Tm]} (2^{|Ω|})^t ∼ 2^{|Ω|·Tm} possibilities of resource bundles assigned to different time
slots, for the Tm different time horizons.
The same set of equations (3) can be used to solve this
dynamic scheduling problem, but the integer program is
different because of the difference in how ψ is defined. In this
case, the number of ψ values is exponential in each agent"s
planning horizon Tm, resulting in a much larger program.
This straightforward approach to solving both of these
scheduling problems requires an enumeration and solution
of either 2^{|Ω|} Tm (static allocation) or Σ_{t ∈ [1,Tm]} 2^{|Ω|·t}
(dynamic reallocation) MDPs for each agent, which very quickly
becomes intractable with the growth of the number of
resources |Ω| or the time horizon Tm.
3. MODEL AND PROBLEM STATEMENT
We now formally introduce our model of the
resource-scheduling problem. The problem input consists of the
following components:
• M, Ω, bϕ, τ_m^a, τ_m^d, bτ are as defined above in Section 2.2.
• {Θm} = {S, A, pm, rm, αm} are the MDPs of all agents
m ∈ M. Without loss of generality, we assume that state
and action spaces of all agents are the same, but each has
its own transition function pm, reward function rm, and
initial conditions αm.
• ϕm : A×Ω → {0, 1} is the mapping of actions to resources
for agent m. ϕm(a, ω) indicates whether action a of agent
m needs resource ω. An agent m that receives a set of
resources that does not include resource ω cannot execute
in its MDP policy any action a for which ϕm(a, ω) = 1. We
assume all resource requirements are binary; as discussed
below in Section 6, this assumption is not limiting.
Given the above input, the optimization problem we
consider is to find the globally optimal mapping of resources to agents for all time steps, maximizing the sum of expected rewards: Δ : τ × M × Ω → {0, 1}. A solution is feasible
if the corresponding assignment of resources to the agents
does not violate the global resource constraint:
Σ_m Δ_m(τ, ω) ≤ bϕ(ω),  ∀ω ∈ Ω, τ ∈ [1, bτ].   (4)
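A direct way to check the feasibility condition (4) for a candidate schedule is sketched below; delta is a hypothetical nested dict Δ[m][(τ, ω)] ∈ {0, 1} introduced only for this example.

def is_feasible(delta, agents, resources, horizon, capacity):
    # Check Equation (4): for every time step and resource, total usage <= capacity.
    for t in range(1, horizon + 1):
        for w in resources:
            if sum(delta[m].get((t, w), 0) for m in agents) > capacity[w]:
                return False
    return True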
We consider two flavors of the resource-scheduling
problem. The first formulation restricts resource assignments to
the space where the allocation of resources to each agent is
static during the agent"s lifetime. The second formulation
allows reassignment of resources between agents at every time
step within their lifetimes.
Figure 1 depicts a resource-scheduling problem with three
agents M = {m1, m2, m3}, three resources Ω = {ω1, ω2, ω3},
and a global problem horizon of bτ = 11. The agents" arrival
and departure times are shown as gray boxes and are {1, 6},
{3, 7}, and {2, 11}, respectively. A solution to this problem
is shown via horizontal bars within each agents" box, where
the bars correspond to the allocation of the three resource
types. Figure 1a shows a solution to a static scheduling
problem. According to the shown solution, agent m1 begins the
execution of its MDP at time τ = 1 and has a lock on all
three resources until it finishes execution at time τ = 3. Note
that agent m1 relinquishes its hold on the resources before
its announced departure time of τ_{m1}^d = 6, ostensibly because
other agents can utilize the resources more effectively. Thus,
at time τ = 4, resources ω1 and ω3 are allocated to agent
m2, who then uses them to execute its MDP (using only
actions supported by resources ω1 and ω3) until time τ = 7.
Agent m3 holds resource ω3 during the interval τ ∈ [4, 10].
Figure 1b shows a possible solution to the dynamic version
of the same problem. There, resources can be reallocated
between agents at every time step. For example, agent m1
gives up its use of resource ω2 at time τ = 2, although it
continues the execution of its MDP until time τ = 6. Notice
that an agent is not allowed to stop and restart its MDP, so
agent m1 is only able to continue executing in the interval
τ ∈ [3, 4] if it has actions that do not require any resources
(ϕm(a, ω) = 0).
Clearly, the model and problem statement described above
make a number of assumptions about the problem and the
desired solution properties. We discuss some of those
assumptions and their implications in Section 6.
(a) (b)
Figure 1: Illustration of a solution to a resource-scheduling problem with three agents and three resources: a) static
resource assignments (resource assignments are constant within agents' lifetimes); b) dynamic assignment (resource
assignments are allowed to change at every time step).
4. RESOURCE SCHEDULING
Our resource-scheduling algorithm proceeds in two stages.
First, we perform a preprocessing step that augments the
agent MDPs; this process is described in Section 4.1.
Second, using these augmented MDPs we construct a global
optimization problem, which is described in Section 4.2.
4.1 Augmenting Agents" MDPs
In the model described in the previous section, we assume
that if an agent does not possess the necessary resources to
perform actions in its MDP, its execution is halted and the
agent leaves the system. In other words, the MDPs cannot
be paused and resumed. For example, in the problem
shown in Figure 1a, agent m1 releases all resources after time
τ = 3, at which point the execution of its MDP is halted.
Similarly, agents m2 and m3 only execute their MDPs in the
intervals τ ∈ [4, 6] and τ ∈ [4, 10], respectively. Therefore, an
important part of the global decision-making problem is to
decide the window of time during which each of the agents
is active (i.e., executing its MDP).
To accomplish this, we augment each agent's MDP with two new states (start and finish states s^b and s^f, respectively) and a new start/stop action a*, as illustrated in
Figure 2. The idea is that an agent stays in the start state
sb
until it is ready to execute its MDP, at which point it
performs the start/stop action a∗
and transitions into the
state space of the original MDP with the transition
probability that corresponds to the original initial distribution
α(s). For example, in Figure 1a, for agent m2 this would
happen at time τ = 4. Once the agent gets to the end of its
activity window (time τ = 6 for agent m2 in Figure 1a), it
performs the start/stop action, which takes it into the sink
finish state sf
at time τ = 7.
More precisely, given an MDP ⟨S, A, pm, rm, αm⟩, we define an augmented MDP ⟨S', A', p'm, r'm, α'm⟩ as follows:

S' = S ∪ {s^b, s^f};  A' = A ∪ {a*};
p'(s | s^b, a*) = α(s), ∀s ∈ S;  p'(s^b | s^b, a) = 1.0, ∀a ∈ A;
p'(s^f | s, a*) = 1.0, ∀s ∈ S;
p'(σ | s, a) = p(σ | s, a), ∀s, σ ∈ S, a ∈ A;
r'(s^b, a) = r'(s^f, a) = 0, ∀a ∈ A';
r'(s, a) = r(s, a), ∀s ∈ S, a ∈ A;
α'(s^b) = 1;  α'(s) = 0, ∀s ∈ S;
where all non-specified transition probabilities are assumed
to be zero. Further, in order to account for the new starting
state, we begin the MDP one time-step earlier, setting τ_m^a ← τ_m^a − 1. This will not affect the resource allocation due to
the resource constraints only being enforced for the original
MDP states, as will be discussed in the next section. For
example, the augmented MDPs shown in Figure 2b (which
starts in state sb
at time τ = 2) would be constructed from
an MDP with original arrival time τ = 3. Figure 2b also
shows a sample trajectory through the state space: the agent
starts in state sb
, transitions into the state space S of the
original MDP, and finally exits into the sink state s^f.
Note that if we wanted to model a problem where agents
could pause their MDPs at arbitrary time steps (which might
be useful for domains where dynamic reallocation is
possible), we could easily accomplish this by including an extra
action that transitions from each state to itself with zero
reward.
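The augmentation of Section 4.1 can be sketched as a small Python function; the dictionary-based MDP encoding, the sink behavior of s^f, and the zero reward for the stop action from original states are assumptions made for the example, not the paper's data structures.

START, FINISH, START_STOP = "s_b", "s_f", "a_star"   # names invented for this sketch

def augment_mdp(S, A, p, r, alpha):
    # Build the augmented MDP of Section 4.1 from dictionary-encoded inputs:
    # p[(s, a)] -> {next_state: prob}, r[(s, a)] -> reward, alpha[s] -> initial prob.
    S2 = list(S) + [START, FINISH]
    A2 = list(A) + [START_STOP]
    p2 = {key: dict(dist) for key, dist in p.items()}
    r2 = dict(r)
    p2[(START, START_STOP)] = dict(alpha)        # a* from s_b enters the original MDP via alpha
    for a in A:
        p2[(START, a)] = {START: 1.0}            # original actions just wait in the start state
        r2[(START, a)] = 0.0
        r2[(FINISH, a)] = 0.0
        p2[(FINISH, a)] = {FINISH: 1.0}          # assumption: s_f is an absorbing sink
    for s in S:
        p2[(s, START_STOP)] = {FINISH: 1.0}      # a* from any original state stops the MDP
        r2[(s, START_STOP)] = 0.0                # assumption: stopping earns no reward
    r2[(START, START_STOP)] = 0.0
    r2[(FINISH, START_STOP)] = 0.0
    p2[(FINISH, START_STOP)] = {FINISH: 1.0}     # assumption: sink under a* as well
    alpha2 = {s: 0.0 for s in S2}
    alpha2[START] = 1.0                          # alpha'(s_b) = 1
    return S2, A2, p2, r2, alpha2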
4.2 MILP for Resource Scheduling
Given a set of augmented MDPs, as defined above, the
goal of this section is to formulate a global optimization
program that solves the resource-scheduling problem. In this
section and below, all MDPs are assumed to be the
augmented MDPs as defined in Section 4.1.
Our approach is similar to the idea used in [6]: we
begin with the linear-program formulation of agents" MDPs
(1) and augment it with constraints that ensure that the
corresponding resource allocation across agents and time is
valid. The resulting optimization problem then
simultaneously solves the agents" MDPs and resource-scheduling
problems. In the rest of this section, we incrementally develop a
mixed integer program (MILP) that achieves this.
In the absence of resource constraints, the agents' finite-horizon MDPs are completely independent, and the globally optimal solution can be trivially obtained via the following LP, which is simply an aggregation of single-agent finite-horizon LPs:
max Σ_m Σ_s Σ_a r_m(s, a) Σ_t x_m(s, a, t)
subject to:
Σ_a x_m(σ, a, t + 1) = Σ_{s,a} p_m(σ|s, a) x_m(s, a, t),  ∀m ∈ M, σ ∈ S, t ∈ [1, Tm − 1];
Σ_a x_m(s, a, 1) = α_m(s),  ∀m ∈ M, s ∈ S;   (12)
where xm(s, a, t) is the occupation measure of agent m, and
(a) (b)
Figure 2: Illustration of augmenting an MDP to allow for variable starting and stopping times: a) (left) the original
two-state MDP with a single action; (right) the augmented MDP with new states sb and sf and the new action a∗
(note that the original transitions are not changed in the augmentation process); b) the augmented MDP displayed as a trajectory through time (grey lines indicate all transitions, while black lines indicate a given trajectory).
Objective function (sum of expected rewards over all agents):
max Σ_m Σ_s Σ_a r_m(s, a) Σ_t x_m(s, a, t)   (5)

Constraint (6): tie x to θ (the agent is only active when its occupation measure is nonzero in original MDP states).
Implication: θ_m(τ) = 0 ⇒ x_m(s, a, τ − τ_m^a + 1) = 0, ∀s ∉ {s^b, s^f}, a ∈ A.
Linear constraint: Σ_{s ∉ {s^b,s^f}} Σ_a x_m(s, a, t) ≤ θ_m(τ_m^a + t − 1), ∀m ∈ M, ∀t ∈ [1, Tm].   (6)

Constraint (7): the agent can only be active in τ ∈ (τ_m^a, τ_m^d).
Linear constraint: θ_m(τ) = 0, ∀m ∈ M, τ ∉ (τ_m^a, τ_m^d).   (7)

Constraint (8): resources cannot be used when the agent is not active.
Implication: θ_m(τ) = 0 ⇒ Δ_m(τ, ω) = 0, ∀τ ∈ [0, bτ], ω ∈ Ω.
Linear constraint: Δ_m(τ, ω) ≤ θ_m(τ), ∀m ∈ M, τ ∈ [0, bτ], ω ∈ Ω.   (8)

Constraint (9): tie x to Δ (nonzero x forces the corresponding Δ to be nonzero).
Implication: Δ_m(τ, ω) = 0, ϕ_m(a, ω) = 1 ⇒ x_m(s, a, τ − τ_m^a + 1) = 0, ∀s ∉ {s^b, s^f}.
Linear constraint: (1/|A|) Σ_a ϕ_m(a, ω) Σ_{s ∉ {s^b,s^f}} x_m(s, a, t) ≤ Δ_m(t + τ_m^a − 1, ω), ∀m ∈ M, ω ∈ Ω, t ∈ [1, Tm].   (9)

Constraint (10): resource bounds.
Linear constraint: Σ_m Δ_m(τ, ω) ≤ bϕ(ω), ∀ω ∈ Ω, τ ∈ [0, bτ].   (10)

Constraint (11): the agent cannot change resources while active (only enabled for scheduling with static assignments).
Implication: θ_m(τ) = 1 and θ_m(τ + 1) = 1 ⇒ Δ_m(τ, ω) = Δ_m(τ + 1, ω).
Linear constraints: Δ_m(τ, ω) − Z(1 − θ_m(τ + 1)) ≤ Δ_m(τ + 1, ω) + Z(1 − θ_m(τ)) and Δ_m(τ, ω) + Z(1 − θ_m(τ + 1)) ≥ Δ_m(τ + 1, ω) − Z(1 − θ_m(τ)), ∀m ∈ M, ω ∈ Ω, τ ∈ [0, bτ].   (11)

Table 1: MILP for globally optimal resource scheduling.
Tm = τ_m^d − τ_m^a + 1 is the time horizon for the agent's MDP.
Using this LP as a basis, we augment it with constraints
that ensure that the resource usage implied by the agents"
occupation measures {xm} does not violate the global
resource requirements bϕ at any time step τ ∈ [0, bτ]. To
formulate these resource constraints, we use the following binary
variables:
• Δm(τ, ω) ∈ {0, 1}, ∀m ∈ M, τ ∈ [0, bτ], ω ∈ Ω, which
serve as indicator variables that define whether agent m
possesses resource ω at time τ. These are analogous to
the static indicator variables used in the one-shot static
resource-allocation problem in [6].
• θm(τ) ∈ {0, 1}, ∀m ∈ M, τ ∈ [0, bτ] are indicator variables
that specify whether agent m is active (i.e., executing
its MDP) at time τ.
The meaning of resource-usage variables Δ is illustrated in
Figure 1: Δm(τ, ω) = 1 only if resource ω is allocated to
agent m at time τ. The meaning of the activity
indicators θ is illustrated in Figure 2b: when agent m is in either
the start state sb
or the finish state sf
, the corresponding
θm = 0, but once the agent becomes active and enters one
of the other states, we set θm = 1 . This meaning of θ can be
enforced with a linear constraint that synchronizes the
values of the agents" occupation measures xm and the activity
indicators θ, as shown in (6) in Table 1.
Another constraint we have to add, because the activity indicators θ are defined on the global timeline τ, is to enforce the fact that the agent is inactive outside of its arrival-departure window. This is accomplished by constraint (7) in
Table 1.
Furthermore, agents should not be using resources while
they are inactive. This constraint can also be enforced via a
linear inequality on θ and Δ, as shown in (8).
Constraint (6) sets the value of θ to match the policy
defined by the occupation measure xm. In a similar fashion,
we have to make sure that the resource-usage variables Δ are
also synchronized with the occupation measure xm. This is
done via constraint (9) in Table 1, which is nearly identical
to the analogous constraint from [6].
After implementing the above constraint, which enforces
the meaning of Δ, we add a constraint that ensures that the
agents' resource usage never exceeds the amounts of
available resources. This condition is also trivially expressed as
a linear inequality (10) in Table 1.
Finally, for the problem formulation where resource
assignments are static during a lifetime of an agent, we add a
constraint that ensures that the resource-usage variables Δ
do not change their value while the agent is active (θ = 1).
This is accomplished via the linear constraint (11), where
Z ≥ 2 is a constant that is used to turn off the constraints
when θm(τ) = 0 or θm(τ + 1) = 0. This constraint is not
used for the dynamic problem formulation, where resources
can be reallocated between agents at every time step.
To summarize, Table 1 together with the
conservation-of-flow constraints from (12) defines the MILP that
simultaneously computes an optimal resource assignment for all
agents across time as well as optimal finite-horizon MDP
policies that are valid under that resource assignment.
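To make the encoding concrete, the sketch below writes down the objective (5) and constraints (7), (8) and (10) with the open-source PuLP modelling library; the flow-conservation constraints and constraints (6), (9) and (11) are omitted for brevity. The data containers mdps, tau_hat, resources and caps are hypothetical stand-ins for M, τ̂, Ω and ϕ̂; this is a sketch of the formulation, not the authors' implementation.

# Illustrative only: "mdps" maps each agent m to a dict with its states 'S',
# actions 'A', rewards 'R' ((s, a) -> reward), arrival time 'arrive' and
# departure time 'depart'; "caps" plays the role of phi_hat.  These names are
# assumptions, not the paper's.
import pulp

def build_milp(mdps, tau_hat, resources, caps):
    prob = pulp.LpProblem("resource_scheduling", pulp.LpMaximize)
    x, theta, delta = {}, {}, {}
    for m, mdp in mdps.items():
        T_m = mdp['depart'] - mdp['arrive'] + 1
        for s in mdp['S']:
            for a in mdp['A']:
                for t in range(1, T_m + 1):
                    x[m, s, a, t] = pulp.LpVariable(f"x_{m}_{s}_{a}_{t}", lowBound=0)
        for tau in range(tau_hat + 1):
            theta[m, tau] = pulp.LpVariable(f"theta_{m}_{tau}", cat="Binary")
            for w in resources:
                delta[m, tau, w] = pulp.LpVariable(f"d_{m}_{tau}_{w}", cat="Binary")

    # Objective (5): sum of expected rewards over all agents.
    prob += pulp.lpSum(mdp['R'].get((s, a), 0) * x[m, s, a, t]
                       for m, mdp in mdps.items()
                       for s in mdp['S'] for a in mdp['A']
                       for t in range(1, mdp['depart'] - mdp['arrive'] + 2))

    for m, mdp in mdps.items():
        for tau in range(tau_hat + 1):
            # (7): the agent is inactive outside its arrival-departure window.
            if not (mdp['arrive'] < tau < mdp['depart']):
                prob += theta[m, tau] == 0
            # (8): no resource usage while inactive.
            for w in resources:
                prob += delta[m, tau, w] <= theta[m, tau]

    # (10): global resource bounds at every time step.
    for w in resources:
        for tau in range(tau_hat + 1):
            prob += pulp.lpSum(delta[m, tau, w] for m in mdps) <= caps[w]
    return prob

Calling prob.solve() then hands the model to whichever MILP solver PuLP is configured with.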
As a rough measure of the complexity of this MILP, let us consider the number of optimization variables and constraints. Let TM = Σm Tm = Σm (τd_m − τa_m + 1) be the sum of the lengths of the arrival-departure windows across all agents. Then, the number of optimization variables is TM + τ̂|M||Ω| + τ̂|M|, of which TM are continuous (the xm) and τ̂|M||Ω| + τ̂|M| are binary (the Δ and θ). However, notice that all but TM|M| of the θ are set to zero by constraint (7), which also immediately forces all but TM|M||Ω| of the Δ to be zero via the constraints (8). The number of constraints (not including the degenerate constraints in (7)) in the MILP is TM + TM|Ω| + τ̂|Ω| + τ̂|M||Ω|.
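As a purely illustrative example (the parameter values are chosen for concreteness and are not taken from the experiments): for |M| = 5 agents, |Ω| = 5 resource types, τ̂ = 50, and Tm = 20 for every agent (so TM = 100), this amounts to 100 + 1250 + 250 = 1600 variables, of which 1500 are binary, and 100 + 500 + 250 + 1250 = 2100 constraints.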
Despite the fact that the complexity of the MILP is, in the worst case, exponential¹ in the number of binary variables, the complexity of this MILP is significantly (exponentially) lower than that of the MILP with flat utility functions, described in Section 2.2. This result echoes the efficiency gains
reported in [6] for single-shot resource-allocation problems,
but is much more pronounced, because of the explosion of
the flat utility representation due to the temporal aspect of
the problem (recall the prohibitive complexity of the
combinatorial optimization in Section 2.2). We empirically analyze
the performance of this method in Section 5.
¹ Strictly speaking, solving MILPs to optimality is NP-complete in the number of integer variables.
5. EXPERIMENTAL RESULTS
Although the complexity of solving MILPs is in the worst
case exponential in the number of integer variables, there
are many efficient methods for solving MILPs that allow
our algorithm to scale well for parameters common to
resource allocation and scheduling problems. In particular,
this section introduces a problem domain, the repair-shop problem, which we use to empirically evaluate our algorithm's scalability in terms of the number of agents |M|, the number of shared resources |Ω|, and the varied lengths of global time τ̂ during which agents may enter and exit the system.
The repair-shop problem is a simple parameterized MDP
adopting the metaphor of a vehicular repair shop. Agents
in the repair shop are mechanics with a number of
independent tasks that yield reward only when completed. In our
MDP model of this system, actions taken to advance through
the state space are only allowed if the agent holds certain
resources that are publicly available to the shop. These
resources are in finite supply, and optimal policies for the shop
will determine when each agent may hold the limited
resources to take actions and earn individual rewards. Each
task to be completed is associated with a single action,
although the agent is required to repeat the action numerous
times before completing the task and earning a reward.
This model was parameterized in terms of the number
of agents in the system, the number of different types of
resources that could be linked to necessary actions, a global
time during which agents are allowed to arrive and depart,
and a maximum length for the number of time steps an agent
may remain in the system.
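The paper does not spell out the exact parameterization, so the following sketch is an assumption-laden illustration only of how such an instance could be generated: each mechanic has a single task that takes several applications of one resource-gated action, and reward is earned only on the action that completes the task.

import random

def make_mechanic_mdp(num_steps=5, reward=100, resource="lift", p_success=0.8):
    # States 0..num_steps track progress on the mechanic's single task;
    # state num_steps means the task is complete.  The one work action
    # requires the shared resource and advances progress with probability
    # p_success; reward is earned on the action that completes the task.
    states = list(range(num_steps + 1))
    actions = ["work"]
    req = {("work", resource): 1}          # stands in for phi_m(a, omega)
    P, R = {}, {}
    for s in range(num_steps):
        P[s, "work"] = {s + 1: p_success, s: 1.0 - p_success}
        R[s, "work"] = reward if s == num_steps - 1 else 0
    return {"S": states, "A": actions, "P": P, "R": R, "req": req}

def make_repair_shop(num_agents, num_resources, global_time, max_stay, seed=0):
    # global_time >= 1 and max_stay >= 2 are assumed.
    rng = random.Random(seed)
    shop = {}
    for m in range(num_agents):
        arrive = rng.randint(0, global_time - 1)
        depart = min(global_time, arrive + rng.randint(2, max_stay))
        mdp = make_mechanic_mdp(resource=f"tool{rng.randrange(num_resources)}")
        mdp.update(arrive=arrive, depart=depart)
        shop[m] = mdp
    return shop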
All datapoints in our experiments were obtained with 20 evaluations using CPLEX to solve the MILPs on a Pentium 4 computer with 2 GB of RAM. Trials were conducted on both the static and the dynamic version of the resource-scheduling problem, as defined earlier.
Figure 3 shows the runtime and policy value for
independent modifications to the parameter set. The top row
shows how the solution time for the MILP scales as we
increase the number of agents |M|, the global time horizon τ̂,
and the number of resources |Ω|. Increasing the number of
agents leads to exponential complexity scaling, which is to
be expected for an NP-complete problem. However,
increasing the global time limit τ̂ or the total number of resource types |Ω| (while holding the number of agents constant) does not lead to decreased performance. This occurs because
the problems get easier as they become under-constrained,
which is also a common phenomenon for NP-complete
problems. We also observe that the solution to the dynamic
version of the problem can often be computed much faster than
the static version.
The bottom row of Figure 3 shows the joint policy value
of the policies that correspond to the computed optimal
resource-allocation schedules. We can observe that the
dynamic version yields higher reward (as expected, since the
reward for the dynamic version is always no less than the
reward of the static version). We should point out that these
graphs should not be viewed as a measure of performance of
two different algorithms (both algorithms produce optimal
solutions but to different problems), but rather as
observations about how the quality of optimal solutions changes as
more flexibility is allowed in the reallocation of resources.
Figure 4 shows runtime and policy value for trials in which
common input variables are scaled together. This allows
[Figure 3 appears here: six panels plotting CPU time (seconds, log scale) and policy value against the number of agents |M| (with |Ω| = 5, τ̂ = 50), the global time boundary τ̂ (with |M| = 5, |Ω| = 5), and the number of resources |Ω| (with |M| = 5, τ̂ = 50), for the static and dynamic variants.]
Figure 3: Evaluation of our MILP for variable numbers of agents (column 1), lengths of global-time window (column
2), and numbers of resource types (column 3). Top row shows CPU time, and bottom row shows the joint reward of
agents' MDP policies. Error bars show the 1st and 3rd quartiles (25% and 75%).
[Figure 4 appears here: six panels plotting CPU time (seconds, log scale) and policy value against the number of agents |M| for the correlated settings τ̂ = 10|M|, |Ω| = 2|M|, and |Ω| = 5|M|, for the static and dynamic variants.]
Figure 4: Evaluation of our MILP using correlated input variables. The left column tracks the performance and CPU
time as the number of agents and global-time window increase together (τ̂ = 10|M|). The middle and the right column
track the performance and CPU time as the number of resources and the number of agents increase together as
|Ω| = 2|M| and |Ω| = 5|M|, respectively. Error bars show the 1st and 3rd quartiles (25% and 75%).
us to explore domains where the total number of agents
scales proportionally to the total number of resource types
or the global time horizon, while keeping constant the
average agent density (per unit of global time) or the average
number of resources per agent (which commonly occurs in
real-life applications).
Overall, we believe that these experimental results
indicate that our MILP formulation can be used to effectively
solve resource-scheduling problems of nontrivial size.
6. DISCUSSION AND CONCLUSIONS
Throughout the paper, we have made a number of
assumptions in our model and solution algorithm; we discuss
their implications below.
• Continual execution. We assume that once an agent
stops executing its MDP (transitions into state sf
), it
exits the system and cannot return. It is easy to relax
this assumption for domains where agents' MDPs can be
paused and restarted. All that is required is to include an
additional pause action which transitions from a given
state back to itself, and has zero reward.
• Indifference to start time. We used a reward model
where agents' rewards depend only on the time horizon
of their MDPs and not the global start time. This is a
consequence of our MDP-augmentation procedure from
Section 4.1. It is easy to extend the model so that the
agents incur an explicit penalty for idling by assigning a
non-zero negative reward to the start state sb.
• Binary resource requirements. For simplicity, we have
assumed that resource costs are binary: ϕm(a, ω) ∈ {0, 1},
but our results generalize in a straightforward manner to
non-binary resource mappings, analogously to the
procedure used in [5].
• Cooperative agents. The optimization procedure
discussed in this paper was developed in the context of
cooperative agents, but it can also be used to design a
mechanism for scheduling resources among self-interested agents.
This optimization procedure can be embedded in a
Vickrey-Clarke-Groves auction, completely analogously to the way
it was done in [7]. In fact, all the results of [7] about the
properties of the auction and information privacy directly
carry over to the scheduling domain discussed in this
paper, requiring only slight modifications to deal with
finite-horizon MDPs.
• Known, deterministic arrival and departure times.
Finally, we have assumed that agents" arrival and
departure times (τa
m and τd
m) are deterministic and known a
priori. This assumption is fundamental to our solution
method. While there are many domains where this
assumption is valid, in many cases agents arrive and
depart dynamically and their arrival and departure times
can only be predicted probabilistically, leading to online
resource-allocation problems. In particular, in the case of
self-interested agents, this becomes an interesting version
of an online-mechanism-design problem [11, 12].
In summary, we have presented an MILP formulation for
the combinatorial resource-scheduling problem where agents' values for possible resource assignments are defined by finite-horizon MDPs. This result extends previous work ([6, 7])
on static one-shot resource allocation under MDP-induced
preferences to resource-scheduling problems with a temporal
aspect. As such, this work takes a step in the direction of
designing an online mechanism for agents with combinatorial
resource preferences induced by stochastic planning
problems. Relaxing the assumption about deterministic arrival
and departure times of the agents is a focus of our future
work.
We would like to thank the anonymous reviewers for their
insightful comments and suggestions.
7. REFERENCES
[1] E. Altman and A. Shwartz. Adaptive control of
constrained Markov chains: Criteria and policies.
Annals of Operations Research, special issue on
Markov Decision Processes, 28:101-134, 1991.
[2] R. Bellman. Dynamic Programming. Princeton
University Press, 1957.
[3] C. Boutilier. Solving concisely expressed combinatorial
auction problems. In Proc. of AAAI-02, pages
359-366, 2002.
[4] C. Boutilier and H. H. Hoos. Bidding languages for
combinatorial auctions. In Proc. of IJCAI-01, pages
1211-1217, 2001.
[5] D. Dolgov. Integrated Resource Allocation and
Planning in Stochastic Multiagent Environments. PhD
thesis, Computer Science Department, University of
Michigan, February 2006.
[6] D. A. Dolgov and E. H. Durfee. Optimal resource
allocation and policy formulation in loosely-coupled
Markov decision processes. In Proc. of ICAPS-04,
pages 315-324, June 2004.
[7] D. A. Dolgov and E. H. Durfee. Computationally
efficient combinatorial auctions for resource allocation
in weakly-coupled MDPs. In Proc. of AAMAS-05,
New York, NY, USA, 2005. ACM Press.
[8] D. A. Dolgov and E. H. Durfee. Resource allocation
among agents with preferences induced by factored
MDPs. In Proc. of AAMAS-06, 2006.
[9] K. Larson and T. Sandholm. Mechanism design and
deliberative agents. In Proc. of AAMAS-05, pages
650-656, New York, NY, USA, 2005. ACM Press.
[10] N. Nisan. Bidding and allocation in combinatorial
auctions. In Electronic Commerce, 2000.
[11] D. C. Parkes and S. Singh. An MDP-based approach
to Online Mechanism Design. In Proc. of the
Seventeenth Annual Conference on Neural
Information Processing Systems (NIPS-03), 2003.
[12] D. C. Parkes, S. Singh, and D. Yanovsky.
Approximately efficient online mechanism design. In
Proc. of the Eighteenth Annual Conference on Neural
Information Processing Systems (NIPS-04), 2004.
[13] M. L. Puterman. Markov Decision Processes. John
Wiley & Sons, New York, 1994.
[14] M. H. Rothkopf, A. Pekec, and R. M. Harstad.
Computationally manageable combinational auctions.
Management Science, 44(8):1131-1147, 1998.
[15] T. Sandholm. An algorithm for optimal winner
determination in combinatorial auctions. In Proc. of
IJCAI-99, pages 542-547, San Francisco, CA, USA,
1999. Morgan Kaufmann Publishers Inc.
The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1227 | multiagent system;multiagent plan;resource-scheduling;optimal allocation;markov decision process;utility function;resource;resource allocation;scheduling;discrete-time scheduling problem;resource-scheduling algorithm;task and resource allocation in agent system;optimal resource scheduling;optimization problem;combinatorial resource scheduling |
train_I-64 | Organizational Self-Design in Semi-dynamic Environments | Organizations are an important basis for coordination in multiagent systems. However, there is no best way to organize and all ways of organizing are not equally effective. Attempting to optimize an organizational structure depends strongly on environmental features including problem characteristics, available resources, and agent capabilities. If the environment is dynamic, the environmental conditions or the problem task structure may change over time. This precludes the use of static, design-time generated, organizational structures in such systems. On the other hand, for many real environments, the problems are not totally unique either: certain characteristics and conditions change slowly, if at all, and these can have an important effect in creating stable organizational structures. Organizational-Self Design (OSD) has been proposed as an approach for constructing suitable organizational structures at runtime. We extend the existing OSD approach to include worthoriented domains, model other resources in addition to only processor resources and build in robustness into the organization. We then evaluate our approach against the contract-net approach and show that our OSD agents perform better, are more efficient, and more flexible to changes in the environment. | 1. INTRODUCTION
In this paper, we are primarily interested in the organizational
design of a multiagent system - the roles enacted by the agents,
the coordination between the roles and the number and assignment
of roles and resources to the individual agents. The organizational
design is complicated by the fact that there is no best way to
organize and all ways of organizing are not equally effective [2].
Instead, the optimal organizational structure depends both on the
problem at hand and the environmental conditions under which the
problem needs to be solved. The environmental conditions may
not be known a priori, or may change over time, which would
preclude the use of a static organizational structure. On the other hand,
all problem instances and environmental conditions are not always
unique, which would render inefficient the use of a new, bespoke
organizational structure for every problem instance.
Organizational Self-Design (OSD) [4, 10] has been proposed
as an approach to designing organizations at run-time in which
the agents are responsible for generating their own organizational
structures. We believe that OSD is especially suited to the above
scenario in which the environment is semi-dynamic as the agents
can adapt to changes in the task structures and environmental
conditions, while still being able to generate relatively stable
organizational structures that exploit the common characteristics across
problem instances.
In our approach (as in [10]), we define two operators for OSD, agent spawning and composition: when an agent becomes overloaded, it spawns off a new agent to handle part of its task load/responsibility; when an agent lies idle for an extended period of time, it may decide to compose with another agent.
We use TÆMS as the underlying representation for our
problem solving requests. TÆMS [11] (Task Analysis, Environment
Modeling and Simulation) is a computational framework for
representing and reasoning about complex task environments in which
tasks (problems) are represented using extended hierarchical task
structures [3]. The root node of the task structure represents the
high-level goal that the agent is trying to achieve. The sub-nodes of
a node represent the subtasks and methods that make up the
high-level task. The leaf nodes are at the lowest level of abstraction and
represent executable methods - the primitive actions that the agents
can perform. The executable methods, themselves, may have
multiple outcomes, with different probabilities and different
characteristics such as quality, cost and duration. TÆMS also allows
various mechanisms for specifying subtask variations and alternatives,
i.e. each node in TÆMS is labeled with a characteristic
accumulation function that describes how many or which subgoals or sets of
subgoals need to be achieved in order to achieve a particular
higher-level goal. TÆMS has been used to model many different problem-solving environments including distributed sensor networks,
information gathering, hospital scheduling, EMS, and military planning [5, 6, 3, 15].
The main contributions of this paper are as follows:
1. We extend existing OSD approaches to use TÆMS as the
underlying problem representation, which allows us to model
and use OSD for worth-oriented domains. This in turn
allows us to reason about (1) alternative task and role
assignments that make different quality/cost tradeoffs and generate
different organizational structures and (2) uncertainties in the
execution of tasks.
2. We model the use of resources other than only processor
resources.
3. We incorporate robustness into the organizational structures.
2. RELATED WORK
The concept of OSD is not new and has been around since
the work of Corkill and Lesser on the DVMT system [4], even
though the concept was not fully developed by them. More
recently Dignum et al. [8] have described OSD in the context of the
reorganization of agent societies and attempt to classify the various
kinds of reorganization possible according to the reason for
reorganization, the type of reorganization and who is responsible for
the reorganization decision. According to their scheme, the type of
reorganization done by our agents falls into the category of
structural changes and the reorganization decision can be described as
shared command.
Our research primarily builds on the work done by Gasser and
Ishida [10], in which they use OSD in the context of a
production system in order to perform adaptive work allocation and load
balancing. In their approach, they define two organizational
primitives - composition and decomposition, which are similar to our
organizational primitives for agent spawning and composition. The
main difference between their work and our work is that we use
TÆMS as the underlying representation for our problems, which
allows, firstly, the representation of a larger, more general class of
problems and, secondly, quantitative reasoning over task structures.
The latter also allows us to incorporate different design-to-criteria
schedulers [16].
Horling and Lesser [9] present a different, top-down approach to
OSD that also uses TÆMS as the underlying representation.
However, their approach assumes a fixed number of agents with
designated (and fixed) roles. OSD is used in their work to change the
interaction patterns between the agents and results in the agents
using different subtasks or different resources to achieve their goals.
We also extend the work done by Sycara et al. [13] on Agent Cloning, which is another approach to resource allocation and load
balancing. In this approach, the authors present agent cloning as
a possible response to agent overload - if an agent detects that it
is overloaded and that there are spare (unused) resources in the
system, the agent clones itself and gives its clone some part of its
task load. Hence, agent cloning can be thought of as akin to agent
spawning in our approach. However, the two approaches are
different in that there is no specialization of the agents in the former: the cloned agents are perfect replicas of the original agents and
fulfill the same roles and responsibilities as the original agents. In our
approach, on the other hand, the spawned agents are specialized on
a subpart of the spawning agent"s task structure, which is no longer
the responsibility of the spawning agent. Hence, our approach also
deals with explicit organization formation and the coordination of
the agents" tasks which are not handled by their approach.
Other approaches to OSD include the work of So and Durfee
[14], who describe a top-down model of OSD in the context of
Cooperative Distributive Problem Solving (CDPS) and Barber and
Martin [1], who describe an adaptive decision making framework
in which agents are able to reorganize decision-making groups by
dynamically changing (1) who makes the decisions for a particular
goal and (2) who must carry out these decisions. The latter work is
primarily concerned with coordination decisions and can be used
to complement our OSD work, which primarily deals with task and
resource allocation.
3. TASK AND RESOURCE MODEL
To ground our discussion of OSD, we now formally describe
our task and resource model. In our model, the primary input to
the multi-agent system (MAS) is an ordered set of problem
solving requests or task instances, < P1, P2, P3, ..., Pn >, where each
problem solving request, Pi, can be represented using the tuple
< ti, ai, di >. In this scheme, ti is the underlying TÆMS task
structure, ai ∈ N+
is the arrival time and di ∈ N+
is the deadline
of the ith
task instance1
. The MAS has no prior knowledge about
the task ti before the arrival time, ai. In order for the MAS to
accrue quality, the task ti must be completed before the deadline,
di.
Furthermore, every underlying task structure, ti, can be
represented using the tuple < T, τ, M, Q, E, R, ρ, C >, where:
• T is the set of tasks. The tasks are non-leaf nodes in a
TÆMS task structure and are used to denote goals that the
agents must achieve. Tasks have a characteristic
accumulation function (see below) and are themselves composed of
other subtasks and/or methods that need to be achieved in
order to achieve the goal represented by that task. Formally,
each task Tj can be represented using the pair (qj, sj), where
qj ∈ Q and sj ⊂ (T ∪ M). For our convenience, we
define two functions SUBTASKS(Task) : T → P(T ∪ M) and
SUPERTASKS(TÆMS node) : T ∪ M → P(T), that return
the subtasks and supertasks of a TÆMS node respectively².
• τ ∈ T, is the root of the task structure, i.e. the highest level
goal that the organization is trying to achieve. The quality
accrued on a problem is equal to the quality of task τ.
• M is the set of executable methods, i.e., M =
{m1, m2, ..., mn}, where each method, mk,
is represented using the outcome distribution,
{(o1, p1), (o2, p2), ..., (om, pm)}. In the pair (ol, pl),
ol is an outcome and pl is the probability that executing mk
will result in the outcome ol. Furthermore, each outcome,
ol is represented using the triple (ql, cl, dl), where ql is the
quality distribution, cl is the cost distribution and dl is the
duration distribution of outcome ol. Each discrete
distribution is itself a set of pairs, {(n1, p1), (n2, p2), ..., (nn, pn)}, where pi ∈ ℝ+ is the probability that the outcome will have a quality/cost/duration of ni ∈ N, depending on the type of distribution, and Σi pi = 1.
• Q is the set of quality/characteristic accumulation functions
(CAFs). The CAFs determine how a task group accrues
quality given the quality accrued by its subtasks/methods. For
our research, we use four CAFs: MIN, MAX, SUM and
EXACTLY ONE. See [5] for formal definitions.
• E is the set of (non-local) effects. Again, see [5] for formal
definitions.
• R is the set of resources.
• ρ is a mapping from an executable method and resource to
the quantity of that resource needed (by an agent) to
schedule/execute that method. That is ρ(method, resource) :
M × R → N.
¹ N is the set of natural numbers including zero and N+ is the set of positive natural numbers excluding zero.
² P is the power set of a set, i.e., the set of all subsets of a set.
• C is a mapping from a resource to the cost of that resource,
that is, C(resource) : R → N+.
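For concreteness, the elements of this tuple can be sketched with simple Python containers; this is an illustration only (the class and field names are our assumptions, and the non-local effects E and the CAF semantics are omitted):

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Distribution = List[Tuple[int, float]]     # [(value, probability), ...]

@dataclass
class Outcome:
    quality: Distribution
    cost: Distribution
    duration: Distribution

@dataclass
class Method:                              # executable leaf node
    name: str
    outcomes: List[Tuple[Outcome, float]]  # [(outcome, outcome probability), ...]
    resource_needs: Dict[str, int] = field(default_factory=dict)   # rho(m, r)

@dataclass
class Task:                                # non-leaf node with a CAF
    name: str
    caf: str                               # "MIN", "MAX", "SUM" or "EXACTLY_ONE"
    subtasks: List[object] = field(default_factory=list)  # Task or Method children

@dataclass
class TaskStructure:
    root: Task
    resource_costs: Dict[str, int] = field(default_factory=dict)   # C(r)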
We also make the following set of assumptions in our research:
1. The agents in the MAS are drawn from the infinite set A =
{a1, a2, a3, ...}. That is, we do not assume a fixed set of
agents - instead agents are created (spawned) and destroyed
(combined) as needed.
2. All problem solving requests have the same underlying task
structure, i.e., ∃t ∀i : ti = t, where t is the task structure of
the problem that the MAS is trying to solve. We believe that
this assumption holds for many of the practical problems that
we have in mind because TÆMS task structures are
basically high-level plans for achieving some goal in which the
steps required for achieving the goal (as well as the possible contingency situations) have been pre-computed offline and
represented in the task structure. Because it represents many
contingencies, alternatives, uncertain characteristics and
runtime flexible choices, the same underlying task structure
can play out very differently across specific instances.
3. All resources are exclusive, i.e., only one agent may use a
resource at any given time. Furthermore, we assume that
each agent has to own the set of resources that it
needs, even though the resource ownership can change during the
evolution of the organization.
4. All resources are non-consumable.
4. ORGANIZATIONAL SELF DESIGN
4.1 Agent Roles and Relationships
The organizational structure is primarily composed of roles and
the relationships between the roles. One or more agents may enact
a particular role and one or more roles must be enacted by every
agent. The roles may be thought of as the parts played by the agents
enacting the roles in the solution to the problem and reflect the
long-term commitments made by the agents in question to a certain
course of action (that includes task responsibility, authority, and
mechanisms for coordination). The relationships between the roles
are the coordination relationships that exist between the subparts of
a problem.
In our approach, the organizational design is directly contingent
on the task structure and the environmental conditions under which
the problems need to be solved. We define a role as a TÆMS
subtree rooted at a particular node. Hence, the set (T ∪ M)
encompasses the space of all possible roles. Note, by definition, a role
may consist of one or more other (sub-) roles as a particular TÆMS
node may itself be made up of one or more subtrees. Hence, we will
use the terms role, task node and task interchangeably.
We also differentiate between local and managed (non-local)
roles. Local roles are roles that are the sole responsibility of a
single agent, that is, the agent concerned is responsible for solving all
the subproblems of the tree rooted at that node. For such roles, the
agent concerned can do one or more subtasks, solely at its
discretion and without consultation with any other agent. Managed roles,
on the other hand, must be coordinated between two or more agents
as such roles will have two or more descendent local roles that are
the responsibility of two or more separate agents. Any of the
existing coordination mechanisms (such as GPGP [11]) can be used to
achieve this coordination.
Formally, if the function TYPE(Agent, TÆMS Node) : A×(T ∪
M) → {Local, Managed, Unassigned}, returns the type of the
responsibility of the agent towards the specified role, then
TYPE(a, r) = Local ⇐⇒ ∀ri ∈ SUBTASKS(r) : TYPE(a, ri) = Local
TYPE(a, r) = Managed ⇐⇒ [∃a1 ∃r1 : (r1 ∈ SUBTASKS(r)) ∧ (TYPE(a1, r1) = Managed)] ∨ [∃a2 ∃a3 ∃r2 ∃r3 : (a2 ≠ a3) ∧ (r2 ≠ r3) ∧ (r2 ∈ SUBTASKS(r)) ∧ (r3 ∈ SUBTASKS(r)) ∧ (TYPE(a2, r2) = Local) ∧ (TYPE(a3, r3) = Local)]
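Read operationally, the definition says a node is Local exactly when a single agent owns every executable method beneath it, and Managed otherwise. A minimal sketch (the dictionary-based tree encoding and the owner function are illustrative assumptions, not the paper's data structures) classifies every node in one bottom-up pass:

def classify_roles(node, owner):
    # node: {'name': str, 'children': [...]}; leaves have no children and
    # stand for executable methods; owner(leaf_name) returns the agent
    # assigned to that method.  Returns (set of owning agents, labels dict).
    labels = {}
    def visit(n):
        if not n.get("children"):
            labels[n["name"]] = "Local"
            return {owner(n["name"])}
        owners = set()
        for child in n["children"]:
            owners |= visit(child)
        labels[n["name"]] = "Local" if len(owners) == 1 else "Managed"
        return owners
    all_owners = visit(node)
    return all_owners, labels

For example, if two methods under a SUM task are assigned to different agents, that task and all of its ancestors come out Managed, which matches the definition above.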
4.2 Organization Formation and Adaptation
To form or adapt their organizational structure, the agents use
two organizational primitives: agent spawning and composition.
These two primitives result in a change in the assignment of roles
to the agents. Agent spawning is the generation of a new agent to
handle a subset of the roles of the spawning agent. Agent
composition, on the other hand, is orthogonal to agent spawning and
involves the merging of two or more agents together - the
combined agent is responsible for enacting all the roles of the agents
being merged.
In order to participate in the formation and adaption of an
organization, the agents need to explicitly represent and reason about
the role assignments. Hence, as a part of its organizational
knowledge, each agent keeps a list of the local roles that it is enacting and
the non-local roles that it is managing. Note that each agent only
has limited organizational knowledge and is individually
responsible for spawning off or combining with another agent, as needed,
based on its estimate of its performance so far.
To see how the organizational primitives work, we first describe
four rules that can be thought of as the organizational invariants
which will always hold before and after any organizational change:
1. For a local role, all the descendent nodes of that role will be
local.
TYPE(a, r) = Local =⇒
∀ri∈SUBTASKS(r)TYPE(a, ri) = Local
2. Similarly, for a managed (non-local) role, all the ascendent
nodes of that role will be managed.
TYPE(a, r) = Managed =⇒
∀ri∈SUPERTASKS(r)∃ai(ai ∈ A) ∧ (TYPE(ai, ri) = Managed)
3. If two local roles that are enacted by two different agents
share a common ancestor, that ancestor will be a managed
role.
(TYPE(a1, r1) = Local) ∧ (TYPE(a2, r2) = Local) ∧ (a1 ≠ a2) ∧ (r1 ≠ r2) =⇒ ∀ri ∈ (SUPERTASKS(r1) ∩ SUPERTASKS(r2)) ∃ai : (ai ∈ A) ∧ (TYPE(ai, ri) = Managed)
4. If all the direct descendants of a role are local and the sole
responsibility of a single agent, that role will be a local role.
∃a∃r∀ri∈SUBTASKS(r)(a ∈ A) ∧ (r ∈ (T ∪ M))∧
(TYPE(a, ri) = Local) =⇒
(TYPE(a, r) = Local)
When a new agent is spawned, the agent doing the spawning will
assign one or more of its local roles to the newly spawned agent
(Algorithm 1). To preserve invariant rules 2 and 3, the spawning
agent will change the type of all the ascendent roles of the nodes
assigned to the newly spawned agent from local to managed. Note
that the spawning agent is only changing its local organizational
knowledge and not the global organizational knowledge. At the
same time, the spawning agent is taking on the task of managing
the previously local roles. Similarly, the newly spawned agent will
only know of its just assigned local roles.
When an agent (the composing agent) decides to compose with
another agent (the composed agent), the organizational knowledge
of the composing agent is merged with the organizational
knowledge of the composed agent. To do this, the composed agent takes
on the roles of all the local and managed tasks of the composing
agent. Care is taken to preserve the organizational invariant rules 1
and 4.
Algorithm 1 SpawnAgent(SpawningAgent) : A → A
1: LocalRoles ← {r ∈ (T ∪ M) | TYPE(SpawningAgent, r) = Local}
2: NewAgent ← CREATENEWAGENT()
3: NewAgentRoles ← FINDROLESFORSPAWNEDAGENT
(LocalRoles)
4: for role in NewAgentRoles do
5: TYPE(NewAgent, role) ← Local
6: TYPE(SpawningAgent, role) ← Unassigned
7: PRESERVEORGANIZATIONALINVARIANTS()
8: return NewAgent
Algorithm 2 FINDROLESFORSPAWNEDAGENT
(SpawningAgentRoles) : (T ∪ M) → (T ∪ M)
1: R ← SpawningAgentRoles
2: selectedRoles ← nil
3: for roleSet in [P(R) − {∅, R}] do
4: if COST(R, roleSet) < COST(R, selectedRoles) then
5: selectedRoles ← roleSet
6: return selectedRoles
Algorithm 3 GETRESOURCECOST(Roles) : (T ∪ M) → N
1: M ← (Roles ∩ M)
2: cost ← 0
3: for resource in R do
4: maxResourceUsage ← 0
5: for method in M do
6: if ρ(method, resource) > maxResourceUsage then
7: maxResourceUsage ← ρ(method, resource)
8: cost ← cost +
[C(resource) × maxResourceUsage]
9: return cost
4.2.1 Role allocation during spawning
One of the key questions that the agent doing the spawning needs
to answer is - which of its local-roles should it assign to the newly
spawned agent and which of its local roles should it keep to
itself? The onus of answering this question falls on the
FINDROLESFORSPAWNEDAGENT() function, shown in Algorithm 2 above. This
function takes the set of local roles that are the responsibility of the
spawning agent and returns a subset of those roles for allocation
to the newly spawned agent. This subset is selected based on the
results of a cost function as is evident from line 4 of the algorithm.
Since the use of different cost functions will result in different
organizational structures and since we have no a priori reason to believe
that one cost function will out-perform the other, we evaluated the
performance of three different cost functions based on the
following three different heuristics:
Algorithm 4 GETEXPECTEDDURATION(Roles) : (T ∪ M) → N+
1: M ← (Roles ∩ M)
2: exptDuration ← 0
3: for [outcome = <(q, c, d), outcomeProb>] in M do
4: exptOutcomeDuration ← 0
5: for (n,p) in d do
6: exptOutcomeDuration ← exptOutcomeDuration + (n × p)
7: exptDuration ← exptDuration +
[exptOutcomeDuration × outcomeProb]
8: return exptDuration
Allocating top-most roles first: This heuristic always breaks
up at the top-most nodes first. That is, if the nodes of a task
structure were numbered, starting from the root, in a breadth-first
fashion, then this heuristic would select the local-role of the spawning
agent that had the lowest number and breakup that node (by
allocating one of its subtasks to the newly spawned agent). We
selected this heuristic because (a) it is the simplest to implement, (b)
fastest to run (the role allocation can be done in constant time
without the need of a search through the task structure) and (c) it makes
sense from a human-organizational perspective as this heuristic
corresponds to dividing an organization along functional lines.
Minimizing total resources: This heuristic attempts to
minimize the total cost of the resources needed by the agents in the
organization to execute their roles. If R is the set of local roles of the spawning agent and R′ is the subset of roles being evaluated for allocation to the newly spawned agent, the cost function for this heuristic is given by: COST(R, R′) ← GETRESOURCECOST(R − R′) + GETRESOURCECOST(R′).
Balancing execution time: This heuristic attempts to allocate
roles in a way that tries to ensure that each agent has an equal
amount of work to do. For each potential role allocation, this
heuristic works by calculating the absolute value of the difference
between the expected duration of its own roles after spawning and
the expected duration of the roles of the newly spawned agent.
If this difference is close to zero, then both agents have roughly the same amount of work to do. Formally, if R is the set of local roles of the spawning agent and R′ is the subset of roles being evaluated for allocation to the newly spawned agent, then the cost function for this heuristic is given by: COST(R, R′) ← |GETEXPECTEDDURATION(R − R′) − GETEXPECTEDDURATION(R′)|.
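The two quantitative cost functions can be sketched as follows; this is an illustration under assumed data structures (each method is a dictionary with a 'needs' map standing in for ρ and an 'outcomes' list of (duration distribution, outcome probability) pairs), not the authors' code.

def resource_cost(methods, resource_costs):
    # Cost of the resources an agent needs for this set of methods: for each
    # resource, the largest amount any single method requires (Algorithm 3).
    cost = 0
    for r, unit_cost in resource_costs.items():
        max_usage = max((m["needs"].get(r, 0) for m in methods), default=0)
        cost += unit_cost * max_usage
    return cost

def expected_duration(methods):
    # Expected execution time of a set of methods, weighting each outcome's
    # expected duration by the probability of that outcome (Algorithm 4).
    total = 0.0
    for m in methods:
        for duration_dist, p_outcome in m["outcomes"]:
            total += p_outcome * sum(n * p for n, p in duration_dist)
    return total

def cost_min_resources(keep, give, resource_costs):
    # "Minimizing total resources" heuristic.
    return resource_cost(keep, resource_costs) + resource_cost(give, resource_costs)

def cost_balance_time(keep, give):
    # "Balancing execution time" heuristic.
    return abs(expected_duration(keep) - expected_duration(give))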
To evaluate these heuristics, we ran a series of experiments that
tested the performance of the resultant organization on randomly
generated task structures. The results are given in Section 6.
4.3 Reasons for Organizational Change
As organizational change is expensive (requiring clock cycles,
allocation/deallocation of resources, etc.) we want a stable
organizational structure that is suited to the task and environmental
conditions at hand. Hence, we wish to change the organizational
structure only if the task structure and/or environmental conditions
change. Also to allow temporary changes to the environmental
conditions to be overlooked, we want the probability of an
organizational change to be inversely proportional to the time since the last
organizational change. If this time is relatively short, the agents are
still adjusting to the changes in the environment - hence the
probability of an agent initiating an organizational change should be
high. Similarly, if the time since the last organizational change is
relatively large, we wish to have a low probability of organizational
change.
To allow this variation in probability of organizational change,
we use simulated annealing to determine the probability of keeping an existing organizational structure. This probability is
calculated using the annealing formula p = e^(−ΔE/(kT)), where ΔE is the
amount of overload/underload, T is the time since the last
organizational change and k is a constant. The mechanism of
computing ΔE is different for agent spawning than for agent composition
and is described below. From this formula, if T is large, p, or the
probability of keeping the existing organizational structure is large.
Note that the value of p is capped at a certain threshold in order to
prevent the organization from being too sluggish in its reaction to
environmental change.
To compute if agent spawning is necessary, we use the annealing
equation with ΔE = 1/(α · Slack), where α is a constant and Slack is
the difference between the total time available for completion of
the outstanding tasks and the sum of the expected time required for
completion of each task on the task queue. Also, if the amount of
Slack is negative, immediate agent spawning will occur without use
of the annealing equation.
To calculate if agent composition is necessary, we again use the
simulated annealing equation. However, in this case, ΔE = β ∗
Idle Time, where β is a constant and Idle Time is the amount
of time for which the agent was idle. If the agent has been sitting
idle for a long period of time, ΔE is large, which implies that p,
the probability of keeping the existing organizational structure, is
low.
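A minimal sketch of this decision rule follows; the constants α, β, k and the cap on p are assumed, illustrative values, and the time since the last organizational change is assumed to be at least one cycle.

import math, random

P_CAP = 0.95   # assumed threshold on keeping the current organization

def keep_probability(delta_e, time_since_change, k=1.0):
    # p = exp(-dE / (k * T)), capped so the organization is never completely
    # unresponsive to environmental change.
    return min(P_CAP, math.exp(-delta_e / (k * time_since_change)))

def should_spawn(slack, time_since_change, alpha=1.0, k=1.0):
    if slack <= 0:                          # overloaded: spawn immediately
        return True
    delta_e = 1.0 / (alpha * slack)
    return random.random() > keep_probability(delta_e, time_since_change, k)

def should_compose(idle_time, time_since_change, beta=1.0, k=1.0):
    delta_e = beta * idle_time
    return random.random() > keep_probability(delta_e, time_since_change, k)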
5. ORGANIZATION AND ROBUSTNESS
There are two approaches commonly used to achieve robustness
in multiagent systems:
1. the Survivalist Approach [12], which involves replicating
domain agents in order to allow the replicas to take over should
the original agents fail; and
2. the Citizen Approach [7], which involves the use of special
monitoring agents (called Sentinel Agents) in order to detect
agent failure and dynamically startup new agents in lieu of
the failed ones.
The advantage of the survivalist approach is that recovery is
relatively fast, since the replicas are pre-existing in the organization
and can take over as soon as a failure is detected. The advantages
of the citizen approach are that it requires fewer resources, little
modification to the existing organizational structure and
coordination protocol and is simpler to implement.
Both of these approaches can be applied to achieve robustness in
our OSD agents and it is not clear which approach would be better.
Rather a thorough empirical evaluation of both approaches would
be required. In this paper, we present the citizen approach as it has
been shown by [7] to have a better performance than the survivalist
approach in the Contract Net protocol, and leave the presentation
and evaluation of the survivalist approach to a future paper.
To implement the citizen approach, we designed special
monitoring agents that periodically poll the domain agents by sending them "are you alive" messages that the agents must respond to. If
an agent fails, it will not respond to such messages - the
monitoring agents can then create a new agent and delegate the
responsibilities of the dead agent to the new agent.
This delegation of responsibilities is non-trivial as the
monitoring agents do not have access to the internal state of the domain
agents, which is itself composed of two components - the
organizational knowledge and the task information. The former consists
of the information about the local and managerial roles of the agent
while the latter is composed of the methods that are being
scheduled and executed and the tasks that have been delegated to other
agents. This state information can only be deduced by monitoring
and recording the messages being sent and received by the domain
agents. For example, in order to deduce the organizational
knowledge, the monitoring agents need to keep a track of the spawn and
compose messages sent by the agents in order to trigger the
spawning and composition operations respectively. The deduction
process is particularly complicated in the case of the task information
as the monitoring agents do not have access to the private
schedules of the domain agents. The details are beyond the scope of this
paper.
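As an illustration of the polling protocol only: the send and spawn_replacement hooks below are assumed stand-ins for the underlying messaging layer and for the state-deduction machinery described above; they are not part of the paper.

import time

def monitor(domain_agents, send, spawn_replacement, poll_interval=5.0, timeout=2.0):
    # domain_agents: dict agent_id -> address; send(address, msg, timeout) is
    # assumed to return the reply or raise TimeoutError; spawn_replacement(id)
    # is assumed to create a new agent and hand it the failed agent's state.
    while True:
        for agent_id, address in list(domain_agents.items()):
            try:
                send(address, {"type": "are-you-alive"}, timeout)
            except TimeoutError:
                domain_agents[agent_id] = spawn_replacement(agent_id)
        time.sleep(poll_interval)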
6. EVALUATION
To evaluate our approach, we ran a series of experiments that
simulated the operation of both the OSD agents and the Contract
Net agents on various task structures with varied arrival rates and
deadlines. At the start of each experiment, a random TÆMS task
structure was generated with a specified depth and branching
factor. During the course of the experiment, a series of task instances
(problems) arrive at the organization and must be completed by the
agents before their specified deadlines.
To directly compare the OSD approach with the Contract Net
approach, each experiment was repeated several times - using OSD
agents on the first run and a different number of Contract Net agents
on each subsequent run. We were careful to use the same task
structure, task arrival times, task deadlines and random numbers for each
of these trials.
We divided the experiments into two groups: experiments in
which the environment was static (fixed task arrival rates and
deadlines) and experiments in which the environment was dynamic
(varying arrival rates and/or deadlines).
The two graphs in Figure 1 show the average performance of the
OSD organization against the Contract Net organizations with 8,
10, 12 and 14 agents. The results shown are the averages of running
40 experiments. 20 of those experiments had a static environment
with a fixed task arrival time of 15 cycles and a deadline window of
20 cycles. The remaining 20 experiments had a varying task arrival
rate - the task arrival rate was changed from 15 cycles to 30 cycles
and back to 15 cycles after every 20 tasks. In all the experiments,
the task structures were randomly generated with a maximum depth
of 4 and a maximum branching factor of 3. The runtime of all the
experiments was 2500 cycles.
We tested several hypotheses relating to the comparative
performance of our OSD approach using the Wilcoxon Matched-Pair
Signed-Rank tests. Matched-Pair signifies that we are comparing
the performance of each system on precisely the same randomized
task set within each separate experiment. The tested hypotheses are:
The OSD organization requires fewer agents to complete an
equal or larger number of tasks when compared to the
Contract Net organization: To test this hypothesis, we tested the
stronger null hypothesis that states that the contract net agents
complete more tasks. This null hypothesis is rejected for all contract
net organizations with less than 14 agents (static: p < 0.0003;
dynamic: p < 0.03). For large contract net organizations, the number
of tasks completed is statistically equivalent to the number
completed by the OSD agents, however the number of agents used by
the OSD organization is smaller: 9.59 agents (in the static case) and
7.38 agents (in the dynamic case) versus 14 contract net agents³.
Thus the original hypothesis, that OSD requires fewer agents to
³ These values should not be construed as an indication of the
scalability of our approach. We have tested our approach on
organizations with more than 300 agents, which is significantly greater than
the number of agents needed for the kind of applications that we
have in mind (i.e. web service choreography, efficient dynamic use
of grid computing, distributed information gathering, etc.).
Figure 1: Graph comparing the average performance of the
OSD organization with the Contract Net organizations (with
8, 10, 12 and 14 agents). The error bars show the standard
deviations.
complete an equal or larger number of tasks, is upheld.
The OSD organizations achieve an equal or greater average
quality than the Contract Net organizations: The null
hypothesis is that the Contract Net agents achieve a greater average quality.
We can reject the null hypothesis for contract net organizations with
less than 12 agents (static: p < 0.01; dynamic: p < 0.05). For
larger contract net organizations, the average quality is statistically
equivalent to that achieved by OSD.
The OSD agents have a lower average response time as
compared to the Contract Net agents: The null hypothesis that OSD
has the same or higher response time is rejected for all contract net
organizations (static: p < 0.0002; dynamic: p < 0.0004).
The OSD agents send less messages than the Contract Net
Agents: The null hypothesis that OSD sends the same or more
messages is rejected for all contract net organizations (p < 0.0003
in all cases except 8 contract net agents in a static environment
where p < 0.02)
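For reference, a matched-pair test of this kind could be run with SciPy as sketched below; this is an illustration, not the authors' analysis code, and osd_tasks and cnet_tasks are assumed per-experiment task-completion counts paired by experiment.

from scipy.stats import wilcoxon

def reject_null(osd_tasks, cnet_tasks, p_threshold=0.05):
    # Null hypothesis: the Contract Net agents complete at least as many
    # tasks as the OSD agents on the matched experiments.
    stat, p_value = wilcoxon(cnet_tasks, osd_tasks, alternative="less")
    return p_value < p_threshold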
Hence, as demonstrated by the above tests, our agents perform
better than the contract net agents as they complete a larger number
of tasks, achieve a greater quality and also have a lower response
time and communication overhead. These results make intuitive
sense given our goals for the OSD approach. We expected the OSD
organizations to have a faster average response time and to send
less messages because the agents in the OSD organization are not
wasting time and messages sending bid requests and replying to
bids. The quality gained on the tasks is directly dependent on the
Criteria/Heuristic BET TF MR Rand
Number of Agents 572 567 100 139
No-Org-Changes 641 51 5 177
Total-Messages-Sent 586 499 13 11
Resource-Cost 346 418 337 66
Tasks-Completed 427 560 154 166
Average-Quality 367 492 298 339
Average-Response-Time 356 321 370 283
Average-Runtime 543 323 74 116
Average-Turnaround-Time 560 314 74 126
Table 1: The number of times that each heuristic performed
the best or statistically equivalent to the best for each of the
performance criteria. Heuristic Key: BET is Balancing
Execution Time, TF is Topmost First, MR is Minimizing Resources and
Rand is a random allocation strategy, in which every TÆMS
node has a uniform probability of being selected for allocation.
number of tasks completed, hence the more the number of tasks
completed, the greater average quality. The results of testing the
first hypothesis were slightly more surprising. It appears that due
to the inherent inefficiency of the contract net protocol in bidding
for each and every task instance, a greater number of agents are
needed to complete an equal number of tasks.
Next, we evaluated the performance of the three heuristics for
allocating tasks. Some preliminary experiments (that are not reported
here due to space constraints) demonstrated the lack of a clear
winner amongst the three heuristics for most of the performance
criteria that we evaluated. We suspected this to be the case because
different heuristics are better for different task structures and
environmental conditions, and since each experiment starts with a different
random task structure, we couldn't find one allocation strategy that
always dominated the other for all the performance criteria.
To determine which heuristic performs the best, given a set of
task structures, environmental conditions and performance criteria,
we performed a series of experiments that were controlled using
the following five variables:
• The depth of the task structure was varied from 3 to 5.
• The branching factor was varied from 3 to 5.
• The probability of any given task node having a MIN CAF
was varied from 0.0 to 1.0 in increments of 0.2. The
probability of any node having a SUM CAF was in turn modified
to ensure that the probabilities add up to 1⁴.
• The arrival rate: from 10 to 40 cycles in increments of 10.
• The deadline slack: from 5 to 15 in increments of 5.
Each experiment was repeated 20 times, with a new task
structure being generated each time - these 20 experiments formed an
experimental set. Hence, all the experiments in an experimental set
had the same values for the exogenous variables that were used to
control the experiment. Note that a static environment was used in
each of these experiments, as we wanted to see the performance of
the arrival rate and deadline slack on each of the three heuristics.
Also the results of any experiment in which the OSD organization
consisted of a single agent were culled from the results. Similarly,
⁴ Since our preliminary analysis led us to believe that the number
of MAX and EXACTLY ONE CAFs in a task structure has a
minimal effect on the performance of the allocation strategies being
evaluated, we set the probabilities of the MAX and EXACTLY ONE
CAFs to 0 in order to reduce the combinatorial explosion of the full
factorial experimental design.
experiments in which the generated task structures were
unsatisfiable (given the deadline constraints), were removed from the final
results. If any experimental set had more than 15 experiments thus
removed, the whole set was ignored for performing the evaluation.
The final evaluation was done on 673 experimental sets.
We tested the potential of these three heuristics on the following
performance criteria:
1. The average number of agents used.
2. The total number of organizational changes.
3. The total messages sent by all the agents.
4. The total resource cost of the organization.
5. The number of tasks completed.
6. The average quality accrued. The average quality is defined
as the total quality accrued during the experimental run
divided by the sum of the number of tasks completed and the
number of tasks failed.
7. The average response time of the organization. The response
time of a task is defined as the difference between the time
at which any agent in the organization starts working on
the task (the start time) and the time at which the task was
generated (the generation time). Hence, the response time
is equivalent to the wait time. For tasks that are never
attempted/started, the response time is set at final runtime
minus the generation time.
8. The average runtime of the tasks attempted by the
organization. This time is defined as the difference between the time
at which the task completed or failed and the start time. For
tasks that were never started, this time is set to zero.
9. The turnaround time is defined as the sum of the response
time and runtime of a task.
Except for the number of tasks completed and the average
quality accrued, lower values for the various performance criteria
indicate better performance. Again we ran the Wilcoxon Matched-Pair
Signed-Rank tests on the experiments in each of the experimental
sets. The null hypothesis in each case was that there is no
difference between the pair of heuristics for the performance criteria
under consideration. We were interested in the cases in which we
could reject the null hypothesis with 95% confidence (p < 0.05).
We noted the number of times that a heuristic performed the best
or was in a group that performed statistically better than the rest.
These counts are given in Tables 1 and 2.
The number of experimental sets in which each heuristic
performed the best or statistically equivalent to the best is shown in
Table 1. The breakup of these numbers into (1) the number of times
that each heuristic performed better than all the other heuristics and
(2) the number of times each heuristic was statistically equivalent
to another group of heuristics, all of which performed the best, is
shown in Table 2. Both of these tables allow us to glean important
information about the performance of the three heuristics.
Particularly interesting were the following results:
• Whereas Balancing Execution Time (BET) used the
lowest number of agents in the largest number of experimental sets
(572), in most of these cases (337 experimental sets) it was
statistically equivalent to Topmost First (TF). When these
two heuristics didn't perform equally, there was an almost
even split between the number of experimental sets in which
one outperformed the other.
We believe this was the case because BET always bifurcates
the agents into two agents that have a more or less equal task
load. This often results in organizations that have an even
Figure 2: Graph demonstrating the robustness of the citizen
approach. The baseline shows the number of tasks completed
in the absence of any failure.
number of agents - none of which are small⁵ enough to
combine into a larger agent. With TF, on the other hand, a
large agent can successively spawn off smaller agents until it
and the spawned agents are small enough to complete their
tasks before the deadlines - this often results in
organizations with an odd number of agents that is less than those
used by BET.
• As expected, BET achieved the lowest number of
organizational changes in the largest number of experimental sets. In
fact, it was over ten times as good as its second best
competitor (TF). This shows that if the agents are conscientious
in their initial task allocation, there is a lesser need for
organizational change later on, especially for static environments.
• A particularly interesting, yet easily explainable, result was
that of the average response time. We found that the
Minimizing Resources (MR) heuristic performed the best when it
came to minimizing the average response time! This can be
explained by the fact the MR heuristic is extremely greedy
and prefers to spawn off small agents that have a tiny
resource footprint (so as to minimize the total increase in the
resource cost to the organization at the time of spawning).
Whereas most of these small agents might compose with
other agents over time, the presence of a single small agent
is sufficient to reduce the response time.
In fact, the MR heuristic is not the most effective heuristic when it comes to minimizing the resource-cost of the organization: it only outperforms a random task/resource
allocation. We believe this is in part due to the greedy
nature of this heuristic and in part because of the fact that all
spawning and composition operations only use local
information. We believe that using some non-local information
about the resource allocation might help in making better
decisions, something that we plan to look at in the future.
Finally, we evaluated the performance of the citizen approach to
robustness as applied to our OSD mechanism (Figure 2). As
expected, as the probability of failure increases, the number of agents
failing during a run also increases. This results in a slight decrease
in the number of tasks completed, which can be explained by the
fact that whenever an agent fails, its looses whatever work it was
doing at the time. The newly created agent that fills in for the failed
5 For this discussion, small agents are agents that have a low
expected duration for their local roles (as calculated by Algorithm 4).
Criteria/Heuristic       BET   TF   MR  Rand  BET+TF  BET+Rand  MR+Rand  TF+MR  BET+TF+MR  All
Number of Agents          94   88    3     7     337         2        0      0         12   85
No-Org-Changes           480    0    0    29      16       113        0      0          0    5
Total-Messages-Sent      170   85    0     2     399         1        0      0          7    5
Resource-Cost             26  100  170    42     167         0        7      6        128   15
Tasks-Completed           77  197    4    28     184         1        3      9         36   99
Average-Quality           38  147   26   104      76         0       11     11         34  208
Average-Response-Time    104   74  162    43      31        20       16      8          7  169
Average-Runtime          322  110    0    12     121        13        1      1          1   69
Average-Turnaround-Time  318   94    1    11     125        26        1      0          7   64
Table 2: Table showing the number of times that each individual heuristic performed the best and the number of times that a certain
group of statistically equivalent heuristics performed the best. Only the more interesting heuristic groupings are shown. All shows
the number of experimental sets in which there was no statistical difference between the three heuristics and a random allocation
strategy.
As part of our future research, we first wish to evaluate the
survivalist approach to robustness. The survivalist approach might
actually be better than the citizen approach for higher
probabilities of agent failure, as the replicated agents may be processing the
task structures in parallel and can take over the moment the
original agents fail - thus saving time around tight deadlines. Also,
we strongly believe that the optimal organizational structure may
vary, depending on the probability of failure and the desired level
of robustness. For example, one way of achieving a higher level
of robustness in the survivalist approach, given a large number of
agent failures, would be to relax the task deadlines. However, such
a relaxation would result in the system using fewer agents in order
to conserve resources, which in turn would have a detrimental
effect on the robustness. Therefore, towards this end, we have begun
exploring the robustness properties of task structures and the ways
in which the organizational design can be modified to take such
properties into account.
7. CONCLUSION
In this paper, we have presented a run-time approach to
organization in which the agents use Organizational Self-Design to come up
with a suitable organizational structure. We have also compared the
performance of the organizations generated by the agents following
our approach with the bespoke organization formation that takes
place in the Contract Net protocol, and have demonstrated that our
approach outperforms the Contract Net approach, as evidenced by the
larger number of tasks completed, the higher quality achieved, and the lower
response time. Finally, we tested the performance of three different
resource allocation heuristics on various performance metrics and
also evaluated the robustness of our approach.
8. REFERENCES
[1] K. S. Barber and C. E. Martin. Dynamic reorganization of
decision-making groups. In AGENTS "01, pages 513-520,
New York, NY, USA, 2001.
[2] K. M. Carley and L. Gasser. Computational organization
theory. In G. Wiess, editor, Multiagent Systems: A Modern
Approach to Distributed Artificial Intelligence, pages
299-330, MIT Press, 1999.
[3] W. Chen and K. S. Decker. The analysis of coordination in
an information system application - emergency medical
services. In Lecture Notes in Computer Science (LNCS),
number 3508, pages 36-51. Springer-Verlag, May 2005.
[4] D. Corkill and V. Lesser. The use of meta-level control for
coordination in a distributed problem solving network.
Proceedings of the Eighth International Joint Conference on
Artificial Intelligence, pages 748-756, August 1983.
[5] K. S. Decker. Environment centered analysis and design of
coordination mechanisms. Ph.D. Thesis, Dept. of Comp.
Science, University of Massachusetts, Amherst, May 1995.
[6] K. S. Decker and J. Li. Coordinating mutually exclusive
resources using GPGP. Autonomous Agents and Multi-Agent
Systems, 3(2):133-157, 2000.
[7] C. Dellarocas and M. Klein. An experimental evaluation of
domain-independent fault handling services in open
multi-agent systems. Proceedings of the International
Conference on Multi-Agent Systems (ICMAS-2000), July
2000.
[8] V. Dignum, F. Dignum, and L. Sonenberg. Towards Dynamic
Reorganization of Agent Societies. In Proceedings of CEAS:
Workshop on Coordination in Emergent Agent Societies at
ECAI, pages 22-27, Valencia, Spain, September 2004.
[9] B. Horling, B. Benyo, and V. Lesser. Using self-diagnosis to
adapt organizational structures. In AGENTS "01, pages
529-536, New York, NY, USA, 2001. ACM Press.
[10] T. Ishida, L. Gasser, and M. Yokoo. Organization self-design
of distributed production systems. IEEE Transactions on
Knowledge and Data Engineering, 4(2):123-134, 1992.
[11] V. R. Lesser et al. Evolution of the GPGP/TÆMS
domain-independent coordination framework. Autonomous
Agents and Multi-Agent Systems, 9(1-2):87-143, 2004.
[12] O. Marin, P. Sens, J. Briot, and Z. Guessoum. Towards
adaptive fault tolerance for distributed multi-agent systems.
Proceedings of ERSADS 2001, May 2001.
[13] O. Shehory, K. Sycara, et al. Agent cloning: an approach to
agent mobility and resource allocation. IEEE
Communications Magazine, 36(7):58-67, 1998.
[14] Y. So and E. Durfee. An organizational self-design model for
organizational change. In AAAI-93 Workshop on AI and
Theories of Groups and Organizations, pages 8-15,
Washington, D.C., July 1993.
[15] T. Wagner. Coordination decision support assistants
(coordinators). Technical Report 04-29, BAA, 2004.
[16] T. Wagner and V. Lesser. Design-to-criteria scheduling:
Real-time agent control. Proc. of AAAI 2000 Spring
Symposium on Real-Time Autonomous Systems, 89-96.
| organization;organizational-self design;task and resource allocation;agent spawning;robustness;organizational structure;extended hierarchical task structure;environment modeling;composition;task analysis;coordination;simulation;multiagent system;organizational self-design |
train_I-65 | Graphical Models for Online Solutions to Interactive POMDPs | We develop a new graphical representation for interactive partially observable Markov decision processes (I-POMDPs) that is significantly more transparent and semantically clear than the previous representation. These graphical models called interactive dynamic influence diagrams (I-DIDs) seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. I-DIDs generalize DIDs, which may be viewed as graphical representations of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs may be used to compute the policy of an agent online as the agent acts and observes in a setting that is populated by other interacting agents. Using several examples, we show how I-DIDs may be applied and demonstrate their usefulness. | 1. INTRODUCTION
Interactive partially observable Markov decision processes
(I-POMDPs) [9] provide a framework for sequential decision-making
in partially observable multiagent environments. They generalize
POMDPs [13] to multiagent settings by including the other agents"
computable models in the state space along with the states of the
physical environment. The models encompass all information
influencing the agents" behaviors, including their preferences,
capabilities, and beliefs, and are thus analogous to types in Bayesian
games [11]. I-POMDPs adopt a subjective approach to
understanding strategic behavior, rooted in a decision-theoretic framework that
takes a decision-maker"s perspective in the interaction.
In [15], Polich and Gmytrasiewicz introduced interactive
dynamic influence diagrams (I-DIDs) as the computational
representations of I-POMDPs. I-DIDs generalize DIDs [12], which may
be viewed as computational counterparts of POMDPs, to
multiagent settings in the same way that I-POMDPs generalize POMDPs.
I-DIDs contribute to a growing line of work [19] that includes
multi-agent influence diagrams (MAIDs) [14], and more recently,
networks of influence diagrams (NIDs) [8]. These formalisms seek
to explicitly model the structure that is often present in real-world
problems by decomposing the situation into chance and decision
variables, and the dependencies between the variables. MAIDs
provide an alternative to normal and extensive game forms using
a graphical formalism to represent games of imperfect information
with a decision node for each agent"s actions and chance nodes
capturing the agent"s private information. MAIDs objectively
analyze the game, efficiently computing the Nash equilibrium profile
by exploiting the independence structure. NIDs extend MAIDs to
include agents" uncertainty over the game being played and over
models of the other agents. Each model is a MAID and the network
of MAIDs is collapsed, bottom up, into a single MAID for
computing the equilibrium of the game keeping in mind the different
models of each agent. Graphical formalisms such as MAIDs and NIDs
open up a promising area of research that aims to represent
multiagent interactions more transparently. However, MAIDs provide an
analysis of the game from an external viewpoint and the
applicability of both is limited to static single play games. Matters are more
complex when we consider interactions that are extended over time,
where predictions about others" future actions must be made using
models that change as the agents act and observe. I-DIDs address
this gap by allowing the representation of other agents" models as
the values of a special model node. Both, other agents" models and
the original agent"s beliefs over these models are updated over time
using special-purpose implementations.
In this paper, we improve on the previous preliminary
representation of the I-DID shown in [15] by using the insight that the static
I-ID is a type of NID. Thus, we may utilize NID-specific language
constructs such as multiplexers to represent the model node, and
subsequently the I-ID, more transparently. Furthermore, we clarify
the semantics of the special purpose policy link introduced in the
representation of I-DID by [15], and show that it could be replaced
by traditional dependency links. In the previous representation of
the I-DID, the update of the agent"s belief over the models of others
as the agents act and receive observations was denoted using a
special link called the model update link that connected the model
nodes over time. We explicate the semantics of this link by
showing how it can be implemented using the traditional dependency
links between the chance nodes that constitute the model nodes.
The net result is a representation of I-DID that is significantly more
transparent, semantically clear, and capable of being implemented
using the standard algorithms for solving DIDs. We show how
I-DIDs may be used to model an agent's uncertainty over others'
models, that may themselves be I-DIDs. Solution to the I-DID is
a policy that prescribes what the agent should do over time, given
its beliefs over the physical state and others" models. Analogous to
DIDs, I-DIDs may be used to compute the policy of an agent online
as the agent acts and observes in a setting that is populated by other
interacting agents.
2. BACKGROUND: FINITELY NESTED
I-POMDPS
Interactive POMDPs generalize POMDPs to multiagent settings
by including other agents" models as part of the state space [9].
Since other agents may also reason about others, the interactive
state space is strategically nested; it contains beliefs about other
agents" models and their beliefs about others. For simplicity of
presentation we consider an agent, i, that is interacting with one
other agent, j.
A finitely nested I-POMDP of agent i with a strategy level l is
defined as the tuple:

    I-POMDPi,l = ⟨ISi,l, A, Ti, Ωi, Oi, Ri⟩

where:
• ISi,l denotes a set of interactive states defined as ISi,l = S × Mj,l−1,
where Mj,l−1 = {Θj,l−1 ∪ SMj} for l ≥ 1, and ISi,0 = S, where S is the set
of states of the physical environment. Θj,l−1 is the set of computable
intentional models of agent j: θj,l−1 = ⟨bj,l−1, ˆθj⟩, where the frame
ˆθj = ⟨A, Ωj, Tj, Oj, Rj, OCj⟩. Here, j is Bayes rational and OCj is j's
optimality criterion. SMj is the set of subintentional models of j. Simple
examples of subintentional models include a no-information model [10] and a
fictitious play model [6], both of which are history independent. We give a
recursive bottom-up construction of the interactive state space below:

    ISi,0 = S,                     Θj,0 = { ⟨bj,0, ˆθj⟩ | bj,0 ∈ Δ(ISj,0) }
    ISi,1 = S × {Θj,0 ∪ SMj},      Θj,1 = { ⟨bj,1, ˆθj⟩ | bj,1 ∈ Δ(ISj,1) }
    ...
    ISi,l = S × {Θj,l−1 ∪ SMj},    Θj,l = { ⟨bj,l, ˆθj⟩ | bj,l ∈ Δ(ISj,l) }
Similar formulations of nested spaces have appeared in [1, 3].
• A = Ai × Aj is the set of joint actions of all agents in the environment;
• Ti : S × A × S → [0, 1] describes the effect of the joint actions on the
physical states of the environment;
• Ωi is the set of observations of agent i;
• Oi : S × A × Ωi → [0, 1] gives the likelihood of the observations given the
physical state and joint action;
• Ri : ISi × A → R describes agent i's preferences over its interactive
states. Usually only the physical states will matter.
Agent i's policy is the mapping Ω*i → Δ(Ai), where Ω*i is the set of all
observation histories of agent i. Since belief over the interactive states
forms a sufficient statistic [9], the policy can also be represented as a
mapping from the set of all beliefs of agent i to a distribution over its
actions, Δ(ISi) → Δ(Ai).
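To make the nesting concrete, the following minimal sketch (our own encoding, not taken from the paper) shows one way a level 1 interactive state, i.e. a physical state paired with a level 0 intentional model of j, could be represented; Frame and IntentionalModel are illustrative stand-ins for ˆθj and θj,l−1.

    # Illustrative encoding of finitely nested interactive states; class and field
    # names are our own assumptions, the structure follows the definition above.
    from dataclasses import dataclass
    from typing import Any, Tuple

    @dataclass(frozen=True)
    class Frame:                 # stands in for the frame ^theta_j = (A, Omega_j, T_j, O_j, R_j, OC_j)
        name: str

    @dataclass(frozen=True)
    class IntentionalModel:      # theta_{j,l-1} = (b_{j,l-1}, ^theta_j)
        belief: Tuple[Tuple[Any, float], ...]   # distribution over IS_{j,l-1}
        frame: Frame
        level: int

    # Level 0: interactive states are just the physical states.
    IS0 = ("tiger-left", "tiger-right")

    # A level 0 model of j: a flat belief over physical states plus j's frame.
    m_j0 = IntentionalModel(belief=(("tiger-left", 0.9), ("tiger-right", 0.1)),
                            frame=Frame("tiger-POMDP"), level=0)

    # Level 1: IS_{i,1} = S x {Theta_{j,0} U SM_j}; one element is a (state, model) pair.
    is_i1 = ("tiger-left", m_j0)
    print(is_i1)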
2.1 Belief Update
Analogous to POMDPs, an agent within the I-POMDP
framework updates its belief as it acts and observes. However, there are
two differences that complicate the belief update in multiagent
settings when compared to single agent ones. First, since the state of
the physical environment depends on the actions of both agents, i"s
prediction of how the physical state changes has to be made based
on its prediction of j"s actions. Second, changes in j"s models have
to be included in i"s belief update. Specifically, if j is intentional
then an update of j"s beliefs due to its action and observation has
to be included. In other words, i has to update its belief based on
its prediction of what j would observe and how j would update
its belief. If j"s model is subintentional, then j"s probable
observations are appended to the observation history contained in the
model. Formally, we have:
    Pr(is^t | a_i^{t−1}, b_{i,l}^{t−1}) = β Σ_{is^{t−1}: m_j^{t−1} = θ_j^t} b_{i,l}^{t−1}(is^{t−1})
        × Σ_{a_j^{t−1}} Pr(a_j^{t−1} | θ_{j,l−1}^{t−1}) O_i(s^t, a_i^{t−1}, a_j^{t−1}, o_i^t)
        × T_i(s^{t−1}, a_i^{t−1}, a_j^{t−1}, s^t) Σ_{o_j^t} O_j(s^t, a_i^{t−1}, a_j^{t−1}, o_j^t)
        × τ(SE_{θ_j^t}(b_{j,l−1}^{t−1}, a_j^{t−1}, o_j^t) − b_{j,l−1}^t)                            (1)

where β is the normalizing constant, τ is 1 if its argument is 0 and is 0
otherwise, Pr(a_j^{t−1} | θ_{j,l−1}^{t−1}) is the probability that a_j^{t−1} is
Bayes rational for the agent described by model θ_{j,l−1}^{t−1}, and SE(·)
is an abbreviation for the belief update. For a version of the belief
update when j's model is subintentional, see [9].
If agent j is also modeled as an I-POMDP, then i's belief update
invokes j's belief update (via the term SE_{θ_j^t}(b_{j,l−1}^{t−1}, a_j^{t−1}, o_j^t)),
which in turn could invoke i's belief update and so on. This
recursion in belief nesting bottoms out at the 0th level. At this level, the
belief update of the agent reduces to a POMDP belief update.1 For
illustrations of the belief update, additional details on I-POMDPs,
and how they compare with other multiagent frameworks, see [9].
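As a sketch of how Eq. 1 operates over finite sets, the function below enumerates the update; all of the problem-specific pieces (Ti, Oi, Oj, Pr(aj | model), and the lower-level update SE) are passed in as ordinary Python callables, and nothing here is taken from the authors' implementation.

    # Hedged sketch of the belief update of Eq. 1 over finite state and model sets.
    # Models must be hashable (e.g., tuples); SE(m, a_j, o_j) returns j's updated model.
    def ipomdp_belief_update(b_prev, a_i, o_i, T_i, O_i, O_j, act_prob, SE,
                             states, A_j, Omega_j):
        # b_prev: dict {(s, model_j): prob}; returns the normalized updated belief.
        b_new = {}
        for (s_prev, m_prev), p in b_prev.items():
            for a_j in A_j:
                w = p * act_prob(m_prev, a_j)                 # Pr(a_j | theta_j)
                if w == 0:
                    continue
                for s_new in states:
                    w2 = w * T_i(s_prev, a_i, a_j, s_new) * O_i(s_new, a_i, a_j, o_i)
                    if w2 == 0:
                        continue
                    for o_j in Omega_j:                        # sum over j's observations
                        w3 = w2 * O_j(s_new, a_i, a_j, o_j)
                        m_new = SE(m_prev, a_j, o_j)           # the tau(...) term picks this model
                        key = (s_new, m_new)
                        b_new[key] = b_new.get(key, 0.0) + w3
        z = sum(b_new.values())                                # beta, the normalizing constant
        return {k: v / z for k, v in b_new.items()} if z > 0 else b_new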
2.2 Value Iteration
Each belief state in a finitely nested I-POMDP has an associated
value reflecting the maximum payoff the agent can expect in this
belief state:
    U^n(⟨b_{i,l}, θ_i⟩) = max_{a_i ∈ A_i} { Σ_{is ∈ IS_{i,l}} ER_i(is, a_i) b_{i,l}(is)
        + γ Σ_{o_i ∈ Ω_i} Pr(o_i | a_i, b_{i,l}) U^{n−1}(⟨SE_{θ_i}(b_{i,l}, a_i, o_i), θ_i⟩) }        (2)

where ER_i(is, a_i) = Σ_{a_j} R_i(is, a_i, a_j) Pr(a_j | m_{j,l−1}) (since
is = (s, m_{j,l−1})). Eq. 2 is a basis for value iteration in I-POMDPs.
Agent i's optimal action, a_i^*, for the case of finite horizon with
discounting, is an element of the set of optimal actions for the belief
state, OPT(θ_i), defined as:

    OPT(⟨b_{i,l}, θ_i⟩) = argmax_{a_i ∈ A_i} { Σ_{is ∈ IS_{i,l}} ER_i(is, a_i) b_{i,l}(is)
        + γ Σ_{o_i ∈ Ω_i} Pr(o_i | a_i, b_{i,l}) U^n(⟨SE_{θ_i}(b_{i,l}, a_i, o_i), θ_i⟩) }            (3)
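The backup in Eqs. 2 and 3 can be written compactly once the interactive state space and belief are finite; the sketch below is illustrative only, with ER, obs_prob, belief_update, and U_prev standing in for ERi(·), Pr(oi | ai, bi,l), SE, and the horizon n−1 value function.

    # One-step backup in the style of Eqs. 2 and 3 (a sketch, not the authors' code).
    def backup(belief, A_i, Omega_i, ER, obs_prob, belief_update, U_prev, gamma=0.9):
        # belief: dict {interactive_state: prob}. Returns (U^n(belief), OPT(belief)).
        values = {}
        for a_i in A_i:
            immediate = sum(p * ER(is_, a_i) for is_, p in belief.items())
            future = sum(obs_prob(o_i, a_i, belief) *
                         U_prev(belief_update(belief, a_i, o_i))
                         for o_i in Omega_i)
            values[a_i] = immediate + gamma * future
        best = max(values.values())
        return best, {a for a, v in values.items() if v == best}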
3. INTERACTIVEINFLUENCEDIAGRAMS
A naive extension of influence diagrams (IDs) to settings
populated by multiple agents is possible by treating other agents as
automatons, represented using chance nodes. However, this approach
assumes that the agents" actions are controlled using a probability
distribution that does not change over time. Interactive influence
diagrams (I-IDs) adopt a more sophisticated approach by
generalizing IDs to make them applicable to settings shared with other
agents who may act and observe, and update their beliefs.
3.1 Syntax
In addition to the usual chance, decision, and utility nodes,
I-IDs include a new type of node called the model node. We show a
general level l I-ID in Fig. 1(a), where the model node (Mj,l−1) is
denoted using a hexagon. We note that the probability distribution
over the chance node, S, and the model node together represents
agent i"s belief over its interactive states. In addition to the model
1 The 0th level model is a POMDP: the other agent's actions are treated
as exogenous events and folded into the T, O, and R functions.
Figure 1: (a) A generic level l I-ID for agent i situated with one other agent j. The hexagon is the model node (Mj,l−1) whose structure we show in
(b). Members of the model node are I-IDs themselves (m^1_{j,l−1}, m^2_{j,l−1}; diagrams not shown here for simplicity) whose decision nodes are mapped to
the corresponding chance nodes (A^1_j, A^2_j). Depending on the value of the node, Mod[Mj], the distribution of each of the chance nodes is assigned to
the node Aj. (c) The transformed I-ID with the model node replaced by the chance nodes and the relationships between them.
node, I-IDs differ from IDs by having a dashed link (called the
policy link in [15]) between the model node and a chance node,
Aj, that represents the distribution over the other agent"s actions
given its model. In the absence of other agents, the model node and
the chance node, Aj, vanish and I-IDs collapse into traditional IDs.
The model node contains the alternative computational models
ascribed by i to the other agent from the set, Θj,l−1 ∪ SMj, where
Θj,l−1 and SMj were defined previously in Section 2. Thus, a
model in the model node may itself be an I-ID or ID, and the
recursion terminates when a model is an ID or subintentional. Because
the model node contains the alternative models of the other agent
as its values, its representation is not trivial. In particular, some of
the models within the node are I-IDs that when solved generate the
agent's optimal policy in their decision nodes. Each decision node
is mapped to the corresponding chance node, say A^1_j, in the
following way: if OPT is the set of optimal actions obtained by solving
the I-ID (or ID), then Pr(aj ∈ A^1_j) = 1/|OPT| if aj ∈ OPT, and 0
otherwise.
Borrowing insights from previous work [8], we observe that the
model node and the dashed policy link that connects it to the
chance node, Aj, could be represented as shown in Fig. 1(b). The
decision node of each level l − 1 I-ID is transformed into a chance
node, as we mentioned previously, so that the actions with the
largest value in the decision node are assigned uniform
probabilities in the chance node while the rest are assigned zero probability.
The different chance nodes (A^1_j, A^2_j), one for each model, and
additionally, the chance node labeled Mod[Mj], form the parents of the
chance node, Aj. Thus, there are as many action nodes (A^1_j, A^2_j)
in Mj,l−1 as the number of models in the support of agent i's
beliefs. The conditional probability table of the chance node, Aj,
is a multiplexer that assumes the distribution of each of the action
nodes (A^1_j, A^2_j) depending on the value of Mod[Mj]. The values of
Mod[Mj] denote the different models of j. In other words, when
Mod[Mj] has the value m^1_{j,l−1}, the chance node Aj assumes the
distribution of the node A^1_j, and Aj assumes the distribution of A^2_j
when Mod[Mj] has the value m^2_{j,l−1}. The distribution over the
node, Mod[Mj], is the agent i"s belief over the models of j given a
physical state. For more agents, we will have as many model nodes
as there are agents. Notice that Fig. 1(b) clarifies the semantics of
the policy link, and shows how it can be represented using the
traditional dependency links.
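The two constructions just described, i.e. mapping a solved model's optimal action set to its chance node and the multiplexer CPT of Aj, are simple enough to sketch directly. The encoding below is our own, is not tied to any particular influence-diagram toolkit, and uses the tiger actions only as an example.

    # Sketch of (i) the decision-node-to-chance-node mapping and (ii) the multiplexer
    # CPT of Aj; the dictionary encodings are our own illustrative choice.
    def decision_to_chance(opt_actions, all_actions):
        # Pr(aj in A^k_j) = 1/|OPT| if aj in OPT, and 0 otherwise.
        p = 1.0 / len(opt_actions)
        return {a: (p if a in opt_actions else 0.0) for a in all_actions}

    def multiplexer_cpt(per_model_dists):
        # per_model_dists: {model_id: {aj: prob}}. Aj copies the distribution of the
        # action node selected by the value of Mod[Mj].
        return {(model_id, a): prob
                for model_id, dist in per_model_dists.items()
                for a, prob in dist.items()}

    A1_j = decision_to_chance({"OL"}, ["OL", "OR", "L"])        # model m1 prescribes OL
    A2_j = decision_to_chance({"OR", "L"}, ["OL", "OR", "L"])   # model m2 is indifferent
    cpt = multiplexer_cpt({"m1": A1_j, "m2": A2_j})
    print(cpt[("m1", "OL")], cpt[("m2", "OR")])                 # 1.0 0.5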
In Fig. 1(c), we show the transformed I-ID when the model node
is replaced by the chance nodes and relationships between them. In
contrast to the representation in [15], there are no special-purpose
policy links, rather the I-ID is composed of only those types of
nodes that are found in traditional IDs and dependency
relationships between the nodes. This allows I-IDs to be represented and
implemented using conventional application tools that target IDs.
Note that we may view the level l I-ID as a NID. Specifically, each
of the level l − 1 models within the model node are blocks in the
NID (see Fig. 2). If the level l = 1, each block is a traditional ID,
otherwise if l > 1, each block within the NID may itself be a NID.
Note that within the I-IDs (or IDs) at each level, there is only a
single decision node. Thus, our NID does not contain any MAIDs.
Figure 2: A level l I-ID represented as a NID. The probabilities
assigned to the blocks of the NID are i"s beliefs over j"s models
conditioned on a physical state.
3.2 Solution
The solution of an I-ID proceeds in a bottom-up manner, and is
implemented recursively. We start by solving the level 0 models,
which, if intentional, are traditional IDs. Their solutions provide
probability distributions over the other agents" actions, which are
entered in the corresponding chance nodes found in the model node
of the level 1 I-ID. The mapping from the level 0 models" decision
nodes to the chance nodes is carried out so that actions with the
largest value in the decision node are assigned uniform
probabilities in the chance node while the rest are assigned zero probability.
Given the distributions over the actions within the different chance
nodes (one for each model of the other agent), the level 1 I-ID is
transformed as shown in Fig. 1(c). During the transformation, the
conditional probability table (CPT) of the node, Aj, is populated
such that the node assumes the distribution of each of the chance
nodes depending on the value of the node, Mod[Mj]. As we
mentioned previously, the values of the node Mod[Mj] denote the
different models of the other agent, and its distribution is the agent i"s
belief over the models of j conditioned on the physical state. The
transformed level 1 I-ID is a traditional ID that may be solved
using the standard expected utility maximization method [18].
Figure 3: (a) A generic two time-slice level l I-DID for agent i in a setting with one other agent j. Notice the dotted model update link that denotes
the update of the models of j and the distribution over the models over time. (b) The semantics of the model update link.
This
procedure is carried out up to the level l I-ID whose solution gives
the non-empty set of optimal actions that the agent should perform
given its belief. Notice that analogous to IDs, I-IDs are suitable for
online decision-making when the agent"s current belief is known.
4. INTERACTIVE DYNAMIC INFLUENCE
DIAGRAMS
Interactive dynamic influence diagrams (I-DIDs) extend I-IDs
(and NIDs) to allow sequential decision-making over several time
steps. Just as DIDs are structured graphical representations of POMDPs,
I-DIDs are the graphical online analogs for finitely nested I-POMDPs.
I-DIDs may be used to optimize over a finite look-ahead given
initial beliefs while interacting with other, possibly similar, agents.
4.1 Syntax
We depict a general two time-slice I-DID in Fig. 3(a). In
addition to the model nodes and the dashed policy link, what
differentiates an I-DID from a DID is the model update link shown as a
dotted arrow in Fig. 3(a). We explained the semantics of the model
node and the policy link in the previous section; we describe the
model updates next.
The update of the model node over time involves two steps: First,
given the models at time t, we identify the updated set of models
that reside in the model node at time t + 1. Recall from Section 2
that an agent"s intentional model includes its belief. Because the
agents act and receive observations, their models are updated to
reflect their changed beliefs. Since the set of optimal actions for
a model could include all the actions, and the agent may receive
any one of |Ωj| possible observations, the updated set at time step
t + 1 will have at most |M^t_{j,l−1}| |Aj| |Ωj| models. Here, |M^t_{j,l−1}|
is the number of models at time step t, and |Aj| and |Ωj| are the largest
spaces of actions and observations respectively, among all the
models. Second, we compute the new distribution over the updated
models given the original distribution and the probability of the
agent performing the action and receiving the observation that led
to the updated model. These steps are a part of agent i"s belief
update formalized using Eq. 1.
In Fig. 3(b), we show how the dotted model update link is
implemented in the I-DID. If each of the two level l − 1 models
ascribed to j at time step t results in one action, and j could make
one of two possible observations, then the model node at time step
t + 1 contains four updated models (m^{t+1,1}_{j,l−1}, m^{t+1,2}_{j,l−1}, m^{t+1,3}_{j,l−1}, and
m^{t+1,4}_{j,l−1}). These models differ in their initial beliefs, each of which
is the result of j updating its beliefs due to its action and a possible
observation. The decision nodes in each of the I-DIDs or DIDs that
represent the lower level models are mapped to the corresponding
Figure 4: Transformed I-DID with the model nodes and model update
link replaced with the chance nodes and the relationships (in bold).
chance nodes, as mentioned previously. Next, we describe how the
distribution over the updated set of models (the distribution over the
chance node Mod[M^{t+1}_j] in M^{t+1}_{j,l−1}) is computed. The probability
that j's updated model is, say m^{t+1,1}_{j,l−1}, depends on the probability
of j performing the action and receiving the observation that led to
this model, and the prior distribution over the models at time step
t. Because the chance node A^t_j assumes the distribution of each
of the action nodes based on the value of Mod[M^t_j], the
probability of the action is given by this chance node. In order to obtain
the probability of j's possible observation, we introduce the chance
node Oj, which depending on the value of Mod[M^t_j] assumes the
distribution of the observation node in the lower level model
denoted by Mod[M^t_j]. Because the probability of j's observations
depends on the physical state and the joint actions of both agents,
the node Oj is linked with S^{t+1}, A^t_j, and A^t_i.2 Analogous to A^t_j,
the conditional probability table of Oj is also a multiplexer
modulated by Mod[M^t_j]. Finally, the distribution over the prior models
at time t is obtained from the chance node, Mod[M^t_j] in M^t_{j,l−1}.
Consequently, the chance nodes, Mod[M^t_j], A^t_j, and Oj, form the
parents of Mod[M^{t+1}_j] in M^{t+1}_{j,l−1}. Notice that the model update
link may be replaced by the dependency links between the chance
nodes that constitute the model nodes in the two time slices. In
Fig. 4 we show the two time-slice I-DID with the model nodes
replaced by the chance nodes and the relationships between them.
Chance nodes and dependency links that are not in bold are standard,
usually found in DIDs.
Expansion of the I-DID over more time steps requires the
repetition of the two steps of updating the set of models that form the
2 Note that Oj represents j's observation at time t + 1.
values of the model node and adding the relationships between the
chance nodes, as many times as there are model update links. We
note that the possible set of models of the other agent j grows
exponentially with the number of time steps. For example, after T steps,
there may be at most |M^{t=1}_{j,l−1}| (|Aj| |Ωj|)^{T−1} candidate models
residing in the model node.
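The first step of the model update, i.e. enumerating the candidate models at t + 1, can be sketched as below; opt and SE are assumed callables returning OPT(m) and the lower-level belief update, respectively.

    # Sketch of enumerating the updated models, which is where the
    # |M^t_{j,l-1}| |Aj| |Omega_j| bound comes from; the callables are assumptions.
    def expand_models(models_t, opt, Omega_j, SE):
        updated = []
        for m in models_t:
            for a_j in opt(m):                    # each optimal action of model m
                for o_j in Omega_j:               # each possible observation of j
                    updated.append(SE(m, a_j, o_j))
        return updated

    # Two models, one optimal action each, two observations -> the four models of Fig. 3(b).
    demo = expand_models(["m1", "m2"], lambda m: ["a"], ["o1", "o2"],
                         lambda m, a, o: f"{m}|{a},{o}")
    print(demo)    # ['m1|a,o1', 'm1|a,o2', 'm2|a,o1', 'm2|a,o2']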
4.2 Solution
Analogous to I-IDs, the solution to a level l I-DID for agent i
expanded over T time steps may be carried out recursively. For the
purpose of illustration, let l=1 and T=2. The solution method uses
the standard look-ahead technique, projecting the agent"s action
and observation sequences forward from the current belief state [17],
and finding the possible beliefs that i could have in the next time
step. Because agent i has a belief over j"s models as well, the
lookahead includes finding out the possible models that j could have in
the future. Consequently, each of j"s subintentional or level 0
models (represented using a standard DID) in the first time step must be
solved to obtain its optimal set of actions. These actions are
combined with the set of possible observations that j could make in that
model, resulting in an updated set of candidate models (that include
the updated beliefs) that could describe the behavior of j. Beliefs
over this updated set of candidate models are calculated using the
standard inference methods using the dependency relationships
between the model nodes as shown in Fig. 3(b). We note the recursive
nature of this solution: in solving agent i"s level 1 I-DID, j"s level 0
DIDs must be solved. If the nesting of models is deeper, all models
at all levels starting from 0 are solved in a bottom-up manner.
We briefly outline the recursive algorithm for solving agent i"s
Algorithm for solving I-DID
Input: level l ≥ 1 I-ID or level 0 ID, T

Expansion Phase
1.  For t from 1 to T − 1 do
2.    If l ≥ 1 then
        Populate M^{t+1}_{j,l−1}
3.      For each m^t_j in Range(M^t_{j,l−1}) do
4.        Recursively call algorithm with the l − 1 I-ID (or ID)
          that represents m^t_j and the horizon, T − t + 1
5.        Map the decision node of the solved I-ID (or ID),
          OPT(m^t_j), to a chance node Aj
6.        For each aj in OPT(m^t_j) do
7.          For each oj in Oj (part of m^t_j) do
8.            Update j's belief, b^{t+1}_j ← SE(b^t_j, aj, oj)
9.            m^{t+1}_j ← New I-ID (or ID) with b^{t+1}_j as the
              initial belief
10.           Range(M^{t+1}_{j,l−1}) ∪← {m^{t+1}_j}
11.     Add the model node, M^{t+1}_{j,l−1}, and the dependency links
        between M^t_{j,l−1} and M^{t+1}_{j,l−1} (shown in Fig. 3(b))
12.   Add the chance, decision, and utility nodes for the t + 1 time
      slice and the dependency links between them
13.   Establish the CPTs for each chance node and utility node

Look-Ahead Phase
14. Apply the standard look-ahead and backup method to solve
    the expanded I-DID
Figure 5: Algorithm for solving a level l ≥ 0 I-DID.
level l I-DID expanded over T time steps with one other agent j in
Fig. 5. We adopt a two-phase approach: Given an I-ID of level l
(described previously in Section 3) with all lower level models also
represented as I-IDs or IDs (if level 0), the first step is to expand
the level l I-ID over T time steps adding the dependency links and
the conditional probability tables for each node. We particularly
focus on establishing and populating the model nodes (lines 3-11).
Note that Range(·) returns the values (lower level models) of the
random variable given as input (model node). In the second phase,
we use a standard look-ahead technique projecting the action and
observation sequences over T time steps in the future, and backing
up the utility values of the reachable beliefs. Similar to I-IDs, the
I-DIDs reduce to DIDs in the absence of other agents.
As we mentioned previously, the 0-th level models are the
traditional DIDs. Their solutions provide probability distributions over
actions of the agent modeled at that level to I-DIDs at level 1. Given
probability distributions over the other agent's actions, the level 1
I-DIDs can themselves be solved as DIDs, and provide probability
distributions to yet higher level models. Assume that the number
of models considered at each level is bounded by a number, M.
Solving an I-DID of level l is then equivalent to solving O(M^l) DIDs.
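The recursion and the O(M^l) count can be illustrated with a short skeleton; solve_did is a placeholder for a standard DID solver, and the whole sketch is ours, not the authors' implementation.

    # Skeleton of the bottom-up recursion: each of the (at most M) models per level
    # triggers a lower-level solve, so a level l I-DID costs on the order of M^l DID solves.
    def solve_recursively(level, horizon, M, solve_did, solved=None):
        if solved is None:
            solved = []
        if level > 0:
            for _ in range(M):                     # each candidate model of the other agent
                solve_recursively(level - 1, horizon, M, solve_did, solved)
        solved.append(solve_did(level, horizon))   # the (transformed) DID at this level
        return solved

    n = len(solve_recursively(level=2, horizon=3, M=2, solve_did=lambda l, T: (l, T)))
    print(n)   # 7 = 1 + M + M^2 solves, i.e. O(M^l)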
5. EXAMPLE APPLICATIONS
To illustrate the usefulness of I-DIDs, we apply them to three
problem domains. We describe, in particular, the formulation of
the I-DID and the optimal prescriptions obtained on solving it.
5.1 Followership-Leadership in the Multiagent
Tiger Problem
We begin our illustrations of using I-IDs and I-DIDs with a slightly
modified version of the multiagent tiger problem discussed in [9].
The problem has two agents, each of which can open the right door
(OR), the left door (OL) or listen (L). In addition to hearing growls
(from the left (GL) or from the right (GR)) when they listen, the
agents also hear creaks (from the left (CL), from the right (CR), or
no creaks (S)), which noisily indicate the other agent"s opening one
of the doors. When any door is opened, the tiger persists in its
original location with a probability of 95%. Agent i hears growls with
a reliability of 65% and creaks with a reliability of 95%. Agent j,
on the other hand, hears growls with a reliability of 95%. Thus,
the setting is such that agent i hears agent j opening doors more
reliably than the tiger"s growls. This suggests that i could use j"s
actions as an indication of the location of the tiger, as we discuss
below. Each agent"s preferences are as in the single agent game
discussed in [13]. The transition, observation, and reward
functions are shown in [16].
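A small sketch of agent i's observation function can be built from the reliabilities quoted above (growls 65%, creaks 95%). The factorization into independent growl and creak terms and the even split of the residual creak probability are our simplifying assumptions; the full tables appear in [16].

    # Illustrative observation function for agent i in the multiagent tiger problem.
    def growl_prob(tiger_loc, growl):            # growl in {"GL", "GR"}
        correct = (tiger_loc == "left") == (growl == "GL")
        return 0.65 if correct else 0.35         # i hears growls with 65% reliability

    def creak_prob(j_action, creak):             # creak in {"CL", "CR", "S"}
        truth = {"OL": "CL", "OR": "CR", "L": "S"}[j_action]
        return 0.95 if creak == truth else 0.025 # assumed: remaining 5% split evenly

    def O_i(tiger_loc, j_action, growl, creak):  # assumed independence of growl and creak
        return growl_prob(tiger_loc, growl) * creak_prob(j_action, creak)

    print(O_i("left", "L", "GL", "S"))           # i hears growl-left and silence: 0.6175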
A good indicator of the usefulness of normative methods for
decision-making like I-DIDs is the emergence of realistic social
behaviors in their prescriptions. In settings of the persistent
multiagent tiger problem that reflect real world situations, we demonstrate
followership between the agents and, as shown in [15], deception
among agents who believe that they are in a follower-leader type
of relationship. In particular, we analyze the situational and
epistemological conditions sufficient for their emergence. The
followership behavior, for example, results from the agent knowing its own
weaknesses, assessing the strengths, preferences, and possible
behaviors of the other, and realizing that its best for it to follow the
other"s actions in order to maximize its payoffs.
Let us consider a particular setting of the tiger problem in which
agent i believes that j"s preferences are aligned with its own - both
of them just want to get the gold - and j"s hearing is more reliable
in comparison to itself. As an example, suppose that j, on listening
can discern the tiger"s location 95% of the times compared to i"s
65% accuracy. Additionally, agent i does not have any initial
information about the tiger"s location. In other words, i"s single-level
nested belief, bi,1, assigns 0.5 to each of the two locations of the
tiger. In addition, i considers two models of j, which differ in j"s
flat level 0 initial beliefs. This is represented in the level 1 I-ID
shown in Fig. 6(a). According to one model, j assigns a
probability of 0.9 that the tiger is behind the left door, while the other
Figure 6: (a) Level 1 I-ID of agent i, (b) two level 0 IDs of agent j
whose decision nodes are mapped to the chance nodes, A^1_j, A^2_j, in (a).
model assigns 0.1 to that location (see Fig. 6(b)). Agent i is
undecided on these two models of j. If we vary i"s hearing ability,
and solve the corresponding level 1 I-ID expanded over three time
steps, we obtain the normative behavioral policies shown in Fig 7
that exhibit followership behavior. If i"s probability of correctly
hearing the growls is 0.65, then as shown in the policy in Fig. 7(a),
i begins to conditionally follow j"s actions: i opens the same door
that j opened previously iff i"s own assessment of the tiger"s
location confirms j"s pick. If i loses the ability to correctly interpret
the growls completely, it blindly follows j and opens the same door
that j opened previously (Fig. 7(b)).
Figure 7: Emergence of (a) conditional followership, and (b) blind
followership in the tiger problem. Behaviors of interest are in bold. * is
a wildcard, and denotes any one of the observations.
We observed that a single level of belief nesting - beliefs about
the other"s models - was sufficient for followership to emerge in the
tiger problem. However, the epistemological requirements for the
emergence of leadership are more complex. For an agent, say j, to
emerge as a leader, followership must first emerge in the other agent
i. As we mentioned previously, if i is certain that its preferences
are identical to those of j, and believes that j has a better sense
of hearing, i will follow j"s actions over time. Agent j emerges
as a leader if it believes that i will follow it, which implies that
j"s belief must be nested two levels deep to enable it to recognize
its leadership role. Realizing that i will follow presents j with an
opportunity to influence i"s actions in the benefit of the collective
good or its self-interest alone. For example, in the tiger problem,
let us consider a setting in which if both i and j open the correct
door, then each gets a payoff of 20 that is double the original. If
j alone selects the correct door, it gets the payoff of 10. On the
other hand, if both agents pick the wrong door, their penalties are
cut in half. In this setting, it is in both j"s best interest as well as the
collective betterment for j to use its expertise in selecting the
correct door, and thus be a good leader. However, consider a slightly
different problem in which j gains from i"s loss and is penalized
if i gains. Specifically, let i"s payoff be subtracted from j"s,
indicating that j is antagonistic toward i - if j picks the correct door
and i the wrong one, then i"s loss of 100 becomes j"s gain. Agent
j believes that i incorrectly thinks that j"s preferences are those
that promote the collective good and that it starts off by believing
with 99% confidence where the tiger is. Because i believes that its
preferences are similar to those of j, and that j starts by believing
almost surely that one of the two is the correct location (two level
0 models of j), i will start by following j"s actions. We show i"s
normative policy on solving its singly-nested I-DID over three time
steps in Fig. 8(a). The policy demonstrates that i will blindly
follow j"s actions. Since the tiger persists in its original location with
a probability of 0.95, i will select the same door again. If j begins
the game with a 99% probability that the tiger is on the right,
solving j"s I-DID nested two levels deep, results in the policy shown in
Fig. 8(b). Even though j is almost certain that OL is the correct
action, it will start by selecting OR, followed by OL. Agent j"s
intention is to deceive i who, it believes, will follow j"s actions, so
as to gain $110 in the second time step, which is more than what j
would gain if it were to be honest.
Figure 8: Emergence of deception between agents in the tiger
problem. Behaviors of interest are in bold. * denotes as before. (a) Agent
i"s policy demonstrating that it will blindly follow j"s actions. (b) Even
though j is almost certain that the tiger is on the right, it will start by
selecting OR, followed by OL, in order to deceive i.
5.2 Altruism and Reciprocity in the Public
Good Problem
The public good (PG) problem [7], consists of a group of M
agents, each of whom must either contribute some resource to a
public pot or keep it for themselves. Since resources contributed to
the public pot are shared among all the agents, they are less
valuable to the agent when in the public pot. However, if all agents
choose to contribute their resources, then the payoff to each agent
is more than if no one contributes. Since an agent gets its share of
the public pot irrespective of whether it has contributed or not, the
dominating action is for each agent to not contribute, and instead
free ride on others" contributions. However, behaviors of human
players in empirical simulations of the PG problem differ from the
normative predictions. The experiments reveal that many players
initially contribute a large amount to the public pot, and continue
to contribute when the PG problem is played repeatedly, though
in decreasing amounts [4]. Many of these experiments [5] report
that a small core group of players persistently contributes to the
public pot even when all others are defecting. These experiments
also reveal that players who persistently contribute have altruistic
or reciprocal preferences matching expected cooperation of others.
For simplicity, we assume that the game is played between M =
2 agents, i and j. Let each agent be initially endowed with XT
amount of resources. While the classical PG game formulation
permits each agent to contribute any quantity of resources (≤ XT ) to
the public pot, we simplify the action space by allowing two
possible actions. Each agent may choose to either contribute (C) a fixed
amount of the resources, or not contribute. The latter action is
denoted as defect (D). We assume that the actions are not observable
to others. The value of resources in the public pot is discounted
by ci for each agent i, where ci is the marginal private return. We
assume that ci < 1 so that the agent does not benefit enough that
it contributes to the public pot for private gain. Simultaneously,
ciM > 1, making collective contribution Pareto optimal.
 i/j |              C              |              D
  C  | 2ciXT, 2cjXT                | ciXT − cp, XT + cjXT − P
  D  | XT + ciXT − P, cjXT − cp    | XT, XT
Table 1: The one-shot PG game with punishment.
In order to encourage contributions, the contributing agents
punish free riders but incur a small cost for administering the
punishment. Let P be the punishment meted out to the defecting agent
and cp the non-zero cost of punishing for the contributing agent.
For simplicity, we assume that the cost of punishing is same for
both the agents. The one-shot PG game with punishment is shown
in Table. 1. Let ci = cj, cp > 0, and if P > XT − ciXT , then
defection is no longer a dominating action. If P < XT − ciXT , then
defection is the dominating action for both. If P = XT − ciXT ,
then the game is not dominance-solvable.
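The payoff matrix of Table 1 and the dominance condition above are easy to check numerically; the parameter values below are arbitrary examples, not taken from the paper.

    # Sketch of the one-shot PG game with punishment (Table 1) and the dominance check.
    def pg_payoffs(X_T, c_i, c_j, P, c_p):
        return {
            ("C", "C"): (2 * c_i * X_T, 2 * c_j * X_T),
            ("C", "D"): (c_i * X_T - c_p, X_T + c_j * X_T - P),
            ("D", "C"): (X_T + c_i * X_T - P, c_j * X_T - c_p),
            ("D", "D"): (X_T, X_T),
        }

    X_T, c, P, c_p = 10.0, 0.6, 5.0, 0.5          # arbitrary example values
    table = pg_payoffs(X_T, c, c, P, c_p)
    # Defection stops being dominant exactly when P > X_T - c*X_T:
    print(P > X_T - c * X_T, table[("C", "C")], table[("D", "D")])   # True (12.0, 12.0) (10.0, 10.0)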
Figure 9: (a) Level 1 I-ID of agent i, (b) level 0 IDs of agent j with
decision nodes mapped to the chance nodes, A^1_j and A^2_j, in (a).
We formulate a sequential version of the PG problem with
punishment from the perspective of agent i. Though in the repeated PG
game, the quantity in the public pot is revealed to all the agents after
each round of actions, we assume in our formulation that it is
hidden from the agents. Each agent may contribute a fixed amount, xc,
or defect. An agent on performing an action receives an observation
of plenty (PY) or meager (MR) symbolizing the state of the
public pot. Notice that the observations are also indirectly indicative of
agent j"s actions because the state of the public pot is influenced by
them. The amount of resources in agent i"s private pot, is perfectly
observable to i. The payoffs are analogous to Table. 1.
Borrowing from the empirical investigations of the PG problem [5], we
construct level 0 IDs for j that model altruistic and non-altruistic
types (Fig. 9(b)). Specifically, our altruistic agent has a high
marginal private return (cj is close to 1) and does not punish others
who defect. Let xc = 1 and the level 0 agent be punished half the
times it defects. With one action remaining, both types of agents
choose to contribute to avoid being punished. With two actions
to go, the altruistic type chooses to contribute, while the other
defects. This is because cj for the altruistic type is close to 1, thus the
expected punishment, 0.5P > (1 − cj), which the altruistic type
avoids. Because cj for the non-altruistic type is less, it prefers not
to contribute. With three steps to go, the altruistic agent contributes
to avoid punishment (0.5P > 2(1 − cj)), and the non-altruistic
type defects. For greater than three steps, while the altruistic agent
continues to contribute to the public pot depending on how close
its marginal private return is to 1, the non-altruistic type prescribes
defection.
We analyzed the decisions of an altruistic agent i modeled using
a level 1 I-DID expanded over 3 time steps. i ascribes the two level
0 models, mentioned previously, to j (see Fig. 9). If i believes with
a probability 1 that j is altruistic, i chooses to contribute for each of
the three steps. This behavior persists when i is unaware of whether
j is altruistic (Fig. 10(a)), and when i assigns a high probability to
j being the non-altruistic type. However, when i believes with a
probability 1 that j is non-altruistic and will thus surely defect, i
chooses to defect to avoid being punished and because its marginal
private return is less than 1. These results demonstrate that the
behavior of our altruistic type resembles that found experimentally.
The non-altruistic level 1 agent chooses to defect regardless of how
likely it believes the other agent to be altruistic. We analyzed the
behavior of a reciprocal agent type that matches expected
cooperation or defection. The reciprocal type"s marginal private return
is similar to that of the non-altruistic type, however, it obtains a
greater payoff when its action is similar to that of the other. We
consider the case when the reciprocal agent i is unsure of whether
j is altruistic and believes that the public pot is likely to be half
full. For this prior belief, i chooses to defect. On receiving an
observation of plenty, i decides to contribute, while an observation of
meager makes it defect (Fig. 10(b)). This is because an
observation of plenty signals that the pot is likely to be greater than half
full, which results from j"s action to contribute. Thus, among the
two models ascribed to j, its type is likely to be altruistic making
it likely that j will contribute again in the next time step. Agent i
therefore chooses to contribute to reciprocate j"s action. An
analogous reasoning leads i to defect when it observes a meager pot.
With one action to go, i believing that j contributes, will choose to
contribute too to avoid punishment regardless of its observations.
Figure 10: (a) An altruistic level 1 agent always contributes. (b) A
reciprocal agent i starts off by defecting followed by choosing to
contribute or defect based on its observation of plenty (indicating that j is
likely altruistic) or meager (j is non-altruistic).
5.3 Strategies in Two-Player Poker
Poker is a popular zero-sum card game that has received much
attention among the AI research community as a testbed [2]. It is
played among M ≥ 2 players, each of whom receives a hand
of cards from a deck. While several flavors of Poker with
varying complexity exist, we consider a simple version in which each
player has three plys during which the player may either exchange
a card (E), keep the existing hand (K), fold (F) and withdraw from
the game, or call (C), requiring all players to show their hands. To
keep matters simple, let M = 2, and each player receive a hand
consisting of a single card drawn from the same suit. Thus, during
a showdown, the player who has the numerically larger card (2 is
the lowest, ace is the highest) wins the pot. During an exchange of
cards, the discarded card is placed either in the L pile, indicating to
the other agent that it was a low numbered card less than 8, or in the
H pile, indicating that the card had a rank greater than or equal to
8. Notice that, for example, if a lower numbered card is discarded,
the probability of receiving a low card in exchange is now reduced.
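The effect of the discard piles on the exchange can be illustrated with a toy calculation. The single 13-card suit, the removal of the advertised discard from the drawable cards, and the uniform draw are our simplifying assumptions about the deck mechanics.

    # Toy illustration: advertising a low discard (L pile) lowers the chance that the
    # next exchanged card drawn is itself low; the deck mechanics here are assumptions.
    def prob_low_draw(own_rank, discarded_low):
        ranks = list(range(2, 15))                # 2..10, J=11, Q=12, K=13, A=14
        remaining = [r for r in ranks if r != own_rank]
        pool = [r for r in remaining if (r < 8) == discarded_low]
        remaining.remove(pool[0])                 # one unknown card of the advertised kind
        return sum(1 for r in remaining if r < 8) / len(remaining)

    print(prob_low_draw(own_rank=8, discarded_low=True))    # ~0.45, lower
    print(prob_low_draw(own_rank=8, discarded_low=False))   # ~0.55, higher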
We show the level 1 I-ID for the simplified two-player Poker in
Fig. 11. We considered two models (personality types) of agent j.
The conservative type believes that it is likely that its opponent has
a high numbered card in its hand. On the other hand, the
aggressive agent j believes with a high probability that its opponent has
a lower numbered card. Thus, the two types differ in their beliefs
over their opponent"s hand. In both these level 0 models, the
opponent is assumed to perform its actions following a fixed, uniform
distribution. With three actions to go, regardless of its hand
(unless it is an ace), the aggressive agent chooses to exchange its card,
with the intent of improving on its current hand. This is because it
believes the other to have a low card, which improves its chances
of getting a high card during the exchange. The conservative agent
chooses to keep its card, no matter its hand because its chances of
getting a high card are slim as it believes that its opponent has one.
Figure 11: (a) Level 1 I-ID of agent i. The observation reveals
information about j"s hand of the previous time step, (b) level 0 IDs of agent
j whose decision nodes are mapped to the chance nodes, A^1_j, A^2_j, in (a).
The policy of a level 1 agent i who believes that each card
except its own has an equal likelihood of being in j"s hand (neutral
personality type) and j could be either an aggressive or
conservative type, is shown in Fig. 12. i"s own hand contains the card
numbered 8. The agent starts by keeping its card. On seeing that
j did not exchange a card (N), i believes with probability 1 that j
is conservative and hence will keep its cards. i responds by either
keeping its card or exchanging it because j is equally likely to have
a lower or higher card. If i observes that j discarded its card into
the L or H pile, i believes that j is aggressive. On observing L,
i realizes that j had a low card, and is likely to have a high card
after its exchange. Because the probability of receiving a low card
is high now, i chooses to keep its card. On observing H,
believing that the probability of receiving a high numbered card is high,
i chooses to exchange its card. In the final step, i chooses to call
regardless of its observation history because its belief that j has a
higher card is not sufficiently high to conclude that its better to fold
and relinquish the payoff. This is partly due to the fact that an
observation of, say, L resets the agent i"s previous time step beliefs
over j"s hand to the low numbered cards only.
6. DISCUSSION
We showed how DIDs may be extended to I-DIDs that enable
online sequential decision-making in uncertain multiagent settings.
Our graphical representation of I-DIDs improves on the previous
Figure 12: A level 1 agent i"s three step policy in the Poker problem.
i starts by believing that j is equally likely to be aggressive or
conservative and could have any card in its hand with equal probability.
work significantly by being more transparent, semantically clear,
and capable of being solved using standard algorithms that target
DIDs. I-DIDs extend NIDs to allow sequential decision-making
over multiple time steps in the presence of other interacting agents.
I-DIDs may be seen as concise graphical representations for
I-POMDPs, providing a way to exploit problem structure and carry
out online decision-making as the agent acts and observes given its
prior beliefs. We are currently investigating ways to solve I-DIDs
approximately with provable bounds on the solution quality.
Acknowledgment: We thank Piotr Gmytrasiewicz for some
useful discussions related to this work. The first author would like
to acknowledge the support of a UGARF grant.
7. REFERENCES
[1] R. J. Aumann. Interactive epistemology i: Knowledge. International
Journal of Game Theory, 28:263-300, 1999.
[2] D. Billings, A. Davidson, J. Schaeffer, and D. Szafron. The challenge
of poker. AIJ, 2001.
[3] A. Brandenburger and E. Dekel. Hierarchies of beliefs and common
knowledge. Journal of Economic Theory, 59:189-198, 1993.
[4] C. Camerer. Behavioral Game Theory: Experiments in Strategic
Interaction. Princeton University Press, 2003.
[5] E. Fehr and S. Gachter. Cooperation and punishment in public goods
experiments. American Economic Review, 90(4):980-994, 2000.
[6] D. Fudenberg and D. K. Levine. The Theory of Learning in Games.
MIT Press, 1998.
[7] D. Fudenberg and J. Tirole. Game Theory. MIT Press, 1991.
[8] Y. Gal and A. Pfeffer. A language for modeling agent"s
decision-making processes in games. In AAMAS, 2003.
[9] P. Gmytrasiewicz and P. Doshi. A framework for sequential planning
in multiagent settings. JAIR, 24:49-79, 2005.
[10] P. Gmytrasiewicz and E. Durfee. Rational coordination in
multi-agent environments. JAAMAS, 3(4):319-350, 2000.
[11] J. C. Harsanyi. Games with incomplete information played by
bayesian players. Management Science, 14(3):159-182, 1967.
[12] R. A. Howard and J. E. Matheson. Influence diagrams. In R. A.
Howard and J. E. Matheson, editors, The Principles and Applications
of Decision Analysis. Strategic Decisions Group, Menlo Park, CA
94025, 1984.
[13] L. Kaelbling, M. Littman, and A. Cassandra. Planning and acting in
partially observable stochastic domains. Artificial Intelligence
Journal, 2, 1998.
[14] D. Koller and B. Milch. Multi-agent influence diagrams for
representing and solving games. In IJCAI, pages 1027-1034, 2001.
[15] K. Polich and P. Gmytrasiewicz. Interactive dynamic influence
diagrams. In GTDT Workshop, AAMAS, 2006.
[16] B. Rathnas., P. Doshi, and P. J. Gmytrasiewicz. Exact solutions to
interactive pomdps using behavioral equivalence. In Autonomous
Agents and Multi-Agent Systems Conference (AAMAS), 2006.
[17] S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach
(Second Edition). Prentice Hall, 2003.
[18] R. D. Shachter. Evaluating influence diagrams. Operations Research,
34(6):871-882, 1986.
[19] D. Suryadi and P. Gmytrasiewicz. Learning models of other agents
using influence diagrams. In UM, 1999.
| influence diagram;multiagent environment;influence diagram network;dynamic influence diagram;multiplexer;independence structure;dependency link;decision-make;online sequential decision-making;interactive dynamic influence diagram;nash equilibrium profile;agent online;network of influence diagram;policy link;multi-agent influence diagram;partially observable multiagent environment;interactive influence diagram;sequential decision-making;agent model;interactive partially observable markov decision process;graphical model |
train_I-66 | Letting loose a SPIDER on a network of POMDPs: Generating quality guaranteed policies | Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are a popular approach for modeling multi-agent systems acting in uncertain domains. Given the significant complexity of solving distributed POMDPs, particularly as we scale up the numbers of agents, one popular approach has focused on approximate solutions. Though this approach is efficient, the algorithms within this approach do not provide any guarantees on solution quality. A second less popular approach focuses on global optimality, but typical results are available only for two agents, and also at considerable computational cost. This paper overcomes the limitations of both these approaches by providing SPIDER, a novel combination of three key features for policy generation in distributed POMDPs: (i) it exploits agent interaction structure given a network of agents (i.e. allowing easier scale-up to larger number of agents); (ii) it uses a combination of heuristics to speedup policy search; and (iii) it allows quality guaranteed approximations, allowing a systematic tradeoff of solution quality for time. Experimental results show orders of magnitude improvement in performance when compared with previous global optimal algorithms. | 1. INTRODUCTION
Distributed Partially Observable Markov Decision Problems
(Distributed POMDPs) are emerging as a popular approach for
modeling sequential decision making in teams operating under
uncertainty [9, 4, 1, 2, 13]. The uncertainty arises on account of
nondeterminism in the outcomes of actions and because the world state
may only be partially (or incorrectly) observable. Unfortunately, as
shown by Bernstein et al. [3], the problem of finding the optimal
joint policy for general distributed POMDPs is NEXP-Complete.
Researchers have attempted two different types of approaches
towards solving these models. The first category consists of highly
efficient approximate techniques that may not reach globally
optimal solutions [2, 9, 11]. The key problem with these techniques
has been their inability to provide any guarantees on the quality
of the solution. In contrast, the second less popular category of
approaches has focused on a global optimal result [13, 5, 10]. Though
these approaches obtain optimal solutions, they typically consider
only two agents. Furthermore, they fail to exploit structure in the
interactions of the agents and hence are severely hampered with
respect to scalability when considering more than two agents.
To address these problems with the existing approaches, we
propose approximate techniques that provide guarantees on the
quality of the solution while focussing on a network of more than two
agents. We first propose the basic SPIDER (Search for Policies
In Distributed EnviRonments) algorithm. There are two key novel
features in SPIDER: (i) it is a branch and bound heuristic search
technique that uses a MDP-based heuristic function to search for an
optimal joint policy; (ii) it exploits network structure of agents by
organizing agents into a Depth First Search (DFS) pseudo tree and
takes advantage of the independence in the different branches of the
DFS tree. We then provide three enhancements to improve the
efficiency of the basic SPIDER algorithm while providing guarantees
on the quality of the solution. The first enhancement uses
abstractions for speedup, but does not sacrifice solution quality. In
particular, it initially performs branch and bound search on abstract
policies and then extends to complete policies. The second
enhancement obtains speedups by sacrificing solution quality, but within
an input parameter that provides the tolerable expected value
difference from the optimal solution. The third enhancement is again
based on bounding the search for efficiency, however with a
tolerance parameter that is provided as a percentage of optimal.
We experimented with the sensor network domain presented in
Nair et al. [10], a domain representative of an important class of
problems with networks of agents working in uncertain
environments. In our experiments, we illustrate that SPIDER dominates
an existing global optimal approach called GOA [10], the only
known global optimal algorithm with demonstrated experimental
results for more than two agents. Furthermore, we demonstrate
that abstraction improves the performance of SPIDER significantly
(while providing optimal solutions). We finally demonstrate a key
feature of SPIDER: by utilizing the approximation enhancements
it enables principled tradeoffs in run-time versus solution quality.
2. DOMAIN: DISTRIBUTED SENSOR NETS
Distributed sensor networks are a large, important class of
domains that motivate our work. This paper focuses on a set of target
tracking problems that arise in certain types of sensor networks [6]
first introduced in [10]. Figure 1 shows a specific problem instance
within this type consisting of three sensors. Here, each sensor node
can scan in one of four directions: North, South, East or West (see
Figure 1). To track a target and obtain associated reward, two
sensors with overlapping scanning areas must coordinate by scanning
the same area simultaneously. In Figure 1, to track a target in Loc11, sensor1 needs to scan 'East' and sensor2 needs to scan 'West' simultaneously. Thus, sensors have to act in a coordinated fashion. We assume that there are two independent targets and that each target's movement is uncertain and unaffected by the sensor agents. Based on the area it is scanning, each sensor receives observations that can have false positives and false negatives. The sensors' observations and transitions are independent of each other's actions, e.g. the observations that sensor1 receives are independent of sensor2's actions. Each agent incurs a cost for scanning whether the target is present or not, but no cost if it turns off. Given the sensors' observational uncertainty, the targets' uncertain transitions and the distributed nature of the sensor nodes, these sensor nets provide a useful class of domains for applying distributed POMDP models.
Figure 1: A 3-chain sensor configuration
3. BACKGROUND
3.1 Model: Network Distributed POMDP
The ND-POMDP model was introduced in [10], motivated by
domains such as the sensor networks introduced in Section 2. It is
defined as the tuple S, A, P, Ω, O, R, b , where S = ×1≤i≤nSi ×
Su is the set of world states. Si refers to the set of local states of
agent i and Su is the set of unaffectable states. Unaffectable state
refers to that part of the world state that cannot be affected by the
agents" actions, e.g. environmental factors like target locations that
no agent can control. A = ×1≤i≤nAi is the set of joint actions,
where Ai is the set of actions for agent i.
ND-POMDP assumes transition independence, where the transition function is defined as P(s, a, s') = P_u(s_u, s'_u) · ∏_{1≤i≤n} P_i(s_i, s_u, a_i, s'_i), where a = ⟨a_1, . . . , a_n⟩ is the joint action performed in state s = ⟨s_1, . . . , s_n, s_u⟩ and s' = ⟨s'_1, . . . , s'_n, s'_u⟩ is the resulting state.
Ω = ×_{1≤i≤n} Ω_i is the set of joint observations, where Ω_i is the set of observations for agent i. Observational independence is assumed in ND-POMDPs, i.e., the joint observation function is defined as O(s, a, ω) = ∏_{1≤i≤n} O_i(s_i, s_u, a_i, ω_i), where s = ⟨s_1, . . . , s_n, s_u⟩ is the world state that results from the agents performing a = ⟨a_1, . . . , a_n⟩ in the previous state, and ω = ⟨ω_1, . . . , ω_n⟩ ∈ Ω is the observation received in state s. This implies that each agent's observation depends only on the unaffectable state, its local action and on its resulting local state.
The reward function, R, is defined as R(s, a) = Σ_l R_l(s_{l1}, . . . , s_{lr}, s_u, a_{l1}, . . . , a_{lr}), where each l could refer to any sub-group of agents and r = |l|. Based on the reward function, an interaction hypergraph is constructed. A hyper-link, l, exists between a subset of agents for all R_l that comprise R. The interaction hypergraph is defined as G = (Ag, E), where the agents, Ag, are the vertices and E = {l | l ⊆ Ag ∧ R_l is a component of R} are the edges.
The initial belief state (distribution over the initial state), b, is defined as b(s) = b_u(s_u) · ∏_{1≤i≤n} b_i(s_i), where b_u and b_i refer to the distribution over the initial unaffectable state and agent i's initial belief state, respectively. The goal in ND-POMDP is to compute the joint policy π = ⟨π_1, . . . , π_n⟩ that maximizes the team's expected reward over a finite horizon T starting from the belief state b.
An ND-POMDP is similar to an n-ary Distributed Constraint
Optimization Problem (DCOP)[8, 12] where the variable at each
node represents the policy selected by an individual agent, πi with
the domain of the variable being the set of all local policies, Πi.
The reward component Rl where |l| = 1 can be thought of as a local constraint, while a reward component Rl where |l| > 1 corresponds to a non-local constraint in the constraint graph.
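To make the factored structure above concrete, the following is a minimal sketch of how an ND-POMDP instance could be represented in Python. All field and method names are our own illustrative choices rather than part of the formal model, and states are assumed to be dictionaries keyed by agent name plus "u" for the unaffectable component.

from dataclasses import dataclass, field
from typing import Callable, Dict, FrozenSet, List

@dataclass
class NDPOMDP:
    agents: List[str]                                    # Ag
    local_states: Dict[str, List[str]]                   # S_i per agent
    unaffectable_states: List[str]                       # S_u
    actions: Dict[str, List[str]]                        # A_i per agent
    observations: Dict[str, List[str]]                   # Omega_i per agent
    trans_u: Callable[[str, str], float]                 # P_u(s_u, s_u')
    trans_i: Dict[str, Callable[[str, str, str, str], float]]   # P_i(s_i, s_u, a_i, s_i')
    obs_i: Dict[str, Callable[[str, str, str, str], float]]     # O_i(s_i, s_u, a_i, w_i)
    reward_components: Dict[FrozenSet[str], Callable[..., float]] = field(default_factory=dict)

    def interaction_hypergraph(self):
        """G = (Ag, E): one hyper-link per reward component R_l."""
        return self.agents, list(self.reward_components.keys())

    def joint_transition(self, s, a, s_next):
        """P(s, a, s') = P_u(s_u, s_u') * prod_i P_i(s_i, s_u, a_i, s_i')."""
        p = self.trans_u(s["u"], s_next["u"])
        for i in self.agents:
            p *= self.trans_i[i](s[i], s["u"], a[i], s_next[i])
        return p

    def joint_reward(self, s, a):
        """R(s, a) = sum over hyper-links l of R_l, each reading only its own agents."""
        total = 0.0
        for link, R_l in self.reward_components.items():
            members = sorted(link)
            total += R_l(*[s[i] for i in members], s["u"], *[a[i] for i in members])
        return total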
3.2 Algorithm: Global Optimal Algorithm (GOA)
In previous work, GOA has been defined as a global optimal
algorithm for ND-POMDPs [10]. We will use GOA in our
experimental comparisons, since GOA is a state-of-the-art global optimal
algorithm, and in fact the only one with experimental results
available for networks of more than two agents. GOA borrows from a
global optimal DCOP algorithm called DPOP [12]. GOA's message
passing follows that of DPOP. The first phase is the UTIL
propagation, where the utility messages, in this case values of policies,
are passed up from the leaves to the root. Value for a policy at an
agent is defined as the sum of best response values from its
children and the joint policy reward associated with the parent policy.
Thus, given a policy for a parent node, GOA requires an agent to
iterate through all its policies, finding the best response policy and
returning the value to the parent - while at the parent node, to find
the best policy, an agent requires its children to return their best
responses to each of its policies. This UTIL propagation process
is repeated at each level in the tree, until the root exhausts all its
policies. The second phase is the VALUE propagation, in which the optimal policies are passed down from the root to the leaves.
GOA takes advantage of the local interactions in the interaction
graph, by pruning out unnecessary joint policy evaluations
(associated with nodes not connected directly in the tree). Since the
interaction graph captures all the reward interactions among agents
and as this algorithm iterates through all the relevant joint policy
evaluations, this algorithm yields a globally optimal solution.
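As a rough executable rendering of the UTIL phase just described, the sketch below performs the same best-response recursion over the tree. It is not the authors' implementation: tree.children, policies and link_value are assumed inputs, with link_value(agent, pi, parent_policy) returning the reward of an agent's local policy against its parent's (parent_policy is None at the root).

def util_value(agent, parent_policy, tree, policies, link_value):
    # Best response of `agent` (and its subtree) to a fixed parent policy.
    best = float("-inf")
    for pi in policies[agent]:
        value = link_value(agent, pi, parent_policy)           # local + link reward
        for child in tree.children(agent):
            value += util_value(child, pi, tree, policies, link_value)
        best = max(best, value)
    return best

def goa_root_value(root, tree, policies, link_value):
    # The root has no parent policy; it simply picks its best local policy.
    return max(
        link_value(root, pi, None)
        + sum(util_value(c, pi, tree, policies, link_value) for c in tree.children(root))
        for pi in policies[root]
    )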
4. SPIDER
As mentioned in Section 3.1, an ND-POMDP can be treated as a
DCOP, where the goal is to compute a joint policy that maximizes
the overall joint reward. The brute-force technique for computing
an optimal policy would be to examine the expected values for all
possible joint policies. The key idea in SPIDER is to avoid
computation of expected values for the entire space of joint policies, by
utilizing upper bounds on the expected values of policies and the
interaction structure of the agents.
Akin to some of the algorithms for DCOP [8, 12], SPIDER has a
pre-processing step that constructs a DFS tree corresponding to the
given interaction structure. Note that these DFS trees are pseudo
trees [12] that allow links between ancestors and children. We
employ the Maximum Constrained Node (MCN) heuristic used in the
DCOP algorithm, ADOPT [8], however other heuristics (such as
MLSP heuristic from [7]) can also be employed. The MCN heuristic tries to place agents with a larger number of constraints at the top of the tree. This tree governs how the search for the optimal joint
policy proceeds in SPIDER. The algorithms presented in this paper are
easily extendable to hyper-trees, however for expository purposes,
we assume binary trees.
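For illustration, a DFS pseudo tree with an MCN-style ordering can be built in a few lines. The sketch below is our own simplification (it produces an ordinary DFS tree and omits back-edges to ancestors) and assumes the interaction graph is given as an adjacency dictionary.

def build_dfs_pseudo_tree(graph):
    # MCN heuristic: start from, and prefer, the most-constrained (highest-degree) nodes.
    root = max(graph, key=lambda a: len(graph[a]))
    parent, order, visited = {root: None}, [], set()

    def dfs(node):
        visited.add(node)
        order.append(node)
        for nxt in sorted(graph[node], key=lambda a: -len(graph[a])):
            if nxt not in visited:
                parent[nxt] = node
                dfs(nxt)

    dfs(root)
    children = {a: [b for b in parent if parent[b] == a] for a in graph}
    return root, parent, children, order

# Example: the 3-chain sensor configuration of Figure 1.
root, parent, children, order = build_dfs_pseudo_tree(
    {"sensor1": ["sensor2"], "sensor2": ["sensor1", "sensor3"], "sensor3": ["sensor2"]})
# root == "sensor2" (the middle agent), matching the pseudo tree used in Figure 2.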
SPIDER is an algorithm for centralized planning and distributed
execution in distributed POMDPs. In this paper, we employ the
following notation to denote policies and expected values:
Ancestors(i) ⇒ agents from i to the root (not including i).
Tree(i) ⇒ agents in the sub-tree (not including i) for which i is the root.
πroot+ ⇒ joint policy of all agents.
πi+ ⇒ joint policy of all agents in Tree(i) ∪ i.
πi− ⇒ joint policy of the agents in Ancestors(i).
πi ⇒ policy of the i-th agent.
ˆv[πi, πi−] ⇒ upper bound on the expected value for πi+ given πi and the policies of ancestor agents, i.e. πi−.
ˆvj[πi, πi−] ⇒ upper bound on the expected value for πi+ from the j-th child.
v[πi, πi−] ⇒ expected value for πi given the policies of ancestor agents, πi−.
v[πi+, πi−] ⇒ expected value for πi+ given the policies of ancestor agents, πi−.
vj[πi+, πi−] ⇒ expected value for πi+ from the j-th child.
Figure 2: Execution of SPIDER, an example
4.1 Outline of SPIDER
SPIDER is based on the idea of branch and bound search, where
the nodes in the search tree represent partial/complete joint
policies. Figure 2 shows an example search tree for the SPIDER
algorithm, using an example of the three agent chain. Before SPIDER
begins its search we create a DFS tree (i.e. pseudo tree) from the
three agent chain, with the middle agent as the root of this tree.
SPIDER exploits the structure of this DFS tree while engaging in
its search. Note that in our example figure, each agent is assigned
a policy with T=2. Thus, each rounded rectangle (search tree node)
indicates a partial/complete joint policy, a rectangle indicates an
agent and the ovals internal to an agent show its policy. Heuristic
or actual expected value for a joint policy is indicated in the top
right corner of the rounded rectangle. If the number is italicized
and underlined, it implies that the actual expected value of the joint
policy is provided.
SPIDER begins with no policy assigned to any of the agents
(shown in level 1 of the search tree). Level 2 of the search tree indicates that the joint policies are sorted based on upper bounds computed for the root agent's policies. Level 3 shows one SPIDER
search node with a complete joint policy (a policy assigned to each
of the agents). The expected value for this joint policy is used to
prune out the nodes in level 2 (the ones with upper bounds < 234).
When creating policies for each non-leaf agent i, SPIDER
potentially performs two steps:
1. Obtaining upper bounds and sorting: In this step, agent i
computes upper bounds on the expected values ˆv[πi, πi−] of the joint policies πi+ corresponding to each of its policies πi and fixed
ancestor policies. An MDP based heuristic is used to compute these
upper bounds on the expected values. Detailed description about
this MDP heuristic is provided in Section 4.2. All policies of agent
i, Πi are then sorted based on these upper bounds (also referred to
as heuristic values henceforth) in descending order. Exploration of
these policies (in step 2 below) are performed in this descending
order. As indicated in the level 2 of the search tree (of Figure 2), all
the joint policies are sorted based on the heuristic values, indicated
in the top right corner of each joint policy. The intuition behind
sorting and then exploring policies in descending order of upper
bounds, is that the policies with higher upper bounds could yield
joint policies with higher expected values.
2. Exploration and Pruning: Exploration implies computing the best response joint policy πi+,∗ corresponding to fixed ancestor policies of agent i, πi−. This is performed by iterating through all policies of agent i, i.e. Πi, and summing two quantities for each policy: (i) the best response for all of i's children (obtained by performing steps 1 and 2 at each of the child nodes); (ii) the expected value obtained by i for fixed policies of ancestors. Thus, exploration of a policy πi yields the actual expected value of a joint policy πi+, represented as v[πi+, πi−]. The policy with the highest expected value is the best response policy.
Pruning refers to avoiding exploring all policies (or computing expected values) at agent i by using the current best expected value, vmax[πi+, πi−]. Henceforth, this vmax[πi+, πi−] will be referred to as the threshold. A policy πi need not be explored if the upper bound for that policy, ˆv[πi, πi−], is less than the threshold. This is because the expected value of the best joint policy attainable for that policy will be less than the threshold.
On the other hand, when considering a leaf agent, SPIDER
computes the best response policy (and consequently its expected value)
corresponding to fixed policies of its ancestors, πi−. This is
accomplished by computing expected values for each of the policies
(corresponding to fixed policies of ancestors) and selecting the highest
expected value policy. In Figure 2, SPIDER assigns best response
policies to leaf agents at level 3. The policy for the left leaf agent is
to perform action East at each time step in the policy, while the
policy for the right leaf agent is to perform Off at each time step.
These best response policies from the leaf agents yield an actual
expected value of 234 for the complete joint policy.
Algorithm 1 provides the pseudo code for SPIDER. This
algorithm outputs the best joint policy, πi+,∗
(with an expected value
greater than threshold) for the agents in Tree(i). Lines 3-8
compute the best response policy of a leaf agent i, while lines 9-23
compute the best response joint policy for agents in Tree(i). This
best response computation for a non-leaf agent i includes: (a)
Sorting of policies (in descending order) based on heuristic values on
line 11; (b) Computing best response policies at each of the
children for fixed policies of agent i in lines 16-20; and (c) Maintaining
Algorithm 1 SPIDER(i, πi−, threshold)
1: πi+,∗ ← null
2: Πi ← GET-ALL-POLICIES (horizon, Ai, Ωi)
3: if IS-LEAF(i) then
4:    for all πi ∈ Πi do
5:       v[πi, πi−] ← JOINT-REWARD (πi, πi−)
6:       if v[πi, πi−] > threshold then
7:          πi+,∗ ← πi
8:          threshold ← v[πi, πi−]
9: else
10:   children ← CHILDREN (i)
11:   ˆΠi ← UPPER-BOUND-SORT(i, Πi, πi−)
12:   for all πi ∈ ˆΠi do
13:      ˜πi+ ← πi
14:      if ˆv[πi, πi−] < threshold then
15:         Go to line 12
16:      for all j ∈ children do
17:         jThres ← threshold − v[πi, πi−] − Σ_{k∈children, k≠j} ˆvk[πi, πi−]
18:         πj+,∗ ← SPIDER(j, πi ∪ πi−, jThres)
19:         ˜πi+ ← ˜πi+ ∪ πj+,∗
20:         ˆvj[πi, πi−] ← v[πj+,∗, πi ∪ πi−]
21:      if v[˜πi+, πi−] > threshold then
22:         threshold ← v[˜πi+, πi−]
23:         πi+,∗ ← ˜πi+
24: return πi+,∗
Algorithm 2 UPPER-BOUND-SORT(i, Πi, πi−)
1: children ← CHILDREN (i)
2: ˆΠi ← null /* Stores the sorted list */
3: for all πi ∈ Πi do
4:    ˆv[πi, πi−] ← JOINT-REWARD (πi, πi−)
5:    for all j ∈ children do
6:       ˆvj[πi, πi−] ← UPPER-BOUND(i, j, πi ∪ πi−)
7:       ˆv[πi, πi−] ← ˆv[πi, πi−] + ˆvj[πi, πi−]
8:    ˆΠi ← INSERT-INTO-SORTED (πi, ˆΠi)
9: return ˆΠi
the best expected value and joint policy in lines 21-23.
Algorithm 2 provides the pseudo code for sorting policies based
on the upper bounds on the expected values of joint policies.
Expected value for an agent i consists of two parts: value obtained
from ancestors and value obtained from its children. Line 4
computes the expected value obtained from ancestors of the agent
(using JOINT-REWARD function), while lines 5-7 compute the
heuristic value from the children. The sum of these two parts yields an
upper bound on the expected value for agent i, and line 8 of the
algorithm sorts the policies based on these upper bounds.
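A compact executable rendering of Algorithms 1 and 2 is sketched below. It is a simplification under stated assumptions, not the authors' code: enumerate_policies, joint_reward (the exact value of an agent's policy against fixed ancestor policies) and upper_bound (the MDP heuristic of Section 4.2) are assumed helpers, and tree.children(agent) returns the agent's children in the DFS pseudo tree.

def spider(agent, ancestor_policies, threshold, tree, enumerate_policies,
           joint_reward, upper_bound):
    best_joint, best_value = None, threshold
    policies = enumerate_policies(agent)

    if not tree.children(agent):                          # leaf agent: plain best response
        for pi in policies:
            v = joint_reward(agent, pi, ancestor_policies)
            if v > best_value:
                best_joint, best_value = {agent: pi}, v
        return best_joint, best_value

    def bounds(pi):                                       # Algorithm 2: own reward + child bounds
        own = joint_reward(agent, pi, ancestor_policies)
        kids = {c: upper_bound(agent, c, {**ancestor_policies, agent: pi})
                for c in tree.children(agent)}
        return own, kids

    scored = sorted(((pi,) + bounds(pi) for pi in policies),
                    key=lambda t: t[1] + sum(t[2].values()), reverse=True)

    for pi, own, kid_bounds in scored:
        if own + sum(kid_bounds.values()) < best_value:   # prune on the upper bound
            continue
        joint, value, feasible = {agent: pi}, own, True
        for c in tree.children(agent):
            rest = sum(v for k, v in kid_bounds.items() if k != c)
            child_threshold = best_value - own - rest      # jThres of Algorithm 1, line 17
            sub, sub_value = spider(c, {**ancestor_policies, agent: pi},
                                    child_threshold, tree, enumerate_policies,
                                    joint_reward, upper_bound)
            if sub is None:                               # child cannot beat its share
                feasible = False
                break
            joint.update(sub)
            value += sub_value
            kid_bounds[c] = sub_value                     # tighten with the actual child value
        if feasible and value > best_value:
            best_joint, best_value = joint, value
    return best_joint, best_value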
4.2 MDP based heuristic function
The heuristic function quickly provides an upper bound on the
expected value obtainable from the agents in Tree(i). The
subtree of agents is a distributed POMDP in itself and the idea here
is to construct a centralized MDP corresponding to the (sub-tree)
distributed POMDP and obtain the expected value of the optimal
policy for this centralized MDP. To reiterate this in terms of the
agents in DFS tree interaction structure, we assume full
observability for the agents in Tree(i) and for fixed policies of the agents in
{Ancestors(i) ∪ i}, we compute the joint value ˆv[πi+, πi−].
We use the following notation for presenting the equations for computing upper bounds/heuristic values (for agents i and k): Let Ei− denote the set of links between agents in {Ancestors(i) ∪ i} and Tree(i), and Ei+ denote the set of links between agents in Tree(i). Also, if l ∈ Ei−, then l1 is the agent in {Ancestors(i) ∪ i} and l2 is the agent in Tree(i) that l connects together. We first compact the standard notation:

o^t_k = O_k(s^{t+1}_k, s^{t+1}_u, π_k(ω^t_k), ω^{t+1}_k)    (1)
p^t_k = P_k(s^t_k, s^t_u, π_k(ω^t_k), s^{t+1}_k) · o^t_k
p^t_u = P(s^t_u, s^{t+1}_u)
s^t_l = ⟨s^t_{l1}, s^t_{l2}, s^t_u⟩ ;  ω^t_l = ⟨ω^t_{l1}, ω^t_{l2}⟩
r^t_l = R_l(s^t_l, π_{l1}(ω^t_{l1}), π_{l2}(ω^t_{l2}))
v^t_l = V^t_{π_l}(s^t_l, s^t_u, ω^t_{l1}, ω^t_{l2})

Depending on the location of agent k in the agent tree we have the following cases:

IF k ∈ {Ancestors(i) ∪ i}:  ˆp^t_k = p^t_k    (2)
IF k ∈ Tree(i):  ˆp^t_k = P_k(s^t_k, s^t_u, π_k(ω^t_k), s^{t+1}_k)
IF l ∈ Ei−:  ˆr^t_l = max_{a_{l2}} R_l(s^t_l, π_{l1}(ω^t_{l1}), a_{l2})
IF l ∈ Ei+:  ˆr^t_l = max_{a_{l1}, a_{l2}} R_l(s^t_l, a_{l1}, a_{l2})

The value function for an agent i executing the joint policy πi+ at time η − 1 is provided by the equation:

V^{η−1}_{πi+}(s^{η−1}, ω^{η−1}) = Σ_{l ∈ Ei−} v^{η−1}_l + Σ_{l ∈ Ei+} v^{η−1}_l    (3)

where v^{η−1}_l = r^{η−1}_l + Σ_{ω^η_l, s^η} p^{η−1}_{l1} p^{η−1}_{l2} p^{η−1}_u v^η_l
Algorithm 3 UPPER-BOUND (i, j, πj−)
1: val ← 0
2: for all l ∈ Ej− ∪ Ej+ do
3:    if l ∈ Ej− then πl1 ← φ
4:    for all s^0_l do
5:       val ← val + startBel[s^0_l] · UPPER-BOUND-TIME (i, s^0_l, j, πl1, ⟨⟩)
6: return val
Algorithm 4 UPPER-BOUND-TIME (i, s^t_l, j, πl1, ω^t_{l1})
1: maxVal ← −∞
2: for all al1, al2 do
3:    if l ∈ Ei− and l ∈ Ej− then al1 ← πl1(ω^t_{l1})
4:    val ← GET-REWARD(s^t_l, al1, al2)
5:    if t < πi.horizon − 1 then
6:       for all s^{t+1}_l, ω^{t+1}_{l1} do
7:          futVal ← p^t_u · ˆp^t_{l1} · ˆp^t_{l2}
8:          futVal ← futVal · UPPER-BOUND-TIME(s^{t+1}_l, j, πl1, ω^t_{l1} ∪ ω^{t+1}_{l1})
9:          val ← val + futVal
10:   if val > maxVal then maxVal ← val
11: return maxVal
Upper bound on the expected value for a link is computed by
modifying the equation 3 to reflect the full observability
assumption. This involves removing the observational probability term
for agents in Tree(i) and maximizing the future value ˆv^η_l over the
actions of those agents (in Tree(i)). Thus, the equation for the
computation of the upper bound on a link l, is as follows:
IF l ∈ Ei−:  ˆv^{η−1}_l = ˆr^{η−1}_l + max_{a_{l2}} Σ_{ω^η_{l1}, s^η_l} ˆp^{η−1}_{l1} ˆp^{η−1}_{l2} p^{η−1}_u ˆv^η_l
IF l ∈ Ei+:  ˆv^{η−1}_l = ˆr^{η−1}_l + max_{a_{l1}, a_{l2}} Σ_{s^η_l} ˆp^{η−1}_{l1} ˆp^{η−1}_{l2} p^{η−1}_u ˆv^η_l
Algorithm 3 and Algorithm 4 provide the algorithm for computing the upper bound for child j of agent i, using the equations described
above. While Algorithm 4 computes the upper bound on a link
given the starting state, Algorithm 3 sums the upper bound values
computed over each of the links in Ei− ∪ Ei+.
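The essence of the relaxation is that the optimal value of the fully observable (MDP) version of a model can never be smaller than the value achievable under partial observability. The generic sketch below illustrates this on a single centralized model via backward induction; it deliberately abstracts away the per-link bookkeeping of Algorithms 3 and 4, and the inputs P, R and start_belief are assumptions for illustration.

def mdp_upper_bound(states, actions, P, R, horizon, start_belief):
    # P[s][a] is a dict {s2: prob}; R[s][a] is a scalar immediate reward.
    V = {s: 0.0 for s in states}                       # V_T = 0
    for _ in range(horizon):                           # backward induction over time
        V = {s: max(R[s][a] + sum(p * V[s2] for s2, p in P[s][a].items())
                    for a in actions)
             for s in states}
    # Expected optimal MDP value from the initial belief upper-bounds the POMDP value.
    return sum(start_belief[s] * V[s] for s in states)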
4.3 Abstraction
Algorithm 5 SPIDER-ABS(i, πi−, threshold)
1: πi+,∗ ← null
2: Πi ← GET-POLICIES (<>, 1)
3: if IS-LEAF(i) then
4:    for all πi ∈ Πi do
5:       absHeuristic ← GET-ABS-HEURISTIC (πi, πi−)
6:       absHeuristic ← absHeuristic · (timeHorizon − πi.horizon)
7:       if πi.horizon = timeHorizon and πi.absNodes = 0 then
8:          v[πi, πi−] ← JOINT-REWARD (πi, πi−)
9:          if v[πi, πi−] > threshold then
10:            πi+,∗ ← πi; threshold ← v[πi, πi−]
11:      else if v[πi, πi−] + absHeuristic > threshold then
12:         ˆΠi ← EXTEND-POLICY (πi, πi.absNodes + 1)
13:         Πi ← Πi ∪ INSERT-SORTED-POLICIES (ˆΠi)
14:         REMOVE(πi)
15: else
16:   children ← CHILDREN (i)
17:   Πi ← UPPER-BOUND-SORT(i, Πi, πi−)
18:   for all πi ∈ Πi do
19:      ˜πi+ ← πi
20:      absHeuristic ← GET-ABS-HEURISTIC (πi, πi−)
21:      absHeuristic ← absHeuristic · (timeHorizon − πi.horizon)
22:      if πi.horizon = timeHorizon and πi.absNodes = 0 then
23:         if ˆv[πi, πi−] < threshold and πi.absNodes = 0 then
24:            Go to line 18
25:         for all j ∈ children do
26:            jThres ← threshold − v[πi, πi−] − Σ_{k∈children, k≠j} ˆvk[πi, πi−]
27:            πj+,∗ ← SPIDER(j, πi ∪ πi−, jThres)
28:            ˜πi+ ← ˜πi+ ∪ πj+,∗; ˆvj[πi, πi−] ← v[πj+,∗, πi ∪ πi−]
29:         if v[˜πi+, πi−] > threshold then
30:            threshold ← v[˜πi+, πi−]; πi+,∗ ← ˜πi+
31:      else if ˆv[πi+, πi−] + absHeuristic > threshold then
32:         ˆΠi ← EXTEND-POLICY (πi, πi.absNodes + 1)
33:         Πi ← Πi ∪ INSERT-SORTED-POLICIES (ˆΠi)
34:         REMOVE(πi)
35: return πi+,∗
In SPIDER, the exploration/pruning phase can only begin after
the heuristic (or upper bound) computation and sorting for the
policies has ended. We provide an approach to possibly circumvent the
exploration of a group of policies based on heuristic computation
for one abstract policy, thus leading to an improvement in runtime
performance (without loss in solution quality). The important steps
in this technique are defining the abstract policy and how heuristic
values are computed for the abstract policies. In this paper, we
propose two types of abstraction:
1. Horizon Based Abstraction (HBA): Here, the abstract policy is
defined as a shorter horizon policy. It represents a group of longer
horizon policies that have the same actions as the abstract policy
for times less than or equal to the horizon of the abstract policy.
In Figure 3(a), a T=1 abstract policy that performs the East action represents a group of T=2 policies that perform East in the first time step.
For HBA, there are two parts to heuristic computation:
(a) Computing the upper bound for the horizon of the abstract
policy. This is the same as the heuristic computation defined by the GET-HEURISTIC() algorithm for SPIDER, however with a shorter time horizon (the horizon of the abstract policy).
(b) Computing the maximum possible reward that can be
accumulated in one time step (using GET-ABS-HEURISTIC()) and
multiplying it by the number of time steps to time horizon. This
maximum possible reward (for one time step) is obtained by iterating
through all the actions of all the agents in Tree(i) and computing
the maximum joint reward for any joint action.
The sum of (a) and (b) is the heuristic value for an HBA abstract policy.
2. Node Based Abstraction (NBA): Here an abstract policy is
obtained by not associating actions to certain nodes of the policy tree.
Unlike in HBA, this implies multiple levels of abstraction. This is
illustrated in Figure 3(b), where there are T=2 policies that do not
have an action for observation 'TP'. These incomplete T=2 policies are abstractions for complete T=2 policies. Increased levels of abstraction lead to faster computation of a complete joint policy,
πroot+
and also to shorter heuristic computation and exploration,
pruning phases. For NBA, the heuristic computation is similar to
that of a normal policy, except in cases where there is no action
associated with policy nodes. In such cases, the immediate reward
is taken as Rmax (maximum reward for any action).
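Parts (a) and (b) above combine into a one-line heuristic once a short-horizon bound and the best one-step joint reward are available. The sketch below is an illustration only: short_horizon_bound, reward_fn, joint_actions and the assumption that the abstract policy object exposes its horizon are all ours.

def max_one_step_reward(joint_actions, reward_fn, states):
    # GET-ABS-HEURISTIC: the best joint reward achievable in any single time step.
    return max(reward_fn(s, a) for s in states for a in joint_actions)

def hba_heuristic(abstract_policy, time_horizon, short_horizon_bound, max_step_reward):
    # (a) bound over the abstract horizon + (b) remaining steps * best one-step reward.
    remaining = time_horizon - abstract_policy.horizon
    return short_horizon_bound(abstract_policy) + remaining * max_step_reward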
We combine both the abstraction techniques mentioned above
into one technique, SPIDER-ABS. Algorithm 5 provides the
algorithm for this abstraction technique. For computing optimal joint
policy with SPIDER-ABS, a non-leaf agent i initially examines all
abstract T=1 policies (line 2) and sorts them based on abstract
policy heuristic computations (line 17). The abstraction horizon is
gradually increased and these abstract policies are then explored
in descending order of heuristic values and ones that have heuristic
values less than the threshold are pruned (lines 23-24). Exploration
in SPIDER-ABS has the same definition as in SPIDER if the policy
being explored has a horizon of policy computation which is equal
to the actual time horizon and if all the nodes of the policy have an
action associated with them (lines 25-30). However, if those
conditions are not met, then it is substituted by a group of policies that it
represents (using EXTEND-POLICY () function) (lines 31-32).
EXTEND-POLICY() function is also responsible for
initializing the horizon and absNodes of a policy. absNodes
represents the number of nodes at the last level in the policy tree,
that do not have an action assigned to them. If πi.absNodes =
|Ωi|πi.horizon−1
(i.e. total number of policy nodes possible at
πi.horizon) , then πi.absNodes is set to zero and πi.horizon is
increased by 1. Otherwise, πi.absNodes is increased by 1. Thus,
this function combines both HBA and NBA by using the policy
variables, horizon and absNodes. Before substituting the abstract
policy with a group of policies, those policies are sorted based on
heuristic values (line 33). A similar type of abstraction-based best response computation is adopted at leaf agents (lines 3-14).
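The horizon/absNodes bookkeeping described above can be summarized by the small transition sketched below; it is our own rendering of that rule rather than the EXTEND-POLICY implementation itself.

def advance_abstraction(horizon, abs_nodes, num_observations):
    # When every last-level node has been filled, grow the horizon (HBA step);
    # otherwise fill in one more last-level node (NBA step).
    if abs_nodes == num_observations ** (horizon - 1):
        return horizon + 1, 0
    return horizon, abs_nodes + 1

# Example with |Omega_i| = 2:
# (1,0) -> (1,1) -> (2,0) -> (2,1) -> (2,2) -> (3,0) -> ...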
4.4 Value ApproXimation (VAX)
In this section, we present an approximate enhancement to
SPIDER called VAX. The input to this technique is an approximation
parameter ε, which determines the difference from the optimal
solution quality. This approximation parameter is used at each agent
for pruning out joint policies. The pruning mechanism in SPIDER
and SPIDER-Abs dictates that a joint policy be pruned only if the
threshold is strictly greater than the heuristic value. However, the
826 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
Figure 3: Example of abstraction for (a) HBA (Horizon Based Abstraction) and (b) NBA (Node Based Abstraction)
idea in this technique is to prune out a joint policy if the following condition is satisfied: threshold + ε > ˆv[πi, πi−]. Apart from the pruning condition, VAX is the same as SPIDER/SPIDER-ABS.
In the example of Figure 2, if the heuristic value for the second
joint policy (or second search tree node) in level 2 were 238 instead
of 232, then that policy could not be pruned using SPIDER or
SPIDER-Abs. However, in VAX with an approximation parameter
of 5, the joint policy in consideration would also be pruned. This is
because the threshold (234) at that juncture plus the approximation
parameter (5), i.e. 239 would have been greater than the heuristic
value for that joint policy (238). It can be noted from the example
(just discussed) that this kind of pruning can lead to fewer
explorations and hence lead to an improvement in the overall run-time
performance. However, this can entail a sacrifice in the quality of
the solution because this technique can prune out a candidate
optimal solution. A bound on the error introduced by this approximate
algorithm, as a function of ε, is provided by Proposition 3.
4.5 Percentage ApproXimation (PAX)
In this section, we present the second approximation
enhancement over SPIDER called PAX. Input to this technique is a
parameter, δ that represents the minimum percentage of the optimal
solution quality that is desired. Output of this technique is a policy
with an expected value that is at least δ% of the optimal solution
quality. A policy is pruned if the following condition is satisfied:
threshold > (δ/100) · ˆv[πi, πi−]. Like in VAX, the only difference
between PAX and SPIDER/SPIDER-ABS is this pruning condition.
Again in Figure 2, if the heuristic value for the second search
tree node in level 2 were 238 instead of 232, then PAX with an
input parameter of 98% would be able to prune that search tree node
(since (98/100) · 238 < 234). This type of pruning leads to fewer
explorations and hence an improvement in run-time performance, while
potentially leading to a loss in quality of the solution. Proposition 4
provides the bound on quality loss.
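The three algorithms therefore differ only in their pruning test. The short sketch below spells the tests out and replays the numbers from the example above; it is an illustration of the conditions, not the authors' code.

def prune_spider(upper_bound, threshold):
    return upper_bound < threshold

def prune_vax(upper_bound, threshold, epsilon):
    # VAX (Section 4.4): allow pruning of candidates within epsilon of the threshold.
    return upper_bound < threshold + epsilon

def prune_pax(upper_bound, threshold, delta):
    # PAX (Section 4.5): delta is a percentage of optimal; prune if even delta%
    # of the upper bound cannot beat the threshold.
    return (delta / 100.0) * upper_bound < threshold

# Example from Figure 2 with threshold 234 and a hypothetical bound of 238:
assert not prune_spider(238, 234)
assert prune_vax(238, 234, epsilon=5)        # 238 < 239
assert prune_pax(238, 234, delta=98)         # 233.24 < 234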
4.6 Theoretical Results
PROPOSITION 1. The heuristic provided using the centralized MDP
heuristic is admissible.
Proof. For the value provided by the heuristic to be admissible, it should be an over-estimate of the expected value for a joint policy. Thus, we need to show that for l ∈ Ei+ ∪ Ei−: ˆv^t_l ≥ v^t_l (refer to the notation in Section 4.2). We use mathematical induction on t to prove this.
Base case: t = T − 1. Irrespective of whether l ∈ Ei− or l ∈ Ei+, ˆr^t_l is computed by maximizing over all actions of the agents in Tree(i), while r^t_l is computed for fixed policies of the same agents. Hence, ˆr^t_l ≥ r^t_l and also ˆv^t_l ≥ v^t_l.
Assumption: The proposition holds for t = η, where 1 ≤ η < T − 1. We now have to prove that the proposition holds for t = η − 1. We show the proof for l ∈ Ei−; similar reasoning can be adopted to prove it for l ∈ Ei+. The heuristic value function for l ∈ Ei− is provided by the following equation:

ˆv^{η−1}_l = ˆr^{η−1}_l + max_{a_{l2}} Σ_{ω^η_{l1}, s^η_l} ˆp^{η−1}_{l1} ˆp^{η−1}_{l2} p^{η−1}_u ˆv^η_l

Rewriting the RHS and using Eqn (2) (in Section 4.2):

= ˆr^{η−1}_l + max_{a_{l2}} Σ_{ω^η_{l1}, s^η_l} p^{η−1}_u p^{η−1}_{l1} ˆp^{η−1}_{l2} ˆv^η_l
= ˆr^{η−1}_l + Σ_{ω^η_{l1}, s^η_l} p^{η−1}_u p^{η−1}_{l1} max_{a_{l2}} ˆp^{η−1}_{l2} ˆv^η_l

Since max_{a_{l2}} ˆp^{η−1}_{l2} ˆv^η_l ≥ Σ_{ω_{l2}} o^{η−1}_{l2} ˆp^{η−1}_{l2} ˆv^η_l and p^{η−1}_{l2} = o^{η−1}_{l2} ˆp^{η−1}_{l2}:

≥ ˆr^{η−1}_l + Σ_{ω^η_{l1}, s^η_l} p^{η−1}_u p^{η−1}_{l1} Σ_{ω_{l2}} p^{η−1}_{l2} ˆv^η_l

Since ˆv^η_l ≥ v^η_l (from the assumption):

≥ ˆr^{η−1}_l + Σ_{ω^η_{l1}, s^η_l} p^{η−1}_u p^{η−1}_{l1} Σ_{ω_{l2}} p^{η−1}_{l2} v^η_l

Since ˆr^{η−1}_l ≥ r^{η−1}_l (by definition):

≥ r^{η−1}_l + Σ_{ω^η_{l1}, s^η_l} p^{η−1}_u p^{η−1}_{l1} Σ_{ω_{l2}} p^{η−1}_{l2} v^η_l
= r^{η−1}_l + Σ_{(ω^η_l, s^η_l)} p^{η−1}_u p^{η−1}_{l1} p^{η−1}_{l2} v^η_l = v^{η−1}_l

Thus proved.
PROPOSITION 2. SPIDER provides an optimal solution.
Proof. SPIDER examines all possible joint policies given the
interaction structure of the agents. The only exception is when
a joint policy is pruned based on the heuristic value. Thus, as long
as a candidate optimal policy is not pruned, SPIDER will return an
optimal policy. As proved in Proposition 1, the heuristic value for a joint policy is always an upper bound on its expected value. Hence when a joint policy
is pruned, it cannot be an optimal solution.
PROPOSITION 3. The error bound on the solution quality for VAX (implemented over SPIDER-ABS) with an approximation parameter of ε is ρε, where ρ is the number of leaf nodes in the DFS tree.
Proof. We prove this proposition using mathematical induction on the depth of the DFS tree.
Base case: depth = 1 (i.e. one node). The best response is computed by iterating through all policies, Πk. A policy πk is pruned if ˆv[πk, πk−] < threshold + ε. Thus the best response policy computed by VAX would be at most ε away from the optimal best response. Hence the proposition holds for the base case.
Assumption: The proposition holds for d, where 1 ≤ depth ≤ d. We now have to prove that the proposition holds for d + 1.
Without loss of generality, let us assume that the root node of this tree has k children. Each of these children is of depth ≤ d, and hence from the assumption, the error introduced in the kth child is ρk · ε, where ρk is the number of leaf nodes in the kth child of the root. Therefore, ρ = Σk ρk, where ρ is the number of leaf nodes in the tree.
In SPIDER-ABS, the threshold at the root agent is threshspider = Σk v[πk+, πk−]. However, with VAX the threshold at the root agent will be (in the worst case) threshvax = Σk v[πk+, πk−] − Σk ρk · ε. Hence, with VAX a joint policy is pruned at the root agent if ˆv[πroot, πroot−] < threshvax + ε ⇒ ˆv[πroot, πroot−] < threshspider − ((Σk ρk) − 1) · ε, so the value lost relative to SPIDER-ABS is bounded by (Σk ρk) · ε = ρε. Hence proved.
PROPOSITION 4. For PAX (implemented over SPIDER-ABS) with an input parameter of δ, the solution quality is at least (δ/100) · v[πroot+,∗], where v[πroot+,∗] denotes the optimal solution quality.
Proof. We prove this proposition using mathematical induction on the depth of the DFS tree.
Base case: depth = 1 (i.e. one node). The best response is computed by iterating through all policies, Πk. A policy πk is pruned if (δ/100) · ˆv[πk, πk−] < threshold. Thus the best response policy computed by PAX would be at least δ/100 times the optimal best response. Hence the proposition holds for the base case.
Assumption: The proposition holds for d, where 1 ≤ depth ≤ d. We now have to prove that the proposition holds for d + 1.
Without loss of generality, let us assume that the root node of this tree has k children. Each of these children is of depth ≤ d, and hence from the assumption, the solution quality in the kth child is at least (δ/100) · v[πk+,∗, πk−] for PAX. With SPIDER-ABS, a joint policy is pruned at the root agent if ˆv[πroot, πroot−] < Σk v[πk+,∗, πk−]. However, with PAX a joint policy is pruned if (δ/100) · ˆv[πroot, πroot−] < Σk (δ/100) · v[πk+,∗, πk−] ⇒ ˆv[πroot, πroot−] < Σk v[πk+,∗, πk−]. Since the pruning condition at the root agent in PAX is the same as the one in SPIDER-ABS, there is no error introduced at the root agent and all the error is introduced in the children. Thus, the overall solution quality is at least δ/100 of the optimal solution. Hence proved.
5. EXPERIMENTAL RESULTS
All our experiments were conducted on the sensor network
domain from Section 2. The five network configurations employed
are shown in Figure 4. Algorithms that we experimented with
are GOA, SPIDER, SPIDER-ABS, PAX and VAX. We compare
against GOA because it is the only global optimal algorithm that
considers more than two agents. We performed two sets of
experiments: (i) firstly, we compared the run-time performance of the
above algorithms and (ii) secondly, we experimented with PAX and
VAX to study the tradeoff between run-time and solution quality.
Experiments were terminated after 10000 seconds.1
Figure 5(a) provides run-time comparisons between the optimal
algorithms GOA, SPIDER, SPIDER-Abs and the approximate
algorithms, PAX (with its parameter set to 30) and VAX (with its parameter set to 80). The X-axis denotes the
1 Machine specs for all experiments: Intel Xeon 3.6 GHz processor, 2 GB RAM.
sensor network configuration used, while Y-axis indicates the
runtime (on a log-scale). The time horizon of policy computation was
3. For each configuration (3-chain, 4-chain, 4-star and 5-star), there
are five bars indicating the time taken by GOA, SPIDER,
SPIDERAbs, PAX and VAX. GOA did not terminate within the time limit
for 4-star and 5-star configurations. SPIDER-Abs dominated the
SPIDER and GOA for all the configurations. For instance, in the
3-chain configuration, SPIDER-ABS provides a 230-fold speedup over GOA and a 2-fold speedup over SPIDER, and for the 4-chain configuration it provides a 58-fold speedup over GOA and a 2-fold speedup
over SPIDER. The two approximation approaches, VAX and PAX
provided further improvement in performance over SPIDER-Abs.
For instance, in the 5-star configuration VAX provides a 15-fold
speedup and PAX provides an 8-fold speedup over SPIDER-Abs.
Figure 5(b) provides a comparison of the solution quality
obtained using the different algorithms for the problems tested in
Figure 5(a). X-axis denotes the sensor network configuration while
Y-axis indicates the solution quality. Since GOA, SPIDER, and
SPIDER-Abs are all global optimal algorithms, the solution
quality is the same for all those algorithms. For 5-P configuration,
the global optimal algorithms did not terminate within the limit of
10000 seconds, so the bar for optimal quality indicates an upper
bound on the optimal solution quality. With both the
approximations, we obtained a solution quality that was close to the optimal
solution quality. In 3-chain and 4-star configurations, it is
remarkable that both PAX and VAX obtained almost the same actual
quality as the global optimal algorithms, despite the approximation
parameters ε and δ. For other configurations as well, the loss in quality
was less than 20% of the optimal solution quality.
Figure 5(c) provides the time to solution with PAX (for varying values of δ). The X-axis denotes the approximation parameter δ
(percentage to optimal) used, while Y-axis denotes the time taken to
compute the solution (on a log-scale). The time horizon for all
the configurations was 4. As δ was decreased from 70 to 30, the
time to solution decreased drastically. For instance, in the 3-chain
case there was a total speedup of 170-fold when the δ was changed
from 70 to 30. Interestingly, even with a low δ of 30%, the actual
solution quality remained equal to the one obtained at 70%.
Figure 5(d) provides the time to solution for all the
configurations with VAX (for varying epsilons). X-axis denotes the
approximation parameter ε used, while the Y-axis denotes the time taken to compute the solution (on a log-scale). The time horizon for all the configurations was 4. As ε was increased, the time to solution decreased drastically. For instance, in the 4-star case there was a total speedup of 73-fold when ε was changed from 60 to 140. Again, the actual solution quality did not change with varying ε.
Figure 4: Sensor network configurations
Figure 5: Comparison of GOA, SPIDER, SPIDER-Abs and VAX for T = 3 on (a) Runtime and (b) Solution quality; (c) Time to solution for PAX
with varying percentage to optimal for T=4 (d) Time to solution for VAX with varying epsilon for T=4
6. SUMMARY AND RELATED WORK
This paper presents four algorithms SPIDER, SPIDER-ABS, PAX
and VAX that provide a novel combination of features for
policy search in distributed POMDPs: (i) exploiting agent interaction
structure given a network of agents (i.e. easier scale-up to larger
number of agents); (ii) using branch and bound search with an MDP
based heuristic function; (iii) utilizing abstraction to improve
runtime performance without sacrificing solution quality; (iv)
providing a priori percentage bounds on quality of solutions using PAX;
and (v) providing expected value bounds on the quality of solutions
using VAX. These features allow for systematic tradeoff of solution
quality for run-time in networks of agents operating under
uncertainty. Experimental results show orders of magnitude
improvement in performance over previous global optimal algorithms.
Researchers have typically employed two types of techniques
for solving distributed POMDPs. The first set of techniques
compute global optimal solutions. Hansen et al. [5] present an
algorithm based on dynamic programming and iterated elimination of
dominant policies, that provides optimal solutions for distributed
POMDPs. Szer et al. [13] provide an optimal heuristic search
method for solving Decentralized POMDPs. This algorithm is based
on the combination of a classical heuristic search algorithm, A∗, and decentralized control theory. The key differences between SPIDER
and MAA* are: (a) Enhancements to SPIDER (VAX and PAX)
provide for quality guaranteed approximations, while MAA* is a
global optimal algorithm and hence involves significant
computational complexity; (b) Due to MAA*'s inability to exploit
interaction structure, it was illustrated only with two agents. However,
SPIDER has been illustrated for networks of agents; and (c)
SPIDER explores the joint policy one agent at a time, while MAA*
expands it one time step at a time (simultaneously for all the agents).
The second set of techniques seeks approximate policies. Emery-Montemerlo et al. [4] approximate POSGs as a series of one-step
Bayesian games using heuristics to approximate future value,
trading off limited lookahead for computational efficiency, resulting in
locally optimal policies (with respect to the selected heuristic). Nair
et al. [9]'s JESP algorithm uses dynamic programming to reach a
local optimum solution for finite horizon decentralized POMDPs.
Peshkin et al. [11] and Bernstein et al. [2] are examples of policy
search techniques that search for locally optimal policies. Though
all the above techniques improve the efficiency of policy
computation considerably, they are unable to provide error bounds on the
quality of the solution. This aspect of quality bounds differentiates
SPIDER from all the above techniques.
Acknowledgements. This material is based upon work
supported by the Defense Advanced Research Projects Agency (DARPA),
through the Department of the Interior, NBC, Acquisition Services
Division under Contract No. NBCHD030010. The views and
conclusions contained in this document are those of the authors, and
should not be interpreted as representing the official policies, either
expressed or implied, of the Defense Advanced Research Projects
Agency or the U.S. Government.
7. REFERENCES
[1] R. Becker, S. Zilberstein, V. Lesser, and C.V. Goldman.
Solving transition independent decentralized Markov
decision processes. JAIR, 22:423-455, 2004.
[2] D. S. Bernstein, E.A. Hansen, and S. Zilberstein. Bounded
policy iteration for decentralized POMDPs. In IJCAI, 2005.
[3] D. S. Bernstein, S. Zilberstein, and N. Immerman. The
complexity of decentralized control of MDPs. In UAI, 2000.
[4] R. Emery-Montemerlo, G. Gordon, J. Schneider, and
S. Thrun. Approximate solutions for partially observable
stochastic games with common payoffs. In AAMAS, 2004.
[5] E. Hansen, D. Bernstein, and S. Zilberstein. Dynamic
programming for partially observable stochastic games. In
AAAI, 2004.
[6] V. Lesser, C. Ortiz, and M. Tambe. Distributed sensor nets:
A multiagent perspective. Kluwer, 2003.
[7] R. Maheswaran, M. Tambe, E. Bowring, J. Pearce, and
P. Varakantham. Taking dcop to the real world : Efficient
complete solutions for distributed event scheduling. In
AAMAS, 2004.
[8] P. J. Modi, W. Shen, M. Tambe, and M. Yokoo. An
asynchronous complete method for distributed constraint
optimization. In AAMAS, 2003.
[9] R. Nair, D. Pynadath, M. Yokoo, M. Tambe, and S. Marsella.
Taming decentralized POMDPs: Towards efficient policy
computation for multiagent settings. In IJCAI, 2003.
[10] R. Nair, P. Varakantham, M. Tambe, and M. Yokoo.
Networked distributed POMDPs: A synthesis of distributed
constraint optimization and POMDPs. In AAAI, 2005.
[11] L. Peshkin, N. Meuleau, K.-E. Kim, and L. Kaelbling.
Learning to cooperate via policy search. In UAI, 2000.
[12] A. Petcu and B. Faltings. A scalable method for multiagent
constraint optimization. In IJCAI, 2005.
[13] D. Szer, F. Charpillet, and S. Zilberstein. MAA*: A heuristic
search algorithm for solving decentralized POMDPs. In
IJCAI, 2005.
The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 829 | quality guaranteed approximation;policy search;branch and bound heuristic search technique;globally optimal solution;distributed partially observable markov decision problem;global optimality;network structure;heuristic;partially observable markov decision process;distributed sensor network;pomdp;approximate solution;multi-agent system;network of agent;maximum constrained node;distribute pomdp;agent network;depth first search;overall joint reward;quality guaranteed policy;optimal joint policy;heuristic function;network;uncertain domain;agent interaction structure |
train_I-68 | On Opportunistic Techniques for Solving Decentralized Markov Decision Processes with Temporal Constraints | Decentralized Markov Decision Processes (DEC-MDPs) are a popular model of agent-coordination problems in domains with uncertainty and time constraints but very difficult to solve. In this paper, we improve a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to larger DEC-MDPs. Our heuristic solution method, called Value Function Propagation (VFP), combines two orthogonal improvements of OC-DEC-MDP. First, it speeds up OC-DECMDP by an order of magnitude by maintaining and manipulating a value function for each state (as a function of time) rather than a separate value for each pair of sate and time interval. Furthermore, it achieves better solution qualities than OC-DEC-MDP because, as our analytical results show, it does not overestimate the expected total reward like OC-DEC- MDP. We test both improvements independently in a crisis-management domain as well as for other types of domains. Our experimental results demonstrate a significant speedup of VFP over OC-DEC-MDP as well as higher solution qualities in a variety of situations. | 1. INTRODUCTION
The development of algorithms for effective coordination of
multiple agents acting as a team in uncertain and time critical domains
has recently become a very active research field with potential
applications ranging from coordination of agents during a hostage
rescue mission [11] to the coordination of Autonomous Mars
Exploration Rovers [2]. Because of the uncertain and dynamic
characteristics of such domains, decision-theoretic models have received
a lot of attention in recent years, mainly thanks to their
expressiveness and the ability to reason about the utility of actions over
time.
Key decision-theoretic models that have become popular in the
literature include Decentralized Markov Decision Processes
(DECMDPs) and Decentralized, Partially Observable Markov Decision
Processes (DEC-POMDPs). Unfortunately, solving these models
optimally has been proven to be NEXP-complete [3], hence more
tractable subclasses of these models have been the subject of
intensive research. In particular, Network Distributed POMDPs [13], which assume that not all the agents interact with each other, Transition Independent DEC-MDPs [2], which assume that the transition function is decomposable into local transition functions, or DEC-MDPs with Event Driven Interactions [1], which assume that interactions between agents happen at fixed time points, constitute good examples of such subclasses.
these subclasses have demonstrated promising results, domains on
which these algorithms run are still small and time horizons are
limited to only a few time ticks.
To remedy that, locally optimal algorithms have been proposed
[12] [4] [5]. Opportunity Cost DEC-MDP [4] [5], referred to as OC-DEC-MDP, is particularly notable, as it has been
shown to scale up to domains with hundreds of tasks and double
digit time horizons. Additionally, OC-DEC-MDP is unique in its
ability to address both temporal constraints and uncertain method
execution durations, which is an important factor for real-world
domains. OC-DEC-MDP is able to scale up to such domains mainly
because instead of searching for the globally optimal solution, it
carries out a series of policy iterations; in each iteration it performs
a value iteration that reuses the data computed during the previous
policy iteration. However, OC-DEC-MDP is still slow, especially
as the time horizon and the number of methods approach large
values. The reason for high runtimes of OC-DEC-MDP for such
domains is a consequence of its huge state space, i.e., OC-DEC-MDP
introduces a separate state for each possible pair of method and
method execution interval. Furthermore, OC-DEC-MDP
overestimates the reward that a method expects to receive for enabling
the execution of future methods. This reward, also referred to as
the opportunity cost, plays a crucial role in agent decision making,
and as we show later, its overestimation leads to highly suboptimal
policies.
In this context, we present VFP (= Value Function Propagation),
an efficient solution technique for the DEC-MDP model with
temporal constraints and uncertain method execution durations, that
builds on the success of OC-DEC-MDP. VFP introduces our two
orthogonal ideas: First, similarly to [7] [9] and [10], we maintain
and manipulate a value function over time for each method rather
than a separate value for each pair of method and time interval.
Such representation allows us to group the time points for which
the value function changes at the same rate (= its slope is
constant), which results in fast, functional propagation of value
functions. Second, we prove (both theoretically and empirically) that
OC-DEC-MDP overestimates the opportunity cost, and to remedy that, we introduce a set of heuristics that correct the opportunity cost overestimation problem.
This paper is organized as follows: In section 2 we motivate this
research by introducing a civilian rescue domain where a team of
fire- brigades must coordinate in order to rescue civilians trapped in
a burning building. In section 3 we provide a detailed description of
our DEC-MDP model with Temporal Constraints and in section 4
we discuss how one could solve the problems encoded in our model
using globally optimal and locally optimal solvers. Sections 5 and
6 discuss the two orthogonal improvements to the state-of-the-art
OC-DEC-MDP algorithm that our VFP algorithm implements.
Finally, in section 7 we demonstrate empirically the impact of our two
orthogonal improvements, i.e., we show that: (i) The new
heuristics correct the opportunity cost overestimation problem leading to
higher quality policies, and (ii) By allowing for a systematic
tradeoff of solution quality for time, the VFP algorithm runs much faster
than the OC-DEC-MDP algorithm.
2. MOTIVATING EXAMPLE
We are interested in domains where multiple agents must
coordinate their plans over time, despite uncertainty in plan execution
duration and outcome. One example domain is large-scale disaster,
like a fire in a skyscraper. Because there can be hundreds of
civilians scattered across numerous floors, multiple rescue teams have
to be dispatched, and radio communication channels can quickly
get saturated and useless. In particular, small teams of fire-brigades
must be sent on separate missions to rescue the civilians trapped in
dozens of different locations.
Picture a small mission plan from Figure (1), where three
firebrigades have been assigned a task to rescue the civilians trapped
at site B, accessed from site A (e.g. an office accessed from the
floor)1
. General fire fighting procedures involve both: (i) putting
out the flames, and (ii) ventilating the site to let the toxic, high
temperature gases escape, with the restriction that ventilation should
not be performed too fast in order to prevent the fire from spreading.
The team estimates that the civilians have 20 minutes before the fire
at site B becomes unbearable, and that the fire at site A has to be
put out in order to open the access to site B. As has happened in
the past in large scale disasters, communication often breaks down;
and hence we assume in this domain that there is no
communication between the fire-brigades 1,2 and 3 (denoted as FB1, FB2 and
FB3). Consequently, FB2 does not know if it is already safe to
ventilate site A, FB1 does not know if it is already safe to enter site A
and start fighting fire at site B, etc. We assign the reward 50 for
evacuating the civilians from site B, and a smaller reward 20 for
the successful ventilation of site A, since the civilians themselves
might succeed in breaking out from site B.
One can clearly see the dilemma that FB2 faces: It can only
estimate the durations of the Fight fire at site A methods to be
executed by FB1 and FB3, and at the same time FB2 knows that time
is running out for civilians. If FB2 ventilates site A too early, the
fire will spread out of control, whereas if FB2 waits with the
ventilation method for too long, fire at site B will become unbearable for
the civilians. In general, agents have to perform a sequence of such
1 We explain the EST and LET notation in Section 3.
Figure 1: Civilian rescue domain and a mission plan. Dotted
arrows represent implicit precedence constraints within an agent.
difficult decisions; in particular, decision process of FB2 involves
first choosing when to start ventilating site A, and then
(depending on the time it took to ventilate site A), choosing when to start
evacuating the civilians from site B. Such sequence of decisions
constitutes the policy of an agent, and it must be found fast because
time is running out.
3. MODEL DESCRIPTION
We encode our decision problems in a model which we refer to as
Decentralized MDP with Temporal Constraints.2 Each instance of
our decision problems can be described as a tuple ⟨M, A, C, P, R⟩, where M = {m_i}^{|M|}_{i=1} is the set of methods, and A = {A_k}^{|A|}_{k=1} is the set of agents. Agents cannot communicate during mission execution. Each agent A_k is assigned to a set M_k of methods, such that ⋃^{|A|}_{k=1} M_k = M and ∀ i, j; i ≠ j: M_i ∩ M_j = ∅. Also, each method of agent A_k can be executed only once, and agent A_k can execute only one method at a time. Method execution times are uncertain and P = {p_i}^{|M|}_{i=1} is the set of distributions of method
execution durations. In particular, pi(t) is the probability that the
execution of method mi consumes time t. C is a set of
temporal constraints in the system. Methods are partially ordered and
each method has fixed time windows inside which it can be
executed, i.e., C = C≺ ∪ C[ ] where C≺ is the set of predecessor
constraints and C[ ] is the set of time window constraints. For
c ∈ C≺, c = ⟨mi, mj⟩ means that method mi precedes method mj, i.e., execution of mj cannot start before mi terminates. In
particular, for an agent Ak, all its methods form a chain linked by
predecessor constraints. We assume that the graph G = ⟨M, C≺⟩
is acyclic, does not have disconnected nodes (the problem cannot
be decomposed into independent subproblems), and its source and
sink vertices identify the source and sink methods of the system.
For c ∈ C[ ], c = ⟨mi, EST, LET⟩ means that execution of mi
can only start after the Earliest Starting Time EST and must
finish before the Latest End Time LET; we allow methods to have
multiple disjoint time window constraints. Although distributions
pi can extend to infinite time horizons, given the time window
constraints, the planning horizon Δ = max_{⟨m,τ,τ'⟩∈C[ ]} τ' is considered as the mission deadline. Finally, R = {ri}^{|M|}_{i=1} is the set of
non-negative rewards, i.e., ri is obtained upon successful
execution of mi.
Since there is no communication allowed, an agent can only
estimate the probabilities that its methods have already been enabled
2 One could also use the OC-DEC-MDP framework, which models both time and resource constraints.
by other agents. Consequently, if mj ∈ Mk is the next method
to be executed by the agent Ak and the current time is t ∈ [0, Δ],
the agent has to make a decision whether to Execute the method
mj (denoted as E), or to Wait (denoted as W). In case agent Ak
decides to wait, it remains idle for an arbitrarily small time ε, and
resumes operation at the same place (= about to execute method mj)
at time t + ε. In case agent Ak decides to Execute the next method,
two outcomes are possible:
Success: The agent Ak receives reward rj and moves on to its
next method (if such method exists) so long as the following
conditions hold: (i) All the methods {mi | ⟨mi, mj⟩ ∈ C≺} that
directly enable method mj have already been completed, (ii)
Execution of method mj started in some time window of method mj, i.e., ∃⟨mj, τ, τ'⟩ ∈ C[ ] such that t ∈ [τ, τ'], and (iii) Execution of method mj finished inside the same time window, i.e., agent Ak completed method mj in time less than or equal to τ' − t.
Failure: If any of the above-mentioned conditions does not hold,
agent Ak stops its execution. Other agents may continue their
execution, but methods mk ∈ {m| mj, m ∈ C≺} will never become
enabled.
The policy $\pi_k$ of an agent $A_k$ is a function $\pi_k : M_k \times [0, \Delta] \to \{W, E\}$,
and $\pi_k(\langle m, t \rangle) = a$ means that if $A_k$ is at method $m$
at time $t$, it will choose to perform the action $a$. A joint policy
$\pi = [\pi_k]_{k=1}^{|A|}$ is considered to be optimal (denoted as $\pi^*$) if it
maximizes the sum of expected rewards for all the agents.
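The Execute/Wait semantics above can also be stated as a small simulation step. The sketch below is ours (it reuses the hypothetical DecMdpTc encoding from the earlier sketch) and is only meant to restate conditions (i)-(iii) in code.

```python
# A sketch (ours) of a single Execute attempt by agent A_k at method m_j at time t,
# restating the Success conditions (i)-(iii); completed_at maps method -> completion time.
import random

def try_execute(model, mj: str, t: int, completed_at: dict):
    m = model.methods[mj]
    # (i) every method that directly enables m_j must already be completed by time t
    if any(mi not in completed_at or completed_at[mi] > t for mi in model.enablers(mj)):
        return False, t
    # (ii) execution must start inside some time window [EST, LET] of m_j
    window = next(((est, let) for (est, let) in m.windows if est <= t <= let), None)
    if window is None:
        return False, t
    # (iii) the sampled duration must let m_j finish inside that same window
    duration = random.choices(range(len(m.duration_pmf)), weights=m.duration_pmf)[0]
    if t + duration > window[1]:
        return False, t + duration          # Failure: agent A_k stops its execution
    completed_at[mj] = t + duration         # Success: reward r_j is collected
    return True, t + duration
```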
4. SOLUTION TECHNIQUES
4.1 Optimal Algorithms
Optimal joint policy $\pi^*$ is usually found by using the Bellman
update principle, i.e., in order to determine the optimal policy for
method $m_j$, optimal policies for methods $m_k \in \{m \mid \langle m_j, m \rangle \in C_\prec\}$
are used. Unfortunately, for our model, the optimal
policy for method $m_j$ also depends on the policies for methods $m_i \in \{m \mid \langle m, m_j \rangle \in C_\prec\}$.
This double dependency results from the
fact that the expected reward for starting the execution of method
$m_j$ at time $t$ also depends on the probability that method $m_j$ will be
enabled by time $t$. Consequently, if time is discretized, one needs to
consider $\Delta^{|M|}$ candidate policies in order to find $\pi^*$. Thus,
globally optimal algorithms used for solving real-world problems are
unlikely to terminate in reasonable time [11]. The complexity of
our model could be reduced if we considered a more restricted
version; in particular, if each method $m_j$ were allowed to be
enabled only at time points $t \in T_j \subset [0, \Delta]$, the Coverage Set Algorithm
(CSA) [1] could be used. However, CSA complexity is doubly
exponential in the size of $T_j$, and for our domains $T_j$ can store all
values ranging from $0$ to $\Delta$.
4.2 Locally Optimal Algorithms
Following the limited applicability of globally optimal algorithms
for DEC-MDPs with Temporal Constraints, locally optimal
algorithms appear more promising. Specially, the OC-DEC-MDP
algorithm [4] is particularly significant, as it has shown to easily scale
up to domains with hundreds of methods. The idea of the
OC-DECMDP algorithm is to start with the earliest starting time policy π0
(according to which an agent will start executing the method m as
soon as m has a non-zero chance of being already enabled), and
then improve it iteratively, until no further improvement is
possible. At each iteration, the algorithm starts with some policy π,
which uniquely determines the probabilities Pi,[τ,τ ] that method
mi will be performed in the time interval [τ, τ ]. It then performs
two steps:
Step 1: It propagates from sink methods to source methods the
values Vi,[τ,τ ], that represent the expected utility for executing
method mi in the time interval [τ, τ ]. This propagation uses the
probabilities Pi,[τ,τ ] from previous algorithm iteration. We call
this step a value propagation phase.
Step 2: Given the values Vi,[τ,τ ] from Step 1, the algorithm chooses
the most profitable method execution intervals which are stored in
a new policy π . It then propagates the new probabilities Pi,[τ,τ ]
from source methods to sink methods. We call this step a
probability propagation phase. If policy π does not improve π, the
algorithm terminates.
There are two shortcomings of the OC-DEC-MDP algorithm that
we address in this paper. First, each OC-DEC-MDP state is a
pair $\langle m_j, [\tau, \tau'] \rangle$, where $[\tau, \tau']$ is a time interval in which method
$m_j$ can be executed. While such a state representation is beneficial,
in that the problem can be solved with a standard value iteration
algorithm, it blurs the intuitive mapping from time $t$ to the expected
total reward for starting the execution of $m_j$ at time $t$.
Consequently, if some method $m_i$ enables method $m_j$, and the values
$V_{j,[\tau,\tau']}$ for all $\tau, \tau' \in [0, \Delta]$ are known, the operation that calculates the
values $V_{i,[\tau,\tau']}$ for all $\tau, \tau' \in [0, \Delta]$ (during the value propagation phase)
runs in time $O(I^2)$, where $I$ is the number of time intervals.³ Since
the runtime of the whole algorithm is proportional to the runtime of
this operation, especially for large time horizons $\Delta$, the OC-DEC-MDP
algorithm runs slowly.

Second, while OC-DEC-MDP emphasizes precise calculation
of the values $V_{j,[\tau,\tau']}$, it fails to address a critical issue: how
the value $V_{j,[\tau,\tau']}$ is split when the method $m_j$ has
multiple enabling methods. As we show later, OC-DEC-MDP splits
$V_{j,[\tau,\tau']}$ into parts that may overestimate $V_{j,[\tau,\tau']}$ when summed up
again. As a result, methods that precede the method $m_j$
overestimate the value for enabling $m_j$ which, as we show later, can have
disastrous consequences. In the next two sections, we address both
of these shortcomings.

³Similarly for the probability propagation phase.
5. VALUE FUNCTION PROPAGATION (VFP)
The general scheme of the VFP algorithm is identical to the
OC-DEC-MDP algorithm, in that it performs a series of policy
improvement iterations, each one involving a value and probability
propagation phase. However, instead of propagating separate
values, VFP maintains and propagates whole functions; we
therefore refer to these phases as the value function propagation phase
and the probability function propagation phase. To this end, for
each method $m_i \in M$, we define three new functions:

Value Function, denoted as $v_i(t)$, that maps time $t \in [0, \Delta]$ to the
expected total reward for starting the execution of method $m_i$ at
time $t$.

Opportunity Cost Function, denoted as $V_i(t)$, that maps time
$t \in [0, \Delta]$ to the expected total reward for starting the execution
of method $m_i$ at time $t$ assuming that $m_i$ is enabled.

Probability Function, denoted as $P_i(t)$, that maps time $t \in [0, \Delta]$
to the probability that method $m_i$ will be completed before time $t$.

Such a functional representation allows us to easily read the current
policy, i.e., if an agent $A_k$ is at method $m_i$ at time $t$, then it will
wait as long as the value function $v_i$ will be greater at some later time.
Formally:

$$\pi_k(\langle m_i, t \rangle) = \begin{cases} W & \text{if } \exists\, t' > t \text{ such that } v_i(t) < v_i(t') \\ E & \text{otherwise.} \end{cases}$$
We now develop an analytical technique for performing the value
function and probability function propagation phases.
5.1 Value Function Propagation Phase
Suppose that we are performing a value function propagation phase
during which the value functions are propagated from the sink
methods to the source methods. At any time during this phase we
encounter the situation shown in Figure 2, where the opportunity cost
functions $[V_{j_n}]_{n=0}^{N}$ of methods $[m_{j_n}]_{n=0}^{N}$ are known, and the
opportunity cost $V_{i_0}$ of method $m_{i_0}$ is to be derived. Let $p_{i_0}$ be the
probability distribution function of method $m_{i_0}$ execution
duration, and $r_{i_0}$ be the immediate reward for starting and
completing the execution of method $m_{i_0}$ inside a time interval $[\tau, \tau']$ such
that $\langle m_{i_0}, \tau, \tau' \rangle \in C_{[\,]}$. The function $V_{i_0}$ is then derived from $r_{i_0}$
and the opportunity costs $V_{j_n,i_0}(t)$, $n = 0, \ldots, N$, from future methods.
Formally:

$$V_{i_0}(t) = \begin{cases} \int_0^{\tau' - t} p_{i_0}(t')\,\bigl(r_{i_0} + \sum_{n=0}^{N} V_{j_n,i_0}(t + t')\bigr)\, dt' & \text{if } \exists_{\langle m_{i_0}, \tau, \tau' \rangle \in C_{[\,]}} \text{ such that } t \in [\tau, \tau'] \\ 0 & \text{otherwise} \end{cases} \quad (1)$$

Note that for $t \in [\tau, \tau']$, if $h(t) := r_{i_0} + \sum_{n=0}^{N} V_{j_n,i_0}(\tau' - t)$, then
$V_{i_0}$ is a convolution of $p_{i_0}$ and $h$: $V_{i_0}(t) = (p_{i_0} * h)(\tau' - t)$.

Assume for now that $V_{j_n,i_0}$ represents the full opportunity cost,
postponing the discussion of different techniques for splitting the
opportunity cost $V_{j_0}$ into $[V_{j_0,i_k}]_{k=0}^{K}$ until Section 6. We now show
how to derive $V_{j_0,i_0}$ (the derivation of $V_{j_n,i_0}$ for $n \neq 0$ follows the
same scheme).
Figure 2: Fragment of an MDP of agent Ak. Probability
functions propagate forward (left to right) whereas value functions
propagate backward (right to left).
Let $\overline{V}_{j_0,i_0}(t)$ be the opportunity cost of starting the execution of
method $m_{j_0}$ at time $t$ given that method $m_{i_0}$ has been completed.
It is derived by multiplying $V_{j_0}$ by the probability functions of all
methods other than $m_{i_0}$ that enable $m_{j_0}$. Formally:

$$\overline{V}_{j_0,i_0}(t) = V_{j_0}(t) \cdot \prod_{k=1}^{K} P_{i_k}(t),$$

where, similarly to [4] and [5], we ignore the dependency of $[P_{i_k}]_{k=1}^{K}$.
Observe that $\overline{V}_{j_0,i_0}$ does not have to be monotonically
decreasing, i.e., delaying the execution of the method $m_{i_0}$ can sometimes
be profitable. Therefore the opportunity cost $V_{j_0,i_0}(t)$ of enabling
method $m_{j_0}$ at time $t$ must be greater than or equal to $\overline{V}_{j_0,i_0}$.
Furthermore, $V_{j_0,i_0}$ should be non-increasing. Formally:

$$V_{j_0,i_0} = \min_{f \in F} f \quad (2)$$

where $F = \{f \mid f \geq \overline{V}_{j_0,i_0} \text{ and } f(t) \geq f(t')\ \forall_{t < t'}\}$.
Knowing the opportunity cost $V_{i_0}$, we can then easily derive the
value function $v_{i_0}$. Let $A_k$ be the agent assigned to the method $m_{i_0}$.
If $A_k$ is about to start the execution of $m_{i_0}$, it means that $A_k$ must
have completed its part of the mission plan up to the method $m_{i_0}$.
Since $A_k$ does not know if other agents have completed the methods
$[m_{l_k}]_{k=1}^{K}$, in order to derive $v_{i_0}$ it has to multiply $V_{i_0}$ by the
probability functions of all methods of other agents that enable $m_{i_0}$.
Formally:

$$v_{i_0}(t) = V_{i_0}(t) \cdot \prod_{k=1}^{K} P_{l_k}(t),$$

where the dependency of $[P_{l_k}]_{k=1}^{K}$ is also ignored.
We have consequently shown a general scheme of how to propagate
the value functions: knowing $[v_{j_n}]_{n=0}^{N}$ and $[V_{j_n}]_{n=0}^{N}$ of methods
$[m_{j_n}]_{n=0}^{N}$, we can derive $v_{i_0}$ and $V_{i_0}$ of method $m_{i_0}$. In general, the
value function propagation scheme starts with the sink nodes. It then
visits, at each step, a method $m$ such that all the methods that $m$
enables have already been marked as visited. The value function
propagation phase terminates when all the source methods have
been marked as visited.
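In discrete time, Equation (1) becomes a finite sum. The sketch below (ours, the naive $O(\Delta^2)$ version; Section 5.5 describes the PWL speedup) computes $V_{i_0}$ inside a single time window; array layouts and names are assumptions.

```python
# A sketch (ours) of Equation (1) for discrete time and a single window [tau, tau_end]:
# V_i0(t) = sum_{d=0..tau_end-t} p_i0(d) * (r_i0 + sum_n V_jn_i0(t + d)), and 0 outside.
# All lists are indexed by time 0..delta.
from typing import List

def opportunity_cost(p_i0: List[float], r_i0: float,
                     split_costs: List[List[float]],   # V_{jn,i0}(t) for each enabled method
                     tau: int, tau_end: int, delta: int) -> List[float]:
    V = [0.0] * (delta + 1)
    for t in range(tau, tau_end + 1):
        total = 0.0
        for d in range(tau_end - t + 1):               # only durations that fit the window
            future = sum(vc[t + d] for vc in split_costs)
            total += p_i0[d] * (r_i0 + future)
        V[t] = total
    return V
```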
5.2 Reading the Policy
In order to determine the policy of agent $A_k$ for the method $m_{j_0}$,
we must identify the set $Z_{j_0}$ of intervals $[z, z'] \subset [0, \Delta]$ such
that:

$$\forall_{t \in [z, z']}\ \pi_k(\langle m_{j_0}, t \rangle) = W.$$

One can easily identify the intervals of $Z_{j_0}$ by looking at the time
intervals in which the value function $v_{j_0}$ does not decrease
monotonically.
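In discrete time this check is a single backward scan over the value function; a sketch (ours, hypothetical helper name):

```python
# A sketch (ours) of reading the Wait/Execute policy from a discretized value function:
# wait at time t if some strictly later time has a strictly higher value.
from typing import List, Tuple

def waiting_intervals(v: List[float]) -> List[Tuple[int, int]]:
    n = len(v)
    wait = [False] * n
    best_future = float("-inf")
    for t in range(n - 1, -1, -1):          # scan backwards, tracking max of v over (t, n)
        wait[t] = best_future > v[t]
        best_future = max(best_future, v[t])
    # group consecutive waiting times into intervals [z, z']
    intervals, start = [], None
    for t, w in enumerate(wait + [False]):  # sentinel closes a trailing interval
        if w and start is None:
            start = t
        elif not w and start is not None:
            intervals.append((start, t - 1))
            start = None
    return intervals
```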
5.3 Probability Function Propagation Phase
Assume now that the value functions and opportunity cost functions have
all been propagated from sink methods to source methods and the sets
$Z_j$ for all methods $m_j \in M$ have been identified. Since the value
function propagation phase was using probabilities $P_i(t)$ for
methods $m_i \in M$ and times $t \in [0, \Delta]$ found at the previous algorithm
iteration, we now have to find the new values $P_i(t)$ in order to prepare
the algorithm for its next iteration. We now show how, in the general
case (Figure 2), to propagate the probability functions forward through
one method, i.e., we assume that the probability functions $[P_{i_k}]_{k=0}^{K}$
of methods $[m_{i_k}]_{k=0}^{K}$ are known, and the probability function $P_{j_0}$
of method $m_{j_0}$ must be derived. Let $p_{j_0}$ be the probability
distribution function of method $m_{j_0}$ execution duration, and $Z_{j_0}$ be the
set of intervals of inactivity for method $m_{j_0}$, found during the last
value function propagation phase. If we ignore the dependency of
$[P_{i_k}]_{k=0}^{K}$, then the probability $\overline{P}_{j_0}(t)$ that the execution of method
$m_{j_0}$ starts before time $t$ is given by:

$$\overline{P}_{j_0}(t) = \begin{cases} \prod_{k=0}^{K} P_{i_k}(\tau) & \text{if } \exists (\tau, \tau') \in Z_{j_0} \text{ s.t. } t \in (\tau, \tau') \\ \prod_{k=0}^{K} P_{i_k}(t) & \text{otherwise.} \end{cases}$$

Given $\overline{P}_{j_0}(t)$, the probability $P_{j_0}(t)$ that method $m_{j_0}$ will be
completed by time $t$ is derived by:

$$P_{j_0}(t) = \int_0^t \int_0^{t'} \frac{\partial \overline{P}_{j_0}}{\partial t}(t'')\, p_{j_0}(t' - t'')\, dt''\, dt', \quad (3)$$

which can be written compactly as $\dfrac{\partial P_{j_0}}{\partial t} = p_{j_0} * \dfrac{\partial \overline{P}_{j_0}}{\partial t}$.
We have consequently shown how to propagate the probability
functions $[P_{i_k}]_{k=0}^{K}$ of methods $[m_{i_k}]_{k=0}^{K}$ to obtain the probability
function $P_{j_0}$ of method $m_{j_0}$. In general, the probability function
propagation phase starts with the source methods $m_{s_i}$, for which we
know that $P_{s_i} = 1$ since they are enabled by default. We then
visit, at each step, a method $m$ such that all the methods that enable
$m$ have already been marked as visited. The probability function
propagation phase terminates when all the sink methods have been
marked as visited.
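A discrete-time sketch of this phase for one method (ours, with assumed list conventions): it freezes the product of the enablers' completion probabilities inside the inactivity intervals $Z_{j_0}$ and then convolves its increments with $p_{j_0}$.

```python
# A sketch (ours) of the probability propagation step for one method m_j0 in discrete time.
from typing import List, Tuple

def propagate_probability(enabler_P: List[List[float]],  # P_{ik}(t) for each direct enabler
                          p_j0: List[float],              # duration pmf of m_j0
                          Z_j0: List[Tuple[int, int]]) -> List[float]:
    n = len(p_j0)
    # P_bar(t): probability that the execution of m_j0 has started by time t
    start_prob = [1.0] * n
    for P in enabler_P:
        start_prob = [a * b for a, b in zip(start_prob, P)]
    for (z, z_end) in Z_j0:                          # inside a waiting interval the start
        for t in range(z + 1, min(z_end + 1, n)):    # probability is frozen at its value at z
            start_prob[t] = start_prob[z]
    # cumulative distribution of the duration of m_j0
    cdf, c = [], 0.0
    for x in p_j0:
        c += x
        cdf.append(c)
    # P_j0(t) = sum_s (increment of start_prob at s) * Pr(duration <= t - s)
    inc = [start_prob[0]] + [start_prob[t] - start_prob[t - 1] for t in range(1, n)]
    return [sum(inc[s] * cdf[t - s] for s in range(t + 1)) for t in range(n)]
```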
5.4 The Algorithm
Similarly to the OC-DEC-MDP algorithm, VFP starts the policy
improvement iterations with the earliest starting time policy $\pi^0$.
Then at each iteration it: (i) propagates the value functions $[v_i]_{i=1}^{|M|}$
using the old probability functions $[P_i]_{i=1}^{|M|}$ from the previous algorithm
iteration and establishes the new sets $[Z_i]_{i=1}^{|M|}$ of method inactivity
intervals, and (ii) propagates the new probability functions $[P_i']_{i=1}^{|M|}$
using the newly established sets $[Z_i]_{i=1}^{|M|}$. These new functions
$[P_i']_{i=1}^{|M|}$ are then used in the next iteration of the algorithm.
Similarly to OC-DEC-MDP, VFP terminates if the new policy does not
improve the policy from the previous algorithm iteration.
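Structurally, one VFP iteration is just these two phases back to back. A sketch of the loop (ours; the phase implementations are passed in as callables, and policy equality is used as a simplified termination test):

```python
# A structural sketch (ours) of the VFP policy-improvement loop.
def vfp(initial_probabilities, propagate_values, read_policy, propagate_probabilities,
        max_iterations: int = 100):
    P = initial_probabilities          # induced by the earliest-starting-time policy pi^0
    policy = None
    for _ in range(max_iterations):
        v, Z = propagate_values(P)     # (i) value functions + inactivity intervals Z_i
        new_policy = read_policy(v)    # Wait on the intervals in Z, Execute elsewhere
        P = propagate_probabilities(Z) # (ii) new probability functions for the next iteration
        if new_policy == policy:       # simplified "no improvement" test
            return policy
        policy = new_policy
    return policy
```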
5.5 Implementation of Function Operations
So far, we have derived the functional operations for value function
and probability function propagation without choosing any
function representation. In general, our functional operations can
handle continuous time, and one has the freedom to choose a desired
function approximation technique, such as piecewise linear [7] or
piecewise constant [9] approximation. However, since one of our goals
is to compare VFP with the existing OC-DEC-MDP algorithm, which
works only for discrete time, we also discretize time, and choose to
approximate value functions and probability functions with
piecewise linear (PWL) functions.

When the VFP algorithm propagates the value functions and
probability functions, it constantly carries out the operations represented by
Equations (1) and (3), and we have already shown that these
operations are convolutions of some functions $p(t)$ and $h(t)$. If time is
discretized, functions $p(t)$ and $h(t)$ are discrete; however, $h(t)$ can
be nicely approximated with a PWL function $\hat{h}(t)$, which is exactly
what VFP does. As a result, instead of performing $O(\Delta^2)$
multiplications to compute $f(t)$, VFP only needs to perform $O(k \cdot \Delta)$
multiplications to compute $f(t)$, where $k$ is the number of linear
segments of $\hat{h}(t)$ (note that since $h(t)$ is monotonic, $\hat{h}(t)$ is
usually close to $h(t)$ with $k \ll \Delta$).
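The $O(k \cdot \Delta)$ cost can be seen by convolving $p$ against each linear piece in closed form using prefix sums of $p(d)$ and $d \cdot p(d)$. The sketch below (ours, with a hypothetical segment representation, assumed disjoint) illustrates the idea.

```python
# A sketch (ours) of convolving a discrete duration pmf p with a piecewise-linear
# approximation h_hat of h in O(k * Delta) time: on every linear piece h_hat(x) = a*x + b,
# the inner sum over durations reduces to prefix sums of p(d) and d*p(d).
from typing import List, Tuple

def pwl_convolve(p: List[float], segments: List[Tuple[int, int, float, float]],
                 delta: int) -> List[float]:
    """segments: disjoint pieces (x_lo, x_hi, a, b) with h_hat(x) = a*x + b on [x_lo, x_hi]."""
    S0, S1 = [], []                         # S0[d] = sum p(d'), S1[d] = sum d'*p(d') for d' <= d
    s0 = s1 = 0.0
    for d, pd in enumerate(p):
        s0 += pd
        s1 += d * pd
        S0.append(s0)
        S1.append(s1)

    def range_sums(lo: int, hi: int) -> Tuple[float, float]:
        lo, hi = max(lo, 0), min(hi, len(p) - 1)
        if lo > hi:
            return 0.0, 0.0
        m0 = S0[hi] - (S0[lo - 1] if lo > 0 else 0.0)
        m1 = S1[hi] - (S1[lo - 1] if lo > 0 else 0.0)
        return m0, m1

    f = [0.0] * (delta + 1)
    for t in range(delta + 1):
        total = 0.0
        for (x_lo, x_hi, a, b) in segments:            # O(k) work per output point
            m0, m1 = range_sums(t - x_hi, t - x_lo)    # durations d with x_lo <= t-d <= x_hi
            total += (a * t + b) * m0 - a * m1         # sum_d p(d) * (a*(t-d) + b)
        f[t] = total
    return f
```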
Since $P_i$ values are in the range $[0, 1]$ and $V_i$ values are in the range
$[0, \sum_{m_i \in M} r_i]$, we suggest approximating $V_i(t)$ with $\hat{V}_i(t)$
within error $\epsilon_V$, and $P_i(t)$ with $\hat{P}_i(t)$ within error $\epsilon_P$. We now
prove that the overall approximation error accumulated during the
value function propagation phase can be expressed in terms of $\epsilon_P$ and $\epsilon_V$:
THEOREM 1. Let $C_\prec$ be a set of precedence constraints of a
DEC-MDP with Temporal Constraints, and let $\epsilon_P$ and $\epsilon_V$ be the
probability function and value function approximation errors
respectively. The overall error $\epsilon_\pi = \max_V \sup_{t \in [0,\Delta]} |V(t) - \hat{V}(t)|$ of the
value function propagation phase is then bounded by:

$$|C_\prec|\,\epsilon_V + \bigl((1 + \epsilon_P)^{|C_\prec|} - 1\bigr) \sum_{m_i \in M} r_i.$$

PROOF. In order to establish the bound for $\epsilon_\pi$, we first prove
by induction on the size of $C_\prec$ that the overall error of the
probability function propagation phase, $\epsilon_\pi(P) = \max_P \sup_{t \in [0,\Delta]} |P(t) - \hat{P}(t)|$,
is bounded by $(1 + \epsilon_P)^{|C_\prec|} - 1$.

Induction base: If $n = 1$, only two methods are present, and we
will perform the operation identified by Equation (3) only once,
introducing the error $\epsilon_\pi(P) = \epsilon_P = (1 + \epsilon_P)^{|C_\prec|} - 1$.

Induction step: Suppose that $\epsilon_\pi(P)$ for $|C_\prec| = n$ is bounded by
$(1 + \epsilon_P)^n - 1$, and we want to prove that this statement holds for
$|C_\prec| = n + 1$. Let $G = \langle M, C_\prec \rangle$ be a graph with at most $n + 1$
edges, and let $G' = \langle M, C_\prec' \rangle$ be a subgraph of $G$ such that $C_\prec' =
C_\prec - \{\langle m_i, m_j \rangle\}$, where $m_j \in M$ is a sink node in $G$. From the
induction assumption we have that $C_\prec'$ introduces a probability
propagation phase error bounded by $(1 + \epsilon_P)^n - 1$. We now add
back the link $\{\langle m_i, m_j \rangle\}$ to $C_\prec'$, which affects the error of only
one probability function, namely $P_j$, by a factor of $(1 + \epsilon_P)$. Since the
probability propagation phase error in $C_\prec'$ was bounded by
$(1 + \epsilon_P)^n - 1$, in $C_\prec = C_\prec' \cup \{\langle m_i, m_j \rangle\}$ it can be at most
$((1 + \epsilon_P)^n - 1)(1 + \epsilon_P) < (1 + \epsilon_P)^{n+1} - 1$. Thus, if opportunity cost
functions are not overestimated, they are bounded by $\sum_{m_i \in M} r_i$,
and the error of a single value function propagation operation will
be at most

$$\int_0^\Delta p(t)\Bigl(\epsilon_V + \bigl((1 + \epsilon_P)^{|C_\prec|} - 1\bigr)\sum_{m_i \in M} r_i\Bigr)\, dt < \epsilon_V + \bigl((1 + \epsilon_P)^{|C_\prec|} - 1\bigr)\sum_{m_i \in M} r_i.$$

Since the number of value function propagation operations is $|C_\prec|$,
the total error $\epsilon_\pi$ of the value function propagation phase is bounded
by $|C_\prec|\,\epsilon_V + \bigl((1 + \epsilon_P)^{|C_\prec|} - 1\bigr)\sum_{m_i \in M} r_i$.
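As an illustration with made-up numbers (not taken from the paper): for $|C_\prec| = 10$, $\epsilon_V = 0.01$, $\epsilon_P = 0.001$ and $\sum_{m_i \in M} r_i = 100$, the bound evaluates to $10 \cdot 0.01 + ((1.001)^{10} - 1) \cdot 100 \approx 0.1 + 1.00 \approx 1.1$, so for small per-operation errors the accumulated error grows roughly linearly in $\epsilon_V$ and $\epsilon_P$.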
6. SPLITTING THE OPPORTUNITY COST FUNCTIONS
In Section 5 we left out the discussion of how the opportunity cost
function $V_{j_0}$ of method $m_{j_0}$ is split into the opportunity cost
functions $[V_{j_0,i_k}]_{k=0}^{K}$ sent back to the methods $[m_{i_k}]_{k=0}^{K}$ that
directly enable method $m_{j_0}$. So far, we have taken the same
approach as in [4] and [5], in that the opportunity cost function $V_{j_0,i_k}$
that method $m_{j_0}$ sends back to method $m_{i_k}$ is a
minimal, non-increasing function that dominates the function
$\overline{V}_{j_0,i_k}(t) = (V_{j_0} \cdot \prod_{k' \in \{0,\ldots,K\},\, k' \neq k} P_{i_{k'}})(t)$. We refer to this approach as
heuristic $H^{\langle 1,1 \rangle}$. Before we prove that this heuristic overestimates the
opportunity cost, we discuss three problems that might occur when
splitting the opportunity cost functions: (i) overestimation, (ii)
underestimation and (iii) starvation.

Figure 3: Splitting the value function of method $m_{j_0}$ among methods $[m_{i_k}]_{k=0}^{K}$.

Consider the situation in Figure 3 when value function propagation
for methods $[m_{i_k}]_{k=0}^{K}$ is performed. For each $k = 0, \ldots, K$, Equation (1)
derives the opportunity cost function $V_{i_k}$ from the immediate reward
$r_{i_k}$ and the opportunity cost function $V_{j_0,i_k}$. If $m_0$ is the only
method that precedes method $m_{i_k}$, then $\overline{V}_{i_k,0} = V_{i_k}$ is propagated
to method $m_0$, and consequently the opportunity cost for completing
the method $m_0$ at time $t$ is equal to $\sum_{k=0}^{K} V_{i_k,0}(t)$. If this cost is
overestimated, then an agent $A_0$ at method $m_0$ will have too much
incentive to finish the execution of $m_0$ at time $t$. Consequently, although
the probability $P(t)$ that $m_0$ will be enabled by other agents by time $t$
is low, agent $A_0$ might still find the expected utility of starting the
execution of $m_0$ at time $t$ higher than the expected utility of doing it later.
As a result, it will choose at time $t$ to start executing method $m_0$
instead of waiting, which can have disastrous consequences.
Similarly, if $\sum_{k=0}^{K} V_{i_k,0}(t)$ is underestimated, agent $A_0$ might lose
interest in enabling the future methods $[m_{i_k}]_{k=0}^{K}$ and just focus on
maximizing the chance of obtaining its immediate reward $r_0$. Since
this chance is increased when agent $A_0$ waits⁴, it will consider it
more profitable at time $t$ to wait, instead of starting the
execution of $m_0$, which can have similarly disastrous consequences.
Finally, if $V_{j_0}$ is split in such a way that for some $k$, $V_{j_0,i_k} = 0$, it is the
method $m_{i_k}$ that underestimates the opportunity cost of enabling
method $m_{j_0}$, and similar reasoning applies. We call such a
problem the starvation of method $m_{i_k}$. This short discussion shows the
importance of splitting the opportunity cost function $V_{j_0}$ in such a
way that the overestimation, underestimation, and starvation problems
are avoided.

⁴Assuming $LET_0 > t$.

We now prove that:
THEOREM 2. Heuristic $H^{\langle 1,1 \rangle}$ can overestimate the opportunity cost.
PROOF. We prove the theorem by showing a case where the
overestimation occurs. For the mission plan from Figure 3, let
$H^{\langle 1,1 \rangle}$ split $V_{j_0}$ into $[\overline{V}_{j_0,i_k} = V_{j_0} \cdot \prod_{k' \in \{0,\ldots,K\},\, k' \neq k} P_{i_{k'}}]_{k=0}^{K}$ sent to
methods $[m_{i_k}]_{k=0}^{K}$ respectively. Also, assume that methods $[m_{i_k}]_{k=0}^{K}$
provide no local reward and have the same time windows, i.e.,
$r_{i_k} = 0$, $EST_{i_k} = 0$, $LET_{i_k} = \Delta$ for $k = 0, \ldots, K$. To prove the
overestimation of opportunity cost, we must identify $t_0 \in [0, \Delta]$
such that the opportunity cost $\sum_{k=0}^{K} V_{i_k}(t_0)$ for methods $[m_{i_k}]_{k=0}^{K}$
at time $t_0 \in [0, \Delta]$ is greater than the opportunity cost $V_{j_0}(t_0)$.
From Equation (1) we have:

$$V_{i_k}(t) = \int_0^{\Delta - t} p_{i_k}(t')\, V_{j_0,i_k}(t + t')\, dt'$$

Summing over all methods $[m_{i_k}]_{k=0}^{K}$ we obtain:

$$\sum_{k=0}^{K} V_{i_k}(t) = \sum_{k=0}^{K} \int_0^{\Delta - t} p_{i_k}(t')\, V_{j_0,i_k}(t + t')\, dt' \quad (4)$$
$$\geq \sum_{k=0}^{K} \int_0^{\Delta - t} p_{i_k}(t')\, \overline{V}_{j_0,i_k}(t + t')\, dt' = \sum_{k=0}^{K} \int_0^{\Delta - t} p_{i_k}(t')\, V_{j_0}(t + t') \prod_{k' \in \{0,\ldots,K\},\, k' \neq k} P_{i_{k'}}(t + t')\, dt'$$

Let $c \in (0, 1]$ be a constant and $t_0 \in [0, \Delta]$ be such that $\forall_{t > t_0}$
and $\forall_{k=0,\ldots,K}$ we have $\prod_{k' \in \{0,\ldots,K\},\, k' \neq k} P_{i_{k'}}(t) > c$. Then:

$$\sum_{k=0}^{K} V_{i_k}(t_0) > \sum_{k=0}^{K} \int_0^{\Delta - t_0} p_{i_k}(t')\, V_{j_0}(t_0 + t') \cdot c\, dt'$$

because $P_{i_k}$ is non-decreasing. Now, suppose there exists $t_1 \in (t_0, \Delta]$
such that $\sum_{k=0}^{K} \int_0^{t_1 - t_0} p_{i_k}(t')\, dt' > \frac{V_{j_0}(t_0)}{c \cdot V_{j_0}(t_1)}$. Since
decreasing the upper limit of an integral over a positive function also
decreases the integral, we have:

$$\sum_{k=0}^{K} V_{i_k}(t_0) > c \sum_{k=0}^{K} \int_{t_0}^{t_1} p_{i_k}(t' - t_0)\, V_{j_0}(t')\, dt'$$

And since $V_{j_0}(t')$ is non-increasing we have:

$$\sum_{k=0}^{K} V_{i_k}(t_0) > c \cdot V_{j_0}(t_1) \sum_{k=0}^{K} \int_{t_0}^{t_1} p_{i_k}(t' - t_0)\, dt' \quad (5)$$
$$= c \cdot V_{j_0}(t_1) \sum_{k=0}^{K} \int_0^{t_1 - t_0} p_{i_k}(t')\, dt' > c \cdot V_{j_0}(t_1) \cdot \frac{V_{j_0}(t_0)}{c \cdot V_{j_0}(t_1)} = V_{j_0}(t_0)$$

Consequently, the opportunity cost $\sum_{k=0}^{K} V_{i_k}(t_0)$ of starting the
execution of methods $[m_{i_k}]_{k=0}^{K}$ at time $t_0$ is greater than the opportunity
cost $V_{j_0}(t_0)$, which proves the theorem. Figure 4 shows that the
overestimation of the opportunity cost is easily observable in practice.
To remedy the problem of opportunity cost overestimation, we
propose three alternative heuristics that split the opportunity cost
functions:

• Heuristic $H^{\langle 1,0 \rangle}$: Only one method, $m_{i_k}$, gets the full
expected reward for enabling method $m_{j_0}$, i.e., $\overline{V}_{j_0,i_{k'}}(t) = 0$
for $k' \in \{0, \ldots, K\} \setminus \{k\}$ and $\overline{V}_{j_0,i_k}(t) = (V_{j_0} \cdot \prod_{k' \in \{0,\ldots,K\},\, k' \neq k} P_{i_{k'}})(t)$.

• Heuristic $H^{\langle 1/2,1/2 \rangle}$: Each method $[m_{i_k}]_{k=0}^{K}$ gets the full
opportunity cost for enabling method $m_{j_0}$ divided by the
number $K$ of methods enabling the method $m_{j_0}$, i.e.,
$\overline{V}_{j_0,i_k}(t) = \frac{1}{K}(V_{j_0} \cdot \prod_{k' \in \{0,\ldots,K\},\, k' \neq k} P_{i_{k'}})(t)$ for $k \in \{0, \ldots, K\}$.

• Heuristic $\hat{H}^{\langle 1,1 \rangle}$: This is a normalized version of the $H^{\langle 1,1 \rangle}$
heuristic in that each method $[m_{i_k}]_{k=0}^{K}$ initially gets the full
opportunity cost for enabling the method $m_{j_0}$. To avoid
opportunity cost overestimation, we normalize the split
functions when their sum exceeds the opportunity cost function
to be split. Formally:

$$\overline{V}_{j_0,i_k}(t) = \begin{cases} \overline{V}^{H^{\langle 1,1 \rangle}}_{j_0,i_k}(t) & \text{if } \sum_{k'=0}^{K} \overline{V}^{H^{\langle 1,1 \rangle}}_{j_0,i_{k'}}(t) < V_{j_0}(t) \\[6pt] V_{j_0}(t)\, \dfrac{\overline{V}^{H^{\langle 1,1 \rangle}}_{j_0,i_k}(t)}{\sum_{k'=0}^{K} \overline{V}^{H^{\langle 1,1 \rangle}}_{j_0,i_{k'}}(t)} & \text{otherwise,} \end{cases}$$

where $\overline{V}^{H^{\langle 1,1 \rangle}}_{j_0,i_k}(t) = (V_{j_0} \cdot \prod_{k' \in \{0,\ldots,K\},\, k' \neq k} P_{i_{k'}})(t)$.
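A discrete-time sketch of the normalized $\hat{H}^{\langle 1,1 \rangle}$ split (ours, with hypothetical names): each enabler first receives its full $H^{\langle 1,1 \rangle}$ share, and the shares are rescaled wherever their sum would exceed $V_{j_0}$.

```python
# A sketch (ours) of the normalized H^<1,1> split: each enabler first receives the full
# H^<1,1> share V_j0 * prod_{k' != k} P_ik'; shares are rescaled at the time points where
# their sum exceeds V_j0, so the split never overestimates the opportunity cost.
from typing import List

def normalized_split(V_j0: List[float], enabler_P: List[List[float]]) -> List[List[float]]:
    K = len(enabler_P)
    n = len(V_j0)
    shares = []
    for k in range(K):
        share = []
        for t in range(n):
            prod = 1.0
            for kp in range(K):
                if kp != k:
                    prod *= enabler_P[kp][t]
            share.append(V_j0[t] * prod)
        shares.append(share)
    for t in range(n):                      # normalize where the shares sum above V_j0(t)
        total = sum(shares[k][t] for k in range(K))
        if total > V_j0[t] and total > 0.0:
            scale = V_j0[t] / total
            for k in range(K):
                shares[k][t] *= scale
    return shares
```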
For the new heuristics, we now prove that:

THEOREM 3. Heuristics $H^{\langle 1,0 \rangle}$, $H^{\langle 1/2,1/2 \rangle}$ and $\hat{H}^{\langle 1,1 \rangle}$ do not
overestimate the opportunity cost.
PROOF. When heuristic $H^{\langle 1,0 \rangle}$ is used to split the opportunity
cost function $V_{j_0}$, only one method (e.g., $m_{i_k}$) gets the opportunity
cost for enabling method $m_{j_0}$. Thus:

$$\sum_{k'=0}^{K} V_{i_{k'}}(t) = \int_0^{\Delta - t} p_{i_k}(t')\, V_{j_0,i_k}(t + t')\, dt' \quad (6)$$

and, since $V_{j_0}$ is non-increasing,

$$\leq \int_0^{\Delta - t} p_{i_k}(t')\, V_{j_0}(t + t') \cdot \prod_{k' \in \{0,\ldots,K\},\, k' \neq k} P_{i_{k'}}(t + t')\, dt' \leq \int_0^{\Delta - t} p_{i_k}(t')\, V_{j_0}(t + t')\, dt' \leq V_{j_0}(t).$$

The last inequality is also a consequence of the fact that $V_{j_0}$ is
non-increasing.

For heuristic $H^{\langle 1/2,1/2 \rangle}$ we similarly have:

$$\sum_{k=0}^{K} V_{i_k}(t) \leq \sum_{k=0}^{K} \int_0^{\Delta - t} p_{i_k}(t')\, \frac{1}{K} V_{j_0}(t + t') \prod_{k' \in \{0,\ldots,K\},\, k' \neq k} P_{i_{k'}}(t + t')\, dt'$$
$$\leq \frac{1}{K} \sum_{k=0}^{K} \int_0^{\Delta - t} p_{i_k}(t')\, V_{j_0}(t + t')\, dt' \leq \frac{1}{K} \cdot K \cdot V_{j_0}(t) = V_{j_0}(t).$$

For heuristic $\hat{H}^{\langle 1,1 \rangle}$, the opportunity cost function $V_{j_0}$ is by
definition split in such a manner that $\sum_{k=0}^{K} V_{i_k}(t) \leq V_{j_0}(t)$.
Consequently, we have proved that our new heuristics $H^{\langle 1,0 \rangle}$, $H^{\langle 1/2,1/2 \rangle}$
and $\hat{H}^{\langle 1,1 \rangle}$ avoid the overestimation of the opportunity cost.
The reason why we have introduced all three new heuristics is the
following: since $H^{\langle 1,1 \rangle}$ overestimates the opportunity cost, one
has to choose which method $m_{i_k}$ will receive the reward for
enabling the method $m_{j_0}$, which is exactly what the heuristic $H^{\langle 1,0 \rangle}$
does. However, heuristic $H^{\langle 1,0 \rangle}$ leaves $K - 1$ methods that
precede the method $m_{j_0}$ without any reward, which leads to starvation.
Starvation can be avoided if the opportunity cost functions are split
using heuristic $H^{\langle 1/2,1/2 \rangle}$, which provides reward to all enabling
methods. However, the sum of the split opportunity cost functions for the
$H^{\langle 1/2,1/2 \rangle}$ heuristic can be smaller than the non-zero split
opportunity cost function for the $H^{\langle 1,0 \rangle}$ heuristic, which is clearly
undesirable. Such a situation (Figure 4, heuristic $H^{\langle 1,0 \rangle}$) occurs because
the mean $\frac{f+g}{2}$ of two functions $f, g$ is not smaller than either $f$ or $g$
only if $f = g$. This is why we have proposed the $\hat{H}^{\langle 1,1 \rangle}$ heuristic,
which by definition avoids the overestimation, underestimation and
starvation problems.
7. EXPERIMENTAL EVALUATION
Since the VFP algorithm that we introduced provides two
orthogonal improvements over the OC-DEC-MDP algorithm, the
experimental evaluation we performed consisted of two parts: in part 1,
we empirically tested the quality of solutions that a locally optimal
solver (either OC-DEC-MDP or VFP) finds, given that it uses a particular
opportunity cost function splitting heuristic, and in part 2, we
compared the runtimes of the VFP and OC-DEC-MDP algorithms for
a variety of mission plan configurations.
Part 1: We first ran the VFP algorithm on a generic mission plan
configuration from Figure 3 where only methods $m_{j_0}$, $m_{i_1}$, $m_{i_2}$
and $m_0$ were present. Time windows of all methods were set to
400, the duration $p_{j_0}$ of method $m_{j_0}$ was uniform, i.e., $p_{j_0}(t) = \frac{1}{400}$,
and the durations $p_{i_1}$, $p_{i_2}$ of methods $m_{i_1}$, $m_{i_2}$ were normal
distributions, i.e., $p_{i_1} = N(\mu = 250, \sigma = 20)$ and $p_{i_2} = N(\mu = 200, \sigma = 100)$.
We assumed that only method $m_{j_0}$ provided
reward, i.e., $r_{j_0} = 10$ was the reward for finishing the execution of
method $m_{j_0}$ before time $t = 400$. We show our results in Figure 4,
where the x-axis of each of the graphs represents time whereas
the y-axis represents the opportunity cost. The first graph confirms
that when the opportunity cost function $V_{j_0}$ was split into
opportunity cost functions $V_{i_1}$ and $V_{i_2}$ using the $H^{\langle 1,1 \rangle}$ heuristic, the
function $V_{i_1} + V_{i_2}$ was not always below the $V_{j_0}$ function. In particular,
$V_{i_1}(280) + V_{i_2}(280)$ exceeded $V_{j_0}(280)$ by 69%. When
heuristics $H^{\langle 1,0 \rangle}$, $H^{\langle 1/2,1/2 \rangle}$ and $\hat{H}^{\langle 1,1 \rangle}$ were used (graphs 2, 3 and 4),
the function $V_{i_1} + V_{i_2}$ was always below $V_{j_0}$.
We then shifted our attention to the civilian rescue domain
introduced in Figure 1, for which we sampled all action execution
durations from the normal distribution $N(\mu = 5, \sigma = 2)$. To
obtain a baseline for the heuristic performance, we implemented
a globally optimal solver that found the true expected total reward
for this domain (Figure 6a). We then compared this reward with
the expected total reward found by a locally optimal solver guided
by each of the discussed heuristics. Figure 6a, which plots on
the y-axis the expected total reward of a policy, complements our
previous results: the $H^{\langle 1,1 \rangle}$ heuristic overestimated the expected total
reward by 280%, whereas the other heuristics were able to guide the
locally optimal solver close to the true expected total reward.
Part 2: We then chose $H^{\langle 1,1 \rangle}$ to split the opportunity cost
functions and conducted a series of experiments aimed at testing the
scalability of VFP for various mission plan configurations, using
the performance of the OC-DEC-MDP algorithm as a benchmark.
We began the VFP scalability tests with the configuration from Figure 5a,
associated with the civilian rescue domain, for which method
execution durations were extended to normal distributions $N(\mu = 30, \sigma = 5)$
and the deadline was extended to $\Delta = 200$.

Figure 5: Mission plan configurations: (a) civilian rescue domain, (b) chain of n methods, (c) tree of n methods with branching factor = 3 and (d) square mesh of n methods.

Figure 6: VFP performance in the civilian rescue domain.
We decided to test the runtime of the VFP algorithm running with
three different levels of accuracy, i.e., different approximation
parameters $\epsilon_P$ and $\epsilon_V$ were chosen, such that the cumulative error
of the solution found by VFP stayed within 1%, 5% and 10% of
the solution found by the OC-DEC-MDP algorithm. We then ran
both algorithms for a total of 100 policy improvement iterations.
Figure 6b shows the performance of the VFP algorithm in the
civilian rescue domain (the y-axis shows the runtime in milliseconds).
As we can see, for this small domain, VFP runs 15% faster than
OC-DEC-MDP when computing the policy with an error of less than
1%. For comparison, the globally optimal solver did not terminate
within the first three hours of its runtime, which shows the strength
of opportunistic solvers like OC-DEC-MDP.
We next decided to test how VFP performs in a more difficult
domain, i.e., with methods forming a long chain (Figure 5b). We
tested chains of 10, 20 and 30 methods, increasing at the same
time the method time windows to 350, 700 and 1050 to ensure that
later methods can be reached. We show the results in Figure 7a,
where we vary on the x-axis the number of methods and plot on
the y-axis the algorithm runtime (notice the logarithmic scale). As
we observe, scaling up the domain reveals the high performance of
VFP: within 1% error, it runs up to 6 times faster than OC-DEC-MDP.
We then tested how VFP scales up, given that the methods are
arranged into a tree (Figure 5c). In particular, we considered trees
with a branching factor of 3 and depths of 2, 3 and 4, increasing at
the same time the time horizon from 200 to 300, and then to 400.
We show the results in Figure 7b. Although the speedups are
smaller than in the case of a chain, the VFP algorithm still runs up to 4
times faster than OC-DEC-MDP when computing the policy with
an error of less than 1%.
Figure 4: Visualization of heuristics for opportunity cost splitting.

Figure 7: Scalability experiments for OC-DEC-MDP and VFP for different network configurations.

We finally tested how VFP handles domains with methods
arranged into an $n \times n$ mesh, i.e., $C_\prec = \{\langle m_{i,j}, m_{k,j+1} \rangle\}$ for
$i = 1, \ldots, n$; $k = 1, \ldots, n$; $j = 1, \ldots, n - 1$. In particular, we consider
meshes of 3×3, 4×4, and 5×5 methods. For such configurations
we have to greatly increase the time horizon since the
probabilities of enabling the final methods by a particular time decrease
exponentially. We therefore vary the time horizons from 3000 to
4000, and then to 5000. We show the results in Figure 7c where,
especially for larger meshes, the VFP algorithm runs up to one
order of magnitude faster than OC-DEC-MDP while finding a policy
that is within less than 1% of the policy found by OC-DEC-MDP.
8. CONCLUSIONS
Although the Decentralized Markov Decision Process (DEC-MDP) framework has been very
popular for modeling agent-coordination problems, it is very
difficult to solve, especially for real-world domains. In this
paper, we improved a state-of-the-art heuristic solution method for
DEC-MDPs, called OC-DEC-MDP, that has recently been shown
to scale up to large DEC-MDPs. Our heuristic solution method,
called Value Function Propagation (VFP), provided two
orthogonal improvements over OC-DEC-MDP: (i) it sped up
OC-DEC-MDP by an order of magnitude by maintaining and manipulating a
value function for each method rather than a separate value for each
pair of method and time interval, and (ii) it achieved better solution
qualities than OC-DEC-MDP because it corrected the
overestimation of the opportunity cost of OC-DEC-MDP.
In terms of related work, we have extensively discussed the
OC-DEC-MDP algorithm [4]. Furthermore, as discussed in Section 4,
there are globally optimal algorithms for solving DEC-MDPs with
temporal constraints [1] [11]. Unfortunately, they fail to scale up to
large-scale domains at the present time. Beyond OC-DEC-MDP, there
are other locally optimal algorithms for DEC-MDPs and
DEC-POMDPs [8], [12], [13]; yet, they have traditionally not dealt with
uncertain execution times and temporal constraints. Finally, value
function techniques have been studied in the context of single-agent
MDPs [7], [9]. However, similarly to [6], they fail to address the
lack of global state knowledge, which is a fundamental issue in
decentralized planning.
Acknowledgments
This material is based upon work supported by the DARPA/IPTO
COORDINATORS program and the Air Force Research
Laboratory under Contract No. FA875005C0030. The authors also want
to thank Sven Koenig and anonymous reviewers for their valuable
comments.
9. REFERENCES
[1] R. Becker, V. Lesser, and S. Zilberstein. Decentralized MDPs with
Event-Driven Interactions. In AAMAS, pages 302-309, 2004.
[2] R. Becker, S. Zilberstein, V. Lesser, and C. V. Goldman.
Transition-Independent Decentralized Markov Decision Processes. In
AAMAS, pages 41-48, 2003.
[3] D. S. Bernstein, S. Zilberstein, and N. Immerman. The complexity of
decentralized control of Markov decision processes. In UAI, pages
32-37, 2000.
[4] A. Beynier and A. Mouaddib. A polynomial algorithm for
decentralized Markov decision processes with temporal constraints.
In AAMAS, pages 963-969, 2005.
[5] A. Beynier and A. Mouaddib. An iterative algorithm for solving
constrained decentralized Markov decision processes. In AAAI, pages
1089-1094, 2006.
[6] C. Boutilier. Sequential optimality and coordination in multiagent
systems. In IJCAI, pages 478-485, 1999.
[7] J. Boyan and M. Littman. Exact solutions to time-dependent MDPs.
In NIPS, pages 1026-1032, 2000.
[8] C. Goldman and S. Zilberstein. Optimizing information exchange in
cooperative multi-agent systems, 2003.
[9] L. Li and M. Littman. Lazy approximation for solving continuous
finite-horizon MDPs. In AAAI, pages 1175-1180, 2005.
[10] Y. Liu and S. Koenig. Risk-sensitive planning with one-switch utility
functions: Value iteration. In AAAI, pages 993-999, 2005.
[11] D. Musliner, E. Durfee, J. Wu, D. Dolgov, R. Goldman, and
M. Boddy. Coordinated plan management using multiagent MDPs. In
AAAI Spring Symposium, 2006.
[12] R. Nair, M. Tambe, M. Yokoo, D. Pynadath, and S. Marsella. Taming
decentralized POMDPs: Towards efficient policy computation for
multiagent settings. In IJCAI, pages 705-711, 2003.
[13] R. Nair, P. Varakantham, M. Tambe, and M. Yokoo. Networked
distributed POMDPs: A synergy of distributed constraint
optimization and POMDPs. In IJCAI, pages 1758-1760, 2005.