name | title | abstract | fulltext | keywords
---|---|---|---|---|
1 | 2-Source Dispersers for Sub-Polynomial Entropy and Ramsey Graphs Beating the Frankl-Wilson Construction | The main result of this paper is an explicit disperser for two independent sources on n bits, each of entropy k = n^{o(1)}. Put differently, setting N = 2^n and K = 2^k, we construct explicit N × N Boolean matrices for which no K × K sub-matrix is monochromatic. Viewed as adjacency matrices of bipartite graphs, this gives an explicit construction of K-Ramsey bipartite graphs of size N. This greatly improves the previous bound of k = o(n) of Barak, Kindler, Shaltiel, Sudakov and Wigderson [4]. It also significantly improves the 25-year record of k = Õ(√n) on the special case of Ramsey graphs, due to Frankl and Wilson [9]. The construction uses (besides "classical" extractor ideas) almost all of the machinery developed in the last couple of years for extraction from independent sources, including: Bourgain's extractor for 2 independent sources of some entropy rate < 1/2 [5]; Raz's extractor for 2 independent sources, one of which has any entropy rate > 1/2 [18]; Rao's extractor for 2 independent block-sources of entropy n^{Ω(1)} [17]; and the "Challenge-Response" mechanism for detecting "entropy concentration" of [4]. The main novelty comes in a bootstrap procedure which allows the Challenge-Response mechanism of [4] to be used with sources of less and less entropy, using recursive calls to itself. Subtleties arise since the success of this mechanism depends on restricting the given sources, and so recursion constantly changes the original sources. These are resolved via a new construct, in between a disperser and an extractor, which behaves like an extractor on sufficiently large subsources of the given ones. This version is only an extended abstract; please see the full version, available on the authors' homepages, for more details. | INTRODUCTION
This paper deals with randomness extraction from weak
random sources. Here a weak random source is a distribution
which contains some entropy. The extraction task is to
design efficient algorithms (called extractors) to convert this
entropy into useful form, namely a sequence of independent
unbiased bits. Beyond the obvious motivations (potential
use of physical sources in pseudorandom generators and in
derandomization), extractors have found applications in a
variety of areas in theoretical computer science where randomness
does not seem an issue, such as in efficient constructions
of communication networks [24, 7], error correcting
codes [22, 12], data structures [14] and more.
Most work in this subject over the last 20 years has focused
on what is now called seeded extraction, in which the
extractor is given as input not only the (sample from the)
defective random source, but also a few truly random bits
(called the seed). A comprehensive survey of much of this
body of work is [21].
Another direction, which has been mostly dormant till
about two years ago, is (seedless, deterministic) extraction
from a few independent weak sources. This kind of extraction
is important in several applications where it is unrealistic
to have a short random seed or deterministically enumerate
over its possible values. However, it is easily shown to be
impossible when only one weak source is available. When at
least 2 independent sources are available extraction becomes
possible in principle. The 2-source case is the one we will
focus on in this work.
The rest of the introduction is structured as follows. We'll
start by describing our main result in the context of Ramsey
graphs. We then move to the context of extractors and dispersers,
describing the relevant background and stating our
result in this language. Then we give an overview of the
construction of our dispersers, describing the main building
blocks we construct along the way. As the construction is
quite complex and its analysis quite subtle, in this proceedings
version we try to abstract away many of the technical
difficulties so that the main ideas, structure and tools used
are highlighted. For that reason we also often state definitions
and theorems somewhat informally.
1.1 Ramsey Graphs
Definition 1.1. A graph on N vertices is called a K-Ramsey
Graph if it contains no clique or independent set of
size K.
In 1947 Erdős published his paper inaugurating the Probabilistic Method with a few examples, including a proof that most graphs on N = 2^n vertices are 2n-Ramsey. The quest for constructing such graphs explicitly has existed ever since and has led to some beautiful mathematics.
The best record to date was obtained in 1981 by Frankl and Wilson [9], who used intersection theorems for set systems to construct N-vertex graphs which are 2^{√(n log n)}-Ramsey. This bound was matched by Alon [1] using the Polynomial Method, by Grolmusz [11] using low rank matrices over rings, and also by Barak [2] boosting Abbot's method with almost k-wise independent random variables (a construction that was independently discovered by others as well). Remarkably, all of these different approaches got stuck at essentially the same bound. In recent work, Gopalan [10] showed that other than the last construction, all of these can be viewed as coming from low-degree symmetric representations of the OR function. He also shows that any such symmetric representation cannot be used to give a better Ramsey graph, which gives a good indication of why these constructions had similar performance. Indeed, as we will discuss in a later section, the √n entropy bound initially looked like a natural obstacle even for our techniques, though eventually we were able to surpass it.
The analogous question for bipartite graphs seemed much
harder.
Definition 1.2. A bipartite graph on two sets of N vertices is a K-Ramsey Bipartite Graph if it has no K × K complete or empty bipartite subgraph.
While Erdős' result on the abundance of 2n-Ramsey graphs holds as is for bipartite graphs, until recently the best explicit construction of bipartite Ramsey graphs was 2^{n/2}-Ramsey, using the Hadamard matrix. This was improved last year, first to o(2^{n/2}) by Pudlák and Rödl [16] and then to 2^{o(n)} by Barak, Kindler, Shaltiel, Sudakov and Wigderson [4].
It is convenient to view such graphs as functions f : ({0, 1}^n)^2 → {0, 1}. This then gives exactly the definition of a disperser.
Definition 1.3. A function f : ({0, 1}^n)^2 → {0, 1} is called a 2-source disperser for entropy k if for any two sets X, Y ⊆ {0, 1}^n with |X| = |Y| = 2^k, we have that the image f(X, Y) is {0, 1}.
This allows for a more formal definition of explicitness: we
simply demand that the function f is computable in polynomial
time. Most of the constructions mentioned above are
explicit in this sense.^1
Our main result (stated informally) significantly improves
the bounds in both the bipartite and non-bipartite settings:
Theorem 1.4. For every N we construct polynomial time computable bipartite graphs which are 2^{n^{o(1)}}-Ramsey. A standard transformation of these graphs also yields polynomial time computable ordinary Ramsey Graphs with the same parameters.
1.2 Extractors and Dispersers from independent sources
Now we give a brief review of past relevant work (with the
goal of putting this paper in proper context) and describe
some of the tools from these past works that we will use.
We start with the basic definitions of k-sources by Nisan
and Zuckerman [15] and of extractors and dispersers for independent
sources by Santha and Vazirani [20].
Definition 1.5 ([15], see also [8]). The min-entropy of a distribution X is the maximum k such that for every element x in its support, Pr[X = x] ≤ 2^{-k}. If X is a distribution on strings with min-entropy at least k, we will call X a k-source.^2
To simplify the presentation, in this version of the paper
we will assume that we are working with entropy as opposed
to min-entropy.
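As a quick illustration of this definition (the toy distribution below is ours, not from the paper), the min-entropy of an explicitly given distribution can be computed directly:

```python
import math

def min_entropy(distribution: dict) -> float:
    """Min-entropy: the largest k with Pr[X = x] <= 2^(-k) for every x
    in the support, i.e. -log2 of the largest point probability."""
    return -math.log2(max(p for p in distribution.values() if p > 0))

# Uniform on a set of size 2 inside {0,1}^2: a 1-source.
print(min_entropy({"00": 0.5, "01": 0.5, "10": 0.0, "11": 0.0}))  # 1.0
```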
Definition 1.6 ([20]). A function f : ({0, 1}^n)^c → {0, 1}^m is a c-source (k, ε) extractor if for every family of c independent k-sources X_1, . . . , X_c, the output f(X_1, . . . , X_c) is ε-close^3 to uniformly distributed on m bits. f is a disperser for the same parameters if the output is simply required to have a support of relative size (1 - ε).

^1 The Abbot's product based Ramsey-graph construction of [3] and the bipartite Ramsey construction of [16] only satisfy a weaker notion of explicitness.

^2 It is no loss of generality to imagine that X is uniformly distributed over some (unknown) set of size 2^k.
To simplify the presentation, in this version of the paper, we will assume that ε = 0 for all of our constructions.
In this language, Erdős' theorem says that most functions f : ({0, 1}^n)^2 → {0, 1} are dispersers for entropy 1 + log n (treating f as the characteristic function for the set of edges of the graph). The proof easily extends to show that indeed most such functions are in fact extractors. This naturally challenges us to find explicit functions f that are 2-source extractors.
Until one year ago, essentially the only known explicit construction was the Hadamard extractor Had defined by Had(x, y) = ⟨x, y⟩ (mod 2). It is an extractor for entropy k > n/2 as observed by Chor and Goldreich [8] and can be extended to give m = Ω(n) output bits as observed by Vazirani [23]. Over 20 years later, a recent breakthrough of Bourgain [5] broke this "1/2 barrier" and can handle 2 sources of entropy .4999n, again with linear output length m = Ω(n). This seemingly minor improvement will be crucial for our work!
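For concreteness, here is a minimal sketch of this inner-product extractor on bit strings (the function name and representation are our own choices):

```python
def hadamard_extractor(x: str, y: str) -> int:
    """Had(x, y) = <x, y> mod 2 on two equal-length bit strings.

    For two independent sources of min-entropy above n/2 this single
    output bit is close to unbiased (Chor-Goldreich); Vazirani's
    extension produces Omega(n) such bits."""
    assert len(x) == len(y)
    return sum(int(a) * int(b) for a, b in zip(x, y)) % 2

print(hadamard_extractor("1011", "1101"))  # 1*1 + 0*1 + 1*0 + 1*1 = 2 -> 0
```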
Theorem 1.7 ([5]). There is a polynomial time computable 2-source extractor f : ({0, 1}^n)^2 → {0, 1}^m for entropy .4999n and m = Ω(n).
No better bounds are known for 2-source extractors. Now
we turn our attention to 2-source dispersers. It turned out
that progress for building good 2-source dispersers came via
progress on extractors for more than 2 sources, all happening
in fast pace in the last 2 years. The seminal paper of Bourgain
, Katz and Tao [6] proved the so-called "sum-product
theorem" in prime fields, a result in arithmetic combinatorics
. This result has already found applications in diverse
areas of mathematics, including analysis, number theory,
group theory and ... extractor theory. Their work implicitly contained dispersers for c = O(log(n/k)) independent sources of entropy k (with output m = Ω(k)). The use of the "sum-product" theorem was then extended by Barak et al. [3] to give extractors with similar parameters. Note that for linear entropy k = Ω(n), the number of sources needed for extraction c is a constant!
Relaxing the independence assumptions via the idea of repeated condensing allowed the reduction of the number of independent sources to c = 3, for extraction from sources of any linear entropy k = Ω(n), by Barak et al. [4] and independently by Raz [18].
For 2 sources Barak et al. [4] were able to construct dispersers
for sources of entropy o(n). To do this, they first
showed that if the sources have extra structure (block-source
structure, defined below), even extraction is possible from 2
sources. The notion of block-sources, capturing "semi-independence"
of parts of the source, was introduced by Chor
and Goldreich [8]. It has been fundamental in the development
of seeded extractors and as we shall see, is essential
for us as well.
Definition 1.8 ([8]). A distribution X = X_1, . . . , X_c is a c-block-source of (block) entropy k if every block X_i has entropy k even conditioned on fixing the previous blocks X_1, . . . , X_{i-1} to arbitrary constants.

^3 The error is usually measured in terms of ℓ_1 distance or variation distance.
This definition allowed Barak et al. [4] to show that their
extractor for 4 independent sources actually performs as
well with only 2 independent sources, as long as both are
2-block-sources.
Theorem 1.9 ([4]). There exists a polynomial time computable extractor f : ({0, 1}^n)^2 → {0, 1} for 2 independent 2-block-sources with entropy o(n).
There is no reason to assume that the given sources are
block-sources, but it is natural to try and reduce to this
case. This approach has been one of the most successful in
the extractor literature. Namely, try to partition a source X into two blocks X = X_1, X_2 such that X_1, X_2 form a
2-block-source. Barak et al. introduced a new technique to
do this reduction called the Challenge-Response mechanism,
which is crucial for this paper. This method gives a way to
"find" how entropy is distributed in a source X, guiding the
choice of such a partition. This method succeeds only with
small probability, dashing the hope for an extractor, but still
yielding a disperser.
Theorem 1.10
([4]). There exists a polynomial time
computable 2-source disperser f : ({0, 1}
n
)
2
{0, 1} for
entropy o(n).
Reducing the entropy requirement of the above 2-source
disperser, which is what we achieve in this paper, again
needed progress on achieving a similar reduction for extractors
with more independent sources. A few months ago Rao
[17] was able to significantly improve all the above results
for c 3 sources. Interestingly, his techniques do not use
arithmetic combinatorics, which seemed essential to all the
papers above. He improves the results of Barak et al. [3] to
give c = O((log n)/(log k))-source extractors for entropy k.
Note that now the number c of sources needed for extraction
is constant, even when the entropy is as low as n^δ for any constant δ!
Again, when the input sources are block-sources with sufficiently
many blocks, Rao proves that 2 independent sources
suffice (though this result does rely on arithmetic combinatorics
, in particular, on Bourgain's extractor).
Theorem 1.11 ([17]). There is a polynomial time computable extractor f : ({0, 1}^n)^2 → {0, 1}^m for 2 independent c-block-sources with block entropy k and m = Ω(k), as long as c = O((log n)/(log k)).
In this paper (see Theorem 2.7 below) we improve this
result to hold even when only one of the 2 sources is a c-block-source. The other source can be an arbitrary source
with sufficient entropy. This is a central building block in
our construction. This extractor, like Rao's above, critically
uses Bourgain's extractor mentioned above. In addition it
uses a theorem of Raz [18] allowing seeded extractors to have
"weak" seeds, namely instead of being completely random
they work as long as the seed has entropy rate > 1/2.
MAIN NOTIONS AND NEW RESULTS
The main result of this paper is a polynomial time computable disperser for 2 sources of entropy n^{o(1)}, significantly improving the results of Barak et al. [4] (o(n) entropy). It also improves on Frankl and Wilson [9], who only built Ramsey Graphs, and only for entropy Õ(√n).
Theorem 2.1 (Main theorem, restated). There exists a polynomial time computable 2-source disperser D : ({0, 1}^n)^2 → {0, 1} for entropy n^{o(1)}.
The construction of this disperser will involve the construction
of an object which in some sense is stronger and
in another weaker than a disperser: a subsource somewhere
extractor. We first define a related object: a somewhere extractor
, which is a function producing several outputs, one of
which must be uniform. Again we will ignore many technical
issues such as error, min-entropy vs. entropy and more, in
definitions and results, which are deferred to the full version
of this paper.
Definition 2.2. A function f : ({0, 1}^n)^2 → ({0, 1}^m)^ℓ is a 2-source somewhere extractor with ℓ outputs, for entropy k, if for every 2 independent k-sources X, Y there exists an i ∈ [ℓ] such that the ith output f(X, Y)_i is a uniformly distributed string of m bits.
Here is a simple construction of such a somewhere extractor with ℓ as large as poly(n) (and the p in its name will stress the fact that indeed the number of outputs is that large). It will nevertheless be useful to us (though its description in the next sentence may be safely skipped). Define pSE(x, y)_i = V(E(x, i), E(y, i)), where E is a "strong" logarithmic seed extractor, and V is the Hadamard/Vazirani 2-source extractor. Using this construction, it is easy to see that:
Proposition 2.3. For every n, k there is a polynomial time computable somewhere extractor pSE : ({0, 1}^n)^2 → ({0, 1}^m)^ℓ with ℓ = poly(n) outputs, for entropy k, and m = Ω(k).
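A schematic rendering of the pSE construction above, with the strong seeded extractor E and the Hadamard/Vazirani extractor V left as abstract parameters (their concrete instantiations are not specified here, so this is only a sketch of the wiring):

```python
from typing import Callable, List

def pSE(x: str, y: str,
        E: Callable[[str, int], str],
        V: Callable[[str, str], str],
        num_seeds: int) -> List[str]:
    """pSE(x, y)_i = V(E(x, i), E(y, i)) for every seed i.

    With a logarithmic seed there are num_seeds = poly(n) rows; for any
    two independent k-sources at least one row is (close to) uniform."""
    return [V(E(x, i), E(y, i)) for i in range(num_seeds)]
```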
Before we define subsource somewhere extractor, we must
first define a subsource.
Definition 2.4 (Subsources). Given random variables Z and Ẑ on {0, 1}^n we say that Ẑ is a deficiency d subsource of Z and write Ẑ ⊆ Z if there exists a set A ⊆ {0, 1}^n such that (Z|Z ∈ A) = Ẑ and Pr[Z ∈ A] ≥ 2^{-d}.
A subsource somewhere extractor guarantees the "somewhere extractor" property only on subsources X̂, Ŷ of the original input distributions X, Y (respectively). It will be extremely important for us to make these subsources as large as possible (i.e. we have to lose as little entropy as possible). Controlling these entropy deficiencies is a major technical complication we have to deal with. However we will be informal with it here, mentioning it only qualitatively when needed. We discuss this issue a little more in Section 6.
Definition 2.5. A function f : ({0, 1}^n)^2 → ({0, 1}^m)^ℓ is a 2-source subsource somewhere extractor with ℓ outputs for entropy k, if for every 2 independent k-sources X, Y there exists a subsource X̂ of X, a subsource Ŷ of Y and an i ∈ [ℓ] such that the i-th output f(X̂, Ŷ)_i is a uniformly distributed string of m bits.
A central technical result for us is that with this "subsource" relaxation, we can have far fewer outputs: indeed, we'll replace the poly(n) outputs in our first construction above with n^{o(1)} outputs.
Theorem 2.6 (Subsource somewhere extractor). For every δ > 0 there is a polynomial time computable subsource somewhere extractor SSE : ({0, 1}^n)^2 → ({0, 1}^m)^ℓ with ℓ = n^{o(1)} outputs, for entropy k = n^δ, with output m = k.
We will describe the ideas used for constructing this important
object and analyzing it in the next section, where
we will also indicate how it is used in the construction of
the final disperser. Here we state a central building block,
mentioned in the previous section (as an improvement of the
work of Rao [17]). We construct an extractor for 2 independent sources, one of which is a block-source with a sufficient number of blocks.
Theorem 2.7 (Block Source Extractor). There is a polynomial time computable extractor B : ({0, 1}^n)^2 → {0, 1}^m for 2 independent sources, one of which is a c-block-source with block entropy k and the other a source of entropy k, with m = Ω(k) and c = O((log n)/(log k)).
A simple corollary of this block-source extractor B is the following weaker (though useful) somewhere block-source extractor SB. A source Z = Z_1, Z_2, . . . , Z_t is a somewhere c-block-source of block entropy k if for some c indices i_1 < i_2 < . . . < i_c the source Z_{i_1}, Z_{i_2}, . . . , Z_{i_c} is a c-block-source. Collecting the outputs of B on every c-subset of blocks results in that somewhere extractor.
Corollary 2.8. There is a polynomial time computable somewhere extractor SB : ({0, 1}^n)^2 → ({0, 1}^m)^ℓ for 2 independent sources, one of which is a somewhere c-block-source with block entropy k and t blocks in total and the other a source of entropy k, with m = Ω(k), c = O((log n)/(log k)), and ℓ = t^c.
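A schematic sketch of SB as just described: run B on every c-subset of the blocks and collect the outputs (B is treated as a black box taking the chosen blocks and the second source; this interface is our own assumption):

```python
from itertools import combinations
from typing import Callable, List, Sequence, Tuple

def SB(blocks: Sequence[str], y: str,
       B: Callable[[Tuple[str, ...], str], str],
       c: int) -> List[str]:
    """Somewhere block-source extractor (sketch).

    blocks: the t blocks of the first source, in their original order.
    If some c blocks (taken in increasing order of index) form a
    c-block-source, the corresponding output row of B is uniform."""
    return [B(subset, y) for subset in combinations(blocks, c)]
```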
In both the theorem and corollary above, the values of entropy k we will be interested in are k = n^{Ω(1)}. It follows that a block-source with a constant c = O(1) suffices.
THE CHALLENGE-RESPONSE MECHANISM
We now describe abstractly a mechanism which will be
used in the construction of the disperser as well as the subsource
somewhere extractor. Intuitively, this mechanism allows
us to identify parts of a source which contain large
amounts of entropy. One can hope that using such a mechanism
one can partition a given source into blocks in a way
which make it a block-source, or alternatively focus on a part
of the source which is unusually condensed with entropy: two cases which may simplify the extraction problem.
The reader may decide, now or in the middle of this
section, to skip ahead to the next section which describes
the construction of the subsource somewhere extractor SSE,
which extensively uses this mechanism. Then this section
may seem less abstract, as it will be clearer where this mechanism
is used.
This mechanism was introduced by Barak et al. [4], and
was essential in their 2-source disperser. Its use in this paper
is far more involved (in particular it calls itself recursively,
a fact which creates many subtleties). However, at a high
level, the basic idea behind the mechanism is the same:
Let Z be a source and Z' a part of Z (Z projected on a subset of the coordinates). We know that Z has entropy k, and want to distinguish two possibilities: Z' has no entropy (it is fixed) or it has at least k' entropy. Z' will get a pass or fail grade, hopefully corresponding to the cases of high or no entropy in Z'.
Anticipating the use of this mechanism, it is a good idea to think of Z as a "parent" of Z', which wants to check if this "child" has sufficient entropy. Moreover, in the context of the initial 2 sources X, Y we will operate on, think of Z as a part of X, and thus that Y is independent of Z and Z'.
To execute this "test" we will compute two sets of strings (all of length m, say): the Challenge C = C(Z', Y) and the Response R = R(Z, Y). Z' fails if C ⊆ R and passes otherwise.
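In code, the test itself is just a containment check between the two sets of strings; the challenge and response generators are abstract parameters in this sketch:

```python
from typing import Callable, List

def passes_entropy_test(z_part: str, z: str, y: str,
                        C: Callable[[str, str], List[str]],
                        R: Callable[[str, str], List[str]]) -> bool:
    """Challenge-Response test (sketch): Z' fails if C(Z', Y) is
    contained in R(Z, Y), and passes otherwise; passing is taken as
    evidence that the part Z' contains entropy."""
    return not set(C(z_part, y)).issubset(set(R(z, y)))
```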
The key to the usefulness of this mechanism is the following
lemma, which states that what "should" happen, indeed
happens after some restriction of the 2 sources Z and Y .
We state it and then explain how the functions C and R are
defined to accommodate its proof.
Lemma 3.1. Assume Z, Y are sources of entropy k.
1. If Z' has entropy k' + O(m), then there are subsources Ẑ of Z and Ŷ of Y, such that
Pr[Ẑ' passes] = Pr[C(Ẑ', Ŷ) ⊄ R(Ẑ, Ŷ)] ≥ 1 - n^{O(1)} · 2^{-m}
2. If Z' is fixed (namely, has zero entropy), then for some subsources Ẑ of Z and Ŷ of Y, we have
Pr[Ẑ' fails] = Pr[C(Ẑ', Ŷ) ⊆ R(Ẑ, Ŷ)] = 1
Once we have such a mechanism, we will design our disperser
algorithm assuming that the challenge response mechanism
correctly identifies parts of the source with high or
low levels of entropy. Then in the analysis, we will ensure
that our algorithm succeeds in making the right decisions,
at least on subsources of the original input sources.
Now let us explain how to compute the sets C and R. We
will use some of the constructs above with parameters which
don't quite fit.
The response set R(Z, Y) = pSE(Z, Y) is chosen to be the output of the somewhere extractor of Proposition 2.3. The challenge set C(Z', Y) = SSE(Z', Y) is chosen to be the output of the subsource somewhere extractor of Theorem 2.6.
Why does it work? We explain each of the two claims in the lemma in turn (and after each, comment on the important parameters and how they differ from Barak et al. [4]).
1. Z' has entropy. We need to show that Z' passes the test with high probability. We will point to the output string in C(Ẑ', Ŷ) which avoids R(Ẑ, Ŷ) with high probability as follows. In the analysis we will use the union bound on several events, one associated with each (poly(n) many) string in pSE(Ẑ, Ŷ). We note that by the definition of the response function, if we want to fix a particular element in the response set to a particular value, we can do this by fixing E(Z, i) and E(Y, i). This fixing keeps the restricted sources independent and loses only O(m) entropy. In the subsource of Z' guaranteed to exist by Theorem 2.6 we can afford to lose this entropy in Z'. Thus we conclude that one of its outputs is uniform. The probability that this output will equal any fixed value is thus 2^{-m}, completing the argument. We note that we can handle the polynomial output size of pSE, since the uniform string has length m = n^{Ω(1)} (something which could not be done with the technology available to Barak et al. [4]).
2. Z' has no entropy. We now need to guarantee that in the chosen subsources (which we choose) Ẑ, Ŷ, all strings in C = C(Ẑ', Ŷ) are in R(Ẑ, Ŷ). First notice that as Z' is fixed, C is only a function of Y. We set Ỹ to be the subsource of Y that fixes all strings in C = C(Y) to their most popular values (losing only ℓm entropy from Y). We take care of including these fixed strings in R(Z, Ỹ) one at a time, by restricting to subsources assuring that. Let α be any m-bit string we want to appear in R(Z, Ỹ). Recall that R(z, y)_i = V(E(z, i), E(y, i)). We pick a "good" seed i, and restrict Z, Ỹ to subsources with only O(m) less entropy by fixing E(Z, i) = a and E(Ỹ, i) = b to values (a, b) for which V(a, b) = α. This is repeated successively ℓ times, and results in the final subsources Ẑ, Ŷ on which Ẑ' fails with probability 1. Note that we keep reducing the entropy of our sources ℓ times, which necessitates that this ℓ be tiny (here we could not tolerate poly(n), and indeed can guarantee n^{o(1)}, at least on a subsource); this is one aspect of how crucial the subsource somewhere extractor SSE is to the construction.
We note that initially it seemed like the Challenge-Response mechanism as used in [4] could not be used to handle entropy that is significantly less than √n (which is approximately the bound that many of the previous constructions got stuck at). The techniques of [4] involved partitioning the sources into t pieces of length n/t each, with the hope that one of those parts would have a significant amount of entropy, yet there'd be enough entropy left over in the rest of the source (so that the source can be partitioned into a block source).
However it is not clear how to do this when the total entropy is less than √n. On the one hand we will have to partition our sources into blocks of length significantly more than √n (or the adversary could distribute a negligible fraction of entropy in all blocks). On the other hand, if our blocks are so large, a single block could contain all the entropy. Thus it was not clear how to use the challenge response mechanism to find a block source.
THE SUBSOURCE SOMEWHERE EXTRACTOR SSE
We now explain some of the ideas behind the construction
of the subsource somewhere extractor SSE of Theorem 2.6.
Consider the source X. We are seeking to find in it a somewhere
c-block-source, so that we can use it (together with Y )
in the somewhere block-source extractor of Corollary 2.8. Like in previous
works in the extractor literature (e.g. [19, 13]) we use a
"win-win" analysis which shows that either X is already a
somewhere c-block-source, or it has a condensed part which
contains a lot of the entropy of the source. In this case we
proceed recursively on that part. Continuing this way we
eventually reach a source so condensed that it must be a
somewhere block source. Note that in [4], the challenge response
mechanism was used to find a block source also, but
there the entropy was so high that they could afford to use
a tree of depth 1. They did not need to recurse or condense the sources.

[Figure 1: Analysis of the subsource somewhere extractor. A part of X with entropy k' is split into t blocks of n/t bits; each block is classified as low (entropy below k'/t), medium (between k'/t and k'/c) or high (between k'/c and k'), and the outputs of SB on the parts visited along the recursion path form the output rows.]
Consider the tree of parts of the source X evolved by
such recursion. Each node in the tree corresponds to some
interval of bit locations of the source, with the root node
corresponding to the entire source. A node is a child of another
if its interval is a subinterval of the parent. It can be
shown that some node in the tree is "good": it corresponds to a somewhere c-block-source, but we don't know which node is good. Since we only want a somewhere extractor, we can apply to each node the somewhere block-source extractor of Corollary 2.8; this will give us a random output in every "good" node of the tree. The usual idea is to output all these values (and in seeded extractors, merge them using the externally given random seed). However, we cannot afford to
do that here as there is no external seed and the number of
these outputs (the size of the tree) is far too large.
Our aim then will be to significantly prune this number
of candidates and in fact output only the candidates on one
path to a canonical "good" node. First we will give a very informal
description of how to do this (Figure 1). Before calling
SSE recursively on a subpart of a current part of X, we'll
use the "Challenge-Response" mechanism described above
to check if "it has entropy".
4
We will recurse only with the
first (in left-to-right order) part which passes the "entropy
test". Thus note that we will follow a single path on this
tree. The algorithm SSE will output only the sets of strings
produced by applying the somewhere c-block-extractor SB
on the parts visited along this path.
Now let us describe the algorithm for SSE. SSE will be
initially invoked as SSE(x, y), but will recursively call itself
with different inputs z which will always be substrings of x.
^4 We note that we ignore the additional complication that SSE will actually use recursion also to compute the challenge in the challenge-response mechanism.
Algorithm: SSE(z, y)
Let pSE(., .) be the somewhere extractor with a polynomial number of outputs of Proposition 2.3.
Let SB be the somewhere block source extractor of Corollary 2.8.
Global Parameters: t, the branching factor of the tree; k, the original entropy of the sources.
Output will be a set of strings.
1. If z is shorter than k, return the empty set, else continue.
2. Partition z into t equal parts z = z_1, z_2, . . . , z_t.
3. Compute the response set R(z, y), which is the set of strings output by pSE(z, y).
4. For i ∈ [t], compute the challenge set C(z_i, y), which is the set of outputs of SSE(z_i, y).
5. Let h be the smallest index for which the challenge set C(z_h, y) is not contained in the response set (set h = t if no such index exists).
6. Output SB(z, y) concatenated with SSE(z_h, y).
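The control flow of the algorithm can be rendered schematically as follows; pSE, SB and the global parameters are passed in as black boxes, and all of the entropy bookkeeping that the analysis worries about is invisible at this level (so this is only a sketch of the recursion, not of the full construction):

```python
from typing import Callable, List

def SSE(z: str, y: str,
        pSE: Callable[[str, str], List[str]],
        SB: Callable[[str, str], List[str]],
        t: int, k: int) -> List[str]:
    """Subsource somewhere extractor: control-flow sketch of the steps above."""
    if len(z) < k:                                   # step 1: too short, stop
        return []
    size = len(z) // t                               # step 2: t equal parts
    parts = [z[i * size:(i + 1) * size] for i in range(t)]
    responses = set(pSE(z, y))                       # step 3: response set
    h, h_challenges = t - 1, None                    # default h = t (last part)
    for i, part in enumerate(parts):                 # steps 4-5: scan left to right
        challenges = SSE(part, y, pSE, SB, t, k)
        if not set(challenges).issubset(responses):  # first unresponded challenge
            h, h_challenges = i, challenges
            break
    if h_challenges is None:                         # every challenge was responded
        h_challenges = SSE(parts[h], y, pSE, SB, t, k)
    return SB(z, y) + h_challenges                   # step 6: SB(z, y) ++ SSE(z_h, y)
```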
Proving that indeed there are subsources on which SSE
will follow a path to a "good" (for these subsources) node,
is the heart of the analysis. It is especially complex due
to the fact that the recursive call to SSE on subparts of
the current part is used to generate the Challenges for the
Challenge-Response mechanism. Since SSE works only on
subsources, we have to guarantee that restriction to these
does not hamper the behavior of SSE in past and future calls
to it.
Let us turn to the highlights of the analysis, for the proof of Theorem 2.6. Let k' be the entropy of the source Z at some place in this recursion. Either one of its blocks Z_i has entropy k'/c, in which case it is very condensed (since its size is n/t, for t ≫ c), or it must be that c of its blocks form a c-block-source with block entropy k'/t (which is sufficient for the extractor B used by SB). In the second case, the fact that SB(z, y) is part of the output of our SSE guarantees that we are somewhere random. If the second case doesn't hold, let Z_i be the leftmost condensed block. We want to ensure that (on appropriate subsources) SSE calls itself on that i-th subpart. To do so, we fix all Z_j for j < i to constants z_j. We are now in the position described in the Challenge-Response mechanism section, that (in each of the first i parts) there is either no entropy or lots of entropy. We further restrict to subsources as explained there which make all of the first i - 1 blocks fail the "entropy test", and the fact that Z_i still has lots of entropy after these restrictions (which we need to prove) ensures that indeed SSE will be recursively applied to it.
We note that while the procedure SSE can be described recursively
, the formal analysis of fixing subsources is actually
done globally, to ensure that indeed all entropy requirements
are met along the various recursive calls.
Let us remark on the choice of the branching parameter t. On the one hand, we'd like to keep it small, as it dominates the number of outputs t^c of SB, and thus the total number of outputs (which is t^c log_t n). For this purpose, any t = n^{o(1)} will do. On the other hand, t should be large enough so that condensing is faster than losing entropy. Here note that if Z is of length n, its child has length n/t, while the entropy shrinks only from k' to k'/c. A simple calculation shows that if k^{(log t)/(log c)} > n^2 then a c-block-source must exist along such a path before the length shrinks to k. Note that for k = n^{Ω(1)} a (large enough) constant t suffices (resulting in only a logarithmic number of outputs of SSE). This analysis is depicted pictorially in Figure 1.
THE FINAL DISPERSER D
Following is a rough description of our disperser D proving
Theorem 2.1. The high level structure of D will resemble the
structure of SSE - we will recursively split the source X and
look for entropy in the parts. However now we must output
a single value (rather than a set) which can take both values
0 and 1. This was problematic in SSE, even knowing where
the "good" part (containing a c-block-source) was! How can
we do so now?
We now have at our disposal a much more powerful tool
for generating challenges (and thus detecting entropy), namely
the subsource somewhere extractor SSE. Note that in constructing
SSE we only had essentially the somewhere c-block-source
extractor SB to (recursively) generate the challenges,
but it depended on a structural property of the block it was
applied on. Now SSE does not assume any structure on its
input sources except sufficient entropy.^5
Let us now give a high level description of the disperser
D
. It too will be a recursive procedure. If when processing
some part Z of X it "realizes" that a subpart Z
i
of Z has
entropy, but not all the entropy of Z (namely Z
i
, Z is a
2-block-source) then we will halt and produce the output
of D. Intuitively, thinking about the Challenge-Response
mechanism described above, the analysis implies that we
^5 There is a catch: it only works on subsources of them! This will cause us a lot of headache; we will elaborate on it later.
can either pass or fail Z_i (on appropriate subsources). But
this means that the outcome of this "entropy test" is a 1-bit
disperser!
To capitalize on this idea, we want to use SSE to identify
such a block-source in the recursion tree. As before, we scan
the blocks from left to right, and want to distinguish three
possibilities.
low: Z_i has low entropy. In this case we proceed to i + 1.
medium: Z_i has "medium" entropy (Z_i, Z is a block-source). In this case we halt and produce an output (zero or one).
high: Z_i has essentially all the entropy of Z. In this case we recurse on the condensed block Z_i.
As before, we use the Challenge-Response mechanism (with
a twist). We will compute challenges C(Z_i, Y) and responses R(Z, Y), all strings of length m. The responses are computed exactly as before, using the somewhere extractor pSE. The Challenges are computed using our subsource somewhere extractor SSE.
We really have 4 possibilities to distinguish, since when we halt we also need to decide which output bit we give. We will do so by deriving three tests from the above challenges and responses: (C_H, R_H), (C_M, R_M), (C_L, R_L) for high, medium and low respectively, as follows. Let m >= m_H >> m_M >> m_L
be appropriate integers: then in each of the tests above
we restrict ourselves to prefixes of all strings of the appropriate lengths only. So every string in C_M will be a prefix of length m_M of some string in C_H. Similarly, every string in R_L is the length m_L prefix of some string in R_H. Now it is immediately clear that if C_M is contained in R_M, then C_L is contained in R_L. Thus these tests are monotone: if our sample fails the high test, it will definitely fail all tests.
Algorithm: D(z, y)
Let pSE(., .) be the somewhere extractor with a polynomial number of outputs of Proposition 2.3.
Let SSE(., .) be the subsource somewhere extractor of Theorem 2.6.
Global Parameters: t, the branching factor of the tree; k, the original entropy of the sources.
Local Parameters for the recursive level: m_L << m_M << m_H.
Output will be an element of {0, 1}.
1. If z is shorter than k, return 0.
2. Partition z into t equal parts z = z_1, z_2, . . . , z_t.
3. Compute three response sets R_L, R_M, R_H using pSE(z, y). R_j will be the prefixes of length m_j of the strings in pSE(z, y).
4. For each i ∈ [t], compute three challenge sets C^i_L, C^i_M, C^i_H using SSE(z_i, y). C^i_j will be the prefixes of length m_j of the strings in SSE(z_i, y).
5. Let h be the smallest index i for which the challenge set C^i_L is not contained in the response set R_L; if there is no such index, output 0 and halt.
6. If C^h_H is contained in R_H and C^h_M is contained in R_M, output 0 and halt. If C^h_H is contained in R_H but C^h_M is not contained in R_M, output 1 and halt.
7. Output D(z_h, y).

[Figure 2: Analysis of the disperser. The recursion descends into high blocks (n bits, then n/t bits, then n/t^2 bits, e.g. X, X_3, (X_3)_4), skips low blocks, and halts with output 0 or 1 at a medium block according to the pattern of passed and failed tests.]
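Again only as a control-flow sketch (pSE and SSE are black boxes here, and the subtleties about subsources discussed below are not visible at this level), the disperser can be written as:

```python
from typing import Callable, List

def D(z: str, y: str,
      pSE: Callable[[str, str], List[str]],
      SSE: Callable[[str, str], List[str]],
      t: int, k: int, m_L: int, m_M: int, m_H: int) -> int:
    """Disperser: control-flow sketch of the steps above."""
    if len(z) < k:                                        # step 1
        return 0
    size = len(z) // t                                    # step 2
    parts = [z[i * size:(i + 1) * size] for i in range(t)]
    resp = pSE(z, y)                                      # step 3: R_L, R_M, R_H as prefix sets
    R = {m: {r[:m] for r in resp} for m in (m_L, m_M, m_H)}
    for part in parts:                                    # steps 4-5: scan for the first part
        chal = SSE(part, y)                               # whose low challenge is unresponded
        C = {m: {c[:m] for c in chal} for m in (m_L, m_M, m_H)}
        if C[m_L].issubset(R[m_L]):
            continue                                      # low block: move on
        if C[m_H].issubset(R[m_H]):                       # step 6: medium block, halt
            return 0 if C[m_M].issubset(R[m_M]) else 1
        return D(part, y, pSE, SSE, t, k, m_L, m_M, m_H)  # step 7: high block, recurse
    return 0                                              # no index found in step 5
```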
First note the obvious monotonicity of the tests. If Z_i fails one of the tests it will certainly fail for shorter strings. Thus there are only four outcomes to the three tests, written in the order (low, medium, high): (pass, pass, pass), (pass, pass, fail), (pass, fail, fail) and (fail, fail, fail). Conceptually, the algorithm is making the following decisions using these four outcomes:
1. (fail, fail, fail): Assume Z_i has low entropy and proceed to block i + 1.
2. (pass, fail, fail): Assume Z_i is medium, halt and output 0.
3. (pass, pass, fail): Assume Z_i is medium, halt and output 1.
4. (pass, pass, pass): Assume Z_i is high and recurse on Z_i.
The analysis of this idea (depicted in Figure 2) turns out to be more complex than it seems. There are two reasons for that. Now we briefly explain them and the way to overcome them in the construction and analysis.
The first reason is the fact mentioned above, that SSE, which generates the challenges, works only on subsources of the original sources. Restricting to these subsources at some level of the recursion (as required by the analysis of the test) causes entropy loss which affects both definitions (such as the entropy thresholds for decisions) and correctness of SSE in higher levels of recursion. Controlling this entropy loss is achieved by calling SSE recursively with smaller and smaller entropy requirements, which in turn limits the entropy which will be lost by these restrictions. In order not to lose all the entropy for this reason alone, we must work with special parameters of SSE, essentially requiring that at termination it has almost all the entropy it started with.
The second reason is the analysis of the test when we are in a medium block. In contrast with the above situation, we cannot consider the value of Z_i fixed when we need it to fail on the Medium and Low tests. We need to show that for these two tests (given a pass for High), they come up both (pass, fail) and (fail, fail), each with positive probability.
Since the length of Medium challenges and responses is m_M, the probability of failure is at least exp(-O(m_M)) (this follows relatively easily from the fact that the responses are somewhere random). If the Medium test fails so does the Low test, and thus (fail, fail) has a positive probability and our disperser D outputs 0 with positive probability.
To bound (pass, fail) we first observe (with similar reasoning) that the Low test fails with probability at least exp(-O(m_L)). But we want the Medium test to pass at the same time. This probability is at least the probability that Low fails minus the probability that Medium fails. We already have a bound on the latter: it is at most poly(n) · exp(-m_M). Here our control of the different lengths comes into play: we can make m_L sufficiently smaller than m_M to make this difference positive. We conclude that our disperser D outputs 1 with positive probability as well.
Finally, we need to take care of termination: we have to ensure that the recurrence always arrives at a medium subpart, but it is easy to choose entropy thresholds for low, medium and high to ensure that this happens.
RESILIENCY AND DEFICIENCY
In this section we will briefly discuss an issue which arises in our construction that we glossed over in the previous sections. Recall our definition of subsources:
Definition 6.1 (Subsources). Given random variables Z and Ẑ on {0, 1}^n we say that Ẑ is a deficiency d subsource of Z and write Ẑ ⊆ Z if there exists a set A ⊆ {0, 1}^n such that (Z|A) = Ẑ and Pr[Z ∈ A] ≥ 2^{-d}.
Recall that we were able to guarantee that our algorithms
made the right decisions only on subsources of the original
source. For example, in the construction of our final disperser
, to ensure that our algorithms correctly identify the
right high block to recurse on, we were only able to guarantee
that there are subsources of the original sources in
which our algorithm makes the correct decision with high
probability. Then, later in the analysis we had to further
restrict the source to even smaller subsources. This leads to
complications, since the original event of picking the correct
high
block, which occurred with high probability, may become
an event which does not occur with high probability
in the current subsource. To handle these kinds of issues,
we will need to be very careful in measuring how small our
subsources are.
In the formal analysis we introduce the concept of resiliency
to deal with this. To give an idea of how this works,
here is the actual definition of the subsource somewhere extractor
that we use in the formal analysis.
Definition 6.2 (subsource somewhere extractor). A function SSE : {0, 1}^n × {0, 1}^n → ({0, 1}^m)^{nrows} is a subsource somewhere extractor with nrows output rows, entropy threshold k, deficiency def, resiliency res and error ε if for every (n, k)-sources X, Y there exist a deficiency def subsource X_good of X and a deficiency def subsource Y_good of Y such that for every deficiency res subsource X' of X_good and every deficiency res subsource Y' of Y_good, the random variable SSE(X', Y') is ε-close to an nrows × m somewhere random distribution.
It turns out that our subsource somewhere extractor does
satisfy this stronger definition. The advantage of this definition
is that it says that once we restrict our attention to
the good subsources X_good, Y_good, we have the freedom to further
restrict these subsources to smaller subsources, as long
as our final subsources do not lose more entropy than the
resiliency permits.
This issue of managing the resiliency for the various objects
that we construct is one of the major technical challenges
that we had to overcome in our construction.
OPEN PROBLEMS
Better Independent Source Extractors
A bottleneck to
improving our disperser is the block versus general
source extractor of Theorem 2.7. A good next step
would be to try to build an extractor for one block
source (with only a constant number of blocks) and
one other independent source which works for polylogarithmic
entropy, or even an extractor for a constant
number of sources that works for sub-polynomial entropy
.
Simple Dispersers
While our disperser is polynomial time
computable, it is not as explicit as one might have
hoped. For instance the Ramsey Graph construction
of Frankl-Wilson is extremely simple: For a prime p,
let the vertices of the graph be all subsets of [p^3] of size p^2 - 1. Two vertices S, T are adjacent if and only if |S ∩ T| ≡ -1 (mod p). It would be nice to find a good
disperser that beats the Frankl-Wilson construction,
yet is comparable in simplicity.
REFERENCES
[1] N. Alon. The Shannon capacity of a union. Combinatorica, 18, 1998.
[2] B. Barak. A simple explicit construction of an n^{Õ(log n)}-Ramsey graph. Technical report, Arxiv, 2006. http://arxiv.org/abs/math.CO/0601651.
[3] B. Barak, R. Impagliazzo, and A. Wigderson. Extracting randomness using few independent sources. In Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science, pages 384-393, 2004.
[4] B. Barak, G. Kindler, R. Shaltiel, B. Sudakov, and A. Wigderson. Simulating independence: New constructions of condensers, Ramsey graphs, dispersers, and extractors. In Proceedings of the 37th Annual ACM Symposium on Theory of Computing, pages 1-10, 2005.
[5] J. Bourgain. More on the sum-product phenomenon in prime fields and its applications. International Journal of Number Theory, 1:1-32, 2005.
[6] J. Bourgain, N. Katz, and T. Tao. A sum-product estimate in finite fields, and applications. Geometric and Functional Analysis, 14:27-57, 2004.
[7] M. Capalbo, O. Reingold, S. Vadhan, and A. Wigderson. Randomness conductors and constant-degree lossless expanders. In Proceedings of the 34th Annual ACM Symposium on Theory of Computing, pages 659-668, 2002.
[8] B. Chor and O. Goldreich. Unbiased bits from sources of weak randomness and probabilistic communication complexity. SIAM Journal on Computing, 17(2):230-261, 1988.
[9] P. Frankl and R. M. Wilson. Intersection theorems with geometric consequences. Combinatorica, 1(4):357-368, 1981.
[10] P. Gopalan. Constructing Ramsey graphs from boolean function representations. In Proceedings of the 21st Annual IEEE Conference on Computational Complexity, 2006.
[11] V. Grolmusz. Low rank co-diagonal matrices and Ramsey graphs. Electr. J. Comb., 7, 2000.
[12] V. Guruswami. Better extractors for better codes? Electronic Colloquium on Computational Complexity (ECCC), (080), 2003.
[13] C. J. Lu, O. Reingold, S. Vadhan, and A. Wigderson. Extractors: Optimal up to constant factors. In Proceedings of the 35th Annual ACM Symposium on Theory of Computing, pages 602-611, 2003.
[14] P. Miltersen, N. Nisan, S. Safra, and A. Wigderson. On data structures and asymmetric communication complexity. Journal of Computer and System Sciences, 57:37-49, 1998.
[15] N. Nisan and D. Zuckerman. More deterministic simulation in logspace. In Proceedings of the 25th Annual ACM Symposium on Theory of Computing, pages 235-244, 1993.
[16] P. Pudlák and V. Rödl. Pseudorandom sets and explicit constructions of Ramsey graphs. Submitted for publication, 2004.
[17] A. Rao. Extractors for a constant number of polynomially small min-entropy independent sources. In Proceedings of the 38th Annual ACM Symposium on Theory of Computing, 2006.
[18] R. Raz. Extractors with weak random seeds. In Proceedings of the 37th Annual ACM Symposium on Theory of Computing, pages 11-20, 2005.
[19] O. Reingold, R. Shaltiel, and A. Wigderson. Extracting randomness via repeated condensing. In Proceedings of the 41st Annual IEEE Symposium on Foundations of Computer Science, pages 22-31, 2000.
[20] M. Santha and U. V. Vazirani. Generating quasi-random sequences from semi-random sources. Journal of Computer and System Sciences, 33:75-87, 1986.
[21] R. Shaltiel. Recent developments in explicit constructions of extractors. Bulletin of the European Association for Theoretical Computer Science, 77:67-95, 2002.
[22] A. Ta-Shma and D. Zuckerman. Extractor codes. IEEE Transactions on Information Theory, 50, 2004.
[23] U. Vazirani. Towards a strong communication complexity theory or generating quasi-random sequences from two communicating slightly-random sources (extended abstract). In Proceedings of the 17th Annual ACM Symposium on Theory of Computing, pages 366-378, 1985.
[24] A. Wigderson and D. Zuckerman. Expanders that beat the eigenvalue bound: Explicit construction and applications. Combinatorica, 19(1):125-138, 1999.
| sum-product theorem;distribution;explicit disperser;construction of disperser;Extractors;recursion;subsource somewhere extractor;structure;bipartite graph;extractors;independent sources;extractor;tools;Ramsey Graphs;disperser;polynomial time computable disperser;resiliency;Theorem;Ramsey graphs;block-sources;deficiency;termination;entropy;Ramsey graph;Independent Sources;algorithms;independent source;subsource;Dispersers;randomness extraction |
10 | A Frequency-based and a Poisson-based Definition of the Probability of Being Informative | This paper reports on theoretical investigations about the assumptions underlying the inverse document frequency (idf ). We show that an intuitive idf -based probability function for the probability of a term being informative assumes disjoint document events. By assuming documents to be independent rather than disjoint, we arrive at a Poisson-based probability of being informative. The framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models. | INTRODUCTION AND BACKGROUND
The inverse document frequency (idf ) is one of the most
successful parameters for a relevance-based ranking of retrieved
objects. With N being the total number of documents
, and n(t) being the number of documents in which
term t occurs, the idf is defined as follows:
idf(t) := -log(n(t)/N), 0 <= idf(t) < ∞
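As a small illustration with a toy collection of our own (not from the paper), the definition translates directly into code; note that idf is only defined for terms that occur at least once:

```python
import math

def idf(term, documents):
    """idf(t) = -log(n(t)/N); assumes n(t) > 0."""
    N = len(documents)
    n_t = sum(1 for doc in documents if term in doc)
    return -math.log(n_t / N)

docs = [{"the", "cat"}, {"the", "dog"}, {"a", "cat", "sat"}]
print(idf("the", docs))  # occurs in 2 of 3 documents: low idf
print(idf("sat", docs))  # occurs in 1 of 3 documents: higher idf
```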
Ranking based on the sum of the idf -values of the query
terms that occur in the retrieved documents works well, this
has been shown in numerous applications. Also, it is well
known that the combination of a document-specific term
weight and idf works better than idf alone. This approach
is known as tf-idf , where tf(t, d) (0 <= tf(t, d) <= 1) is
the so-called term frequency of term t in document d. The
idf reflects the discriminating power (informativeness) of a
term, whereas the tf reflects the occurrence of a term.
The idf alone works better than the tf alone does. An explanation
might be the problem of tf with terms that occur
in many documents; let us refer to those terms as "noisy"
terms. We use the notion of "noisy" terms rather than "fre-quent"
terms since frequent terms leaves open whether we
refer to the document frequency of a term in a collection or
to the so-called term frequency (also referred to as within-document
frequency) of a term in a document. We associate
"noise" with the document frequency of a term in a
collection, and we associate "occurrence" with the within-document
frequency of a term. The tf of a noisy term might
be high in a document, but noisy terms are not good candidates
for representing a document. Therefore, the removal
of noisy terms (known as "stopword removal") is essential
when applying tf . In a tf-idf approach, the removal of stopwords
is conceptually obsolete, if stopwords are just words
with a low idf .
From a probabilistic point of view, tf is a value with a
frequency-based probabilistic interpretation whereas idf has
an "informative" rather than a probabilistic interpretation.
The missing probabilistic interpretation of idf is a problem
in probabilistic retrieval models where we combine uncertain
knowledge of different dimensions (e.g.: informativeness of
terms, structure of documents, quality of documents, age
of documents, etc.) such that a good estimate of the probability
of relevance is achieved. An intuitive solution is a
normalisation of idf such that we obtain values in the interval
[0; 1]. For example, consider a normalisation based on
the maximal idf -value. Let T be the set of terms occurring
in a collection.
P_freq(t is informative) := idf(t) / maxidf
maxidf := max({idf(t) | t ∈ T}), maxidf <= -log(1/N)
minidf := min({idf(t) | t ∈ T}), minidf >= 0
minidf/maxidf <= P_freq(t is informative) <= 1.0
This frequency-based probability function covers the interval [0; 1] if the minimal idf is equal to zero, which is the case if we have at least one term that occurs in all documents. Can we interpret P_freq, the normalised idf, as the probability that the term is informative?
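The frequency-based normalisation can be sketched in the same toy setting (the collection and names are ours):

```python
import math

def p_freq_informative(term, documents):
    """P_freq(t is informative) = idf(t) / maxidf."""
    N = len(documents)
    def idf(t):  # idf(t) = -log(n(t)/N)
        return -math.log(sum(1 for d in documents if t in d) / N)
    vocabulary = {t for doc in documents for t in doc}
    return idf(term) / max(idf(t) for t in vocabulary)

docs = [{"the", "cat"}, {"the", "dog"}, {"a", "cat", "sat"}]
print(p_freq_informative("sat", docs))  # 1.0 for a maximally rare term
print(p_freq_informative("the", docs))  # smaller value for a frequent term
```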
When investigating the probabilistic interpretation of the
normalised idf, we made several observations related to disjointness and independence of document events. These observations are reported in section 3. We show in section 3.1 that the frequency-based noise probability n(t)/N used in the classic idf-definition can be explained by three assumptions: binary term occurrence, constant document containment and disjointness of document containment events. In section 3.2 we show that by assuming independence of documents, we obtain 1 - e^{-1} ≈ 1 - 0.37 as the upper bound of the noise probability of a term. The value e^{-1} is related to the logarithm and we investigate in section 3.3 the link to information theory. In section 4, we link the results of the previous sections to probability theory. We show the steps from possible worlds to binomial distribution and Poisson distribution. In section 5, we emphasise that the theoretical framework of this paper is applicable for both idf and tf. Finally, in section 6, we base the definition of the probability of being informative on the results of the previous sections and compare frequency-based and Poisson-based definitions.
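To see numerically where the 1 - e^{-1} ≈ 0.63 upper bound comes from, consider (as our own illustration of the independence view, ahead of definition 3) a term that occurs in all N documents, with each document containment treated as an independent event of probability 1/N:

```python
import math

def independent_noise(n_t, N):
    """Noise probability of a term occurring in n_t of N documents when
    document containment events are independent with probability 1/N each."""
    return 1.0 - (1.0 - 1.0 / N) ** n_t

for N in (10, 100, 10_000):
    print(N, independent_noise(N, N))      # term occurring in every document
print("limit 1 - e^-1 =", 1.0 - math.exp(-1.0))  # about 0.632
```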
BACKGROUND
The relationship between frequencies, probabilities and
information theory (entropy) has been the focus of many
researchers. In this background section, we focus on work
that investigates the application of the Poisson distribution
in IR since a main part of the work presented in this paper
addresses the underlying assumptions of Poisson.
[4] proposes a 2-Poisson model that takes into account
the different nature of relevant and non-relevant documents,
rare terms (content words) and frequent terms (noisy terms,
function words, stopwords). [9] shows experimentally that
most of the terms (words) in a collection are distributed
according to a low dimension n-Poisson model. [10] uses a
2-Poisson model for including term frequency-based probabilities
in the probabilistic retrieval model. The non-linear
scaling of the Poisson function showed significant improvement
compared to a linear frequency-based probability. The
Poisson model was here applied to the term frequency of a
term in a document. We will generalise the discussion by
pointing out that document frequency and term frequency
are dual parameters in the collection space and the document
space, respectively. Our discussion of the Poisson distribution
focuses on the document frequency in a collection
rather than on the term frequency in a document.
[7] and [6] address the deviation of idf and Poisson, and
apply Poisson mixtures to achieve better Poisson-based estimates
. The results proved again experimentally that a one-dimensional
Poisson does not work for rare terms, therefore
Poisson mixtures and additional parameters are proposed.
[3], section 3.3, illustrates and summarises comprehensively
the relationships between frequencies, probabilities
and Poisson. Different definitions of idf are put into context
and a notion of "noise" is defined, where noise is viewed
as the complement of idf . We use in our paper a different
notion of noise: we consider a frequency-based noise that
corresponds to the document frequency, and we consider a
term noise that is based on the independence of document
events.
[11], [12], [8] and [1] link frequencies and probability estimation
to information theory. [12] establishes a framework
in which information retrieval models are formalised based
on probabilistic inference. A key component is the use of a
space of disjoint events, where the framework mainly uses
terms as disjoint events. The probability of being informative
defined in our paper can be viewed as the probability
of the disjoint terms in the term space of [12].
[8] address entropy and bibliometric distributions. Entropy
is maximal if all events are equiprobable, and the frequency-based Lotka law (N/i^λ is the number of scientists that have written i publications, where N and λ are distribution parameters), Zipf and the Pareto distribution are related. The Pareto distribution is the continuous case of the Lotka law, and Lotka and Zipf show equivalences. The Pareto
distribution is used by [2] for term frequency normalisation.
The Pareto distribution compares to the Poisson distribution in the sense that Pareto is "fat-tailed", i.e. Pareto assigns larger probabilities to large numbers of events than Poisson distributions do. This makes Pareto interesting, since Poisson is felt to be too radical on frequent events. We restrict ourselves in this paper to the discussion of Poisson; however, our results show that indeed a smoother distribution than Poisson promises to be a good candidate for improving the estimation of probabilities in information retrieval.
[1] establishes a theoretical link between tf-idf and information
theory and the theoretical research on the meaning
of tf-idf "clarifies the statistical model on which the different
measures are commonly based". This motivation matches
the motivation of our paper: We investigate theoretically
the assumptions of classical idf and Poisson for a better
understanding of parameter estimation and combination.
FROM DISJOINT TO INDEPENDENT
We define and discuss in this section three probabilities: the frequency-based noise probability (definition 1), the total noise probability for disjoint documents (definition 2), and the noise probability for independent documents (definition 3).
3.1
Binary occurrence, constant containment
and disjointness of documents
We show in this section that the frequency-based noise probability n(t)/N in the idf definition can be explained as a total probability with binary term occurrence, constant document containment and disjointness of document containments.
We refer to a probability function as binary if for all events the probability is either 1.0 or 0.0. The occurrence probability P(t|d) is binary if P(t|d) is equal to 1.0 if t ∈ d, and P(t|d) is equal to 0.0 otherwise.

P(t|d) is binary :⇔ P(t|d) = 1.0 ∨ P(t|d) = 0.0
We refer to a probability function as constant if for all events the probability is equal. The document containment probability reflects the chance that a document occurs in a collection. This containment probability is constant if we have no information about the document containment or we ignore that documents differ in containment. Containment could be derived, for example, from the size, quality, age, links, etc. of a document. For a constant containment in a collection with N documents, 1/N is often assumed as the containment probability. We generalise this definition and introduce the constant λ where 0 ≤ λ ≤ N. The containment of a document d depends on the collection c; this is reflected by the notation P(d|c) used for the containment
of a document.

P(d|c) is constant :⇔ ∀d : P(d|c) = λ/N
For disjoint documents that cover the whole event space, we set λ = 1 and obtain Σ_d P(d|c) = 1.0. Next, we define the frequency-based noise probability and the total noise probability for disjoint documents. We introduce the event notation "t is noisy" and "t occurs" for making the difference between the noise probability P(t is noisy|c) in a collection and the occurrence probability P(t occurs|d) in a document more explicit, thereby keeping in mind that the noise probability corresponds to the occurrence probability of a term in a collection.
Definition 1. The frequency-based term noise probability:

P_freq(t is noisy|c) := n(t)/N
Definition 2. The total term noise probability for disjoint documents:

P_dis(t is noisy|c) := Σ_d P(t occurs|d) · P(d|c)
Now, we can formulate a theorem that makes explicit the assumptions that explain the classical idf.
Theorem 1. IDF assumptions: If the occurrence probability P(t|d) of term t over documents d is binary, and the containment probability P(d|c) of documents d is constant, and document containments are disjoint events, then the noise probability for disjoint documents is equal to the frequency-based noise probability.

P_dis(t is noisy|c) = P_freq(t is noisy|c)
Proof. The assumptions are:

∀d : (P(t occurs|d) = 1 ∨ P(t occurs|d) = 0)
P(d|c) = 1/N
Σ_d P(d|c) = 1.0

We obtain:

P_dis(t is noisy|c) = Σ_{d | t ∈ d} 1/N = n(t)/N = P_freq(t is noisy|c)
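To make these assumptions concrete, here is a minimal Python sketch (the toy collection, the term and all variable names are our own illustration, not taken from the paper) that computes the total noise for disjoint documents and checks that it equals the frequency-based noise n(t)/N:

# A minimal sketch, assuming a toy collection of documents given as term sets.
docs = [{"web", "crawler"}, {"web", "poisson"}, {"idf", "poisson"}, {"web"}]
N = len(docs)
term = "web"

# Binary occurrence: P(t occurs|d) = 1.0 if t in d, else 0.0.
# Constant, disjoint containment: P(d|c) = 1/N.
p_dis = sum((1.0 if term in d else 0.0) * (1.0 / N) for d in docs)

n_t = sum(1 for d in docs if term in d)   # document frequency n(t)
p_freq = n_t / N                          # frequency-based noise

assert abs(p_dis - p_freq) < 1e-12        # theorem 1: the two coincide
print(p_dis, p_freq)                      # 0.75 0.75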
The above result is not a surprise but it is a mathematical
formulation of assumptions that can be used to explain
the classical idf . The assumptions make explicit that the
different types of term occurrence in documents (frequency
of a term, importance of a term, position of a term, document
part where the term occurs, etc.) and the different
types of document containment (size, quality, age, etc.) are
ignored, and document containments are considered as disjoint
events.
From the assumptions, we can conclude that idf (frequency-based
noise, respectively) is a relatively simple but strict
estimate.
Still, idf works well.
This could be explained
by a leverage effect that justifies the binary occurrence and
constant containment: The term occurrence for small documents
tends to be larger than for large documents, whereas
the containment for small documents tends to be smaller
than for large documents.
From that point of view, idf means that P(t ∧ d|c) is constant for all d in which t occurs, and P(t ∧ d|c) is zero otherwise. The occurrence and containment can be term-specific. For example, set P(t ∧ d|c) = 1/N_D(c) if t occurs in d, where N_D(c) is the number of documents in collection c (we used before just N). We choose a document-dependent occurrence P(t|d) := 1/N_T(d), i.e. the occurrence probability is equal to the inverse of N_T(d), which is the total number of terms in document d. Next, we choose the containment P(d|c) := N_T(d)/N_T(c) · N_T(c)/N_D(c), where N_T(d)/N_T(c) is a document length normalisation (number of terms in document d divided by the number of terms in collection c), and N_T(c)/N_D(c) is a constant factor of the collection (number of terms in collection c divided by the number of documents in collection c). We obtain P(t ∧ d|c) = 1/N_D(c).
In a tf-idf -retrieval function, the tf -component reflects
the occurrence probability of a term in a document. This is
a further explanation why we can estimate the idf with a
simple P (t|d), since the combined tf-idf contains the occurrence
probability. The containment probability corresponds
to a document normalisation (document length normalisation
, pivoted document length) and is normally attached to
the tf -component or the tf-idf -product.
The disjointness assumption is typical for frequency-based
probabilities. From a probability theory point of view, we
can consider documents as disjoint events, in order to achieve
a sound theoretical model for explaining the classical idf .
But does disjointness reflect the real world where the containment
of a document appears to be independent of the
containment of another document? In the next section, we
replace the disjointness assumption by the independence assumption
.
3.2
The upper bound of the noise probability
for independent documents
For independent documents, we compute the probability of a disjunction as usual, namely as the complement of the probability of the conjunction of the negated events:

P(d_1 ∨ ... ∨ d_N) = 1 - P(¬d_1 ∧ ... ∧ ¬d_N) = 1 - Π_d (1 - P(d))
The noise probability can be considered as the conjunction of the term occurrence and the document containment:

P(t is noisy|c) := P(t occurs ∧ (d_1 ∨ ... ∨ d_N) | c)

For disjoint documents, this view of the noise probability led to definition 2. For independent documents, we use now the conjunction of negated events.
Definition 3. The term noise probability for independent documents:

P_in(t is noisy|c) := 1 - Π_d (1 - P(t occurs|d) · P(d|c))

With binary occurrence and a constant containment P(d|c) := λ/N, we obtain the term noise of a term t that occurs in n(t) documents:

P_in(t is noisy|c) = 1 - (1 - λ/N)^n(t)
For binary occurrence and disjoint documents, the containment probability was 1/N. Now, with independent documents, we can use λ as a collection parameter that controls the average containment probability. We show through the next theorem that the upper bound of the noise probability depends on λ.
Theorem 2. The upper bound of being noisy: If the occurrence P(t|d) is binary, and the containment P(d|c) is constant, and document containments are independent events, then 1 - e^(-λ) is the upper bound of the noise probability.

∀t : P_in(t is noisy|c) < 1 - e^(-λ)
Proof. The upper bound of the independent noise probability follows from the limit lim_{N→∞} (1 + x/N)^N = e^x (see any comprehensive math book, for example [5], for the convergence equation of the Euler function). With x = -λ, we obtain:

lim_{N→∞} (1 - λ/N)^N = e^(-λ)
For the term noise, we have:

P_in(t is noisy|c) = 1 - (1 - λ/N)^n(t)

P_in(t is noisy|c) is strictly monotonic: the noise of a term t_n is less than the noise of a term t_{n+1}, where t_n occurs in n documents and t_{n+1} occurs in n+1 documents. Therefore, a term with n(t) = N has the largest noise probability. For a collection with infinitely many documents, the upper bound of the noise probability for terms t_N that occur in all documents becomes:
lim_{N→∞} P_in(t_N is noisy) = lim_{N→∞} 1 - (1 - λ/N)^N = 1 - e^(-λ)
By applying an independence rather than a disjointness assumption, we obtain the probability e^(-1) that a term is not noisy even if the term does occur in all documents. In the disjoint case, the noise probability is one for a term that occurs in all documents.
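As a quick numerical illustration (a minimal sketch with example values of our own choosing, not taken from the paper), the independent noise probability of a term occurring in all N documents indeed approaches the bound 1 - e^(-λ):

import math

# Independent noise for a term with n(t) = N, compared with the bound 1 - exp(-lam).
lam = 1.0
for N in (10, 100, 10_000, 1_000_000):
    p_in = 1 - (1 - lam / N) ** N
    print(N, round(p_in, 6), round(1 - math.exp(-lam), 6))
# The complement exp(-lam), i.e. about 0.37 for lam = 1, is the probability
# of not being noisy even for a term that occurs in every document.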
If we view P(d|c) := λ/N as the average containment, then λ is large for a term that occurs mostly in large documents, and λ is small for a term that occurs mostly in small documents. Thus, the noise of a term t is large if t occurs in n(t) large documents and the noise is smaller if t occurs in small documents. Alternatively, we can assume a constant containment and a term-dependent occurrence. If we assume P(d|c) := 1, then P(t|d) := λ/N can be interpreted as the average probability that t represents a document. The common assumption is that the average containment or occurrence probability is proportional to n(t). However, here is additional potential: the statistical laws (see [3] on Luhn and Zipf) indicate that the average probability could follow a normal distribution, i.e. small probabilities for small n(t) and large n(t), and larger probabilities for medium n(t).

For the monotonic case we investigate here, the noise of a term with n(t) = 1 is equal to 1 - (1 - λ/N) = λ/N and the noise of a term with n(t) = N is close to 1 - e^(-λ). In the next section, we relate the value e^(-λ) to information theory.
3.3
The probability of a maximal informative
signal
The probability e^(-1) is special in the sense that a signal with that probability is a signal with maximal information, as derived from the entropy definition. Consider the definition of the entropy contribution H(t) of a signal t:

H(t) := P(t) · (- ln P(t))
We form the first derivative for computing the optimum:

∂H(t)/∂P(t) = -ln P(t) + (-1/P(t)) · P(t) = -(1 + ln P(t))

For obtaining optima, we use:

0 = -(1 + ln P(t))

The entropy contribution H(t) is maximal for P(t) = e^(-1). This result does not depend on the base of the logarithm, as we see next:

∂H(t)/∂P(t) = -log_b P(t) + (-1/(P(t) · ln b)) · P(t) = -(1/ln b + log_b P(t)) = -(1 + ln P(t))/ln b
We summarise this result in the following theorem:
Theorem 3. The probability of a maximal informative signal: The probability P_max = e^(-1) ≈ 0.37 is the probability of a maximal informative signal. The entropy of a maximal informative signal is H_max = e^(-1).

Proof. The probability and entropy follow from the derivation above.
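A minimal numeric check of this optimum (our own illustration, using a simple grid search rather than the analytic derivation):

import math

# H(P) = -P * ln(P) peaks at P = 1/e, with maximal value 1/e.
H = lambda p: -p * math.log(p)
best_p = max((k / 10000 for k in range(1, 10000)), key=H)
print(round(best_p, 3), round(H(best_p), 3), round(1 / math.e, 3))   # ~0.368 0.368 0.368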
The complement of the maximal noise probability is e^(-λ), and we are looking now for a generalisation of the entropy definition such that e^(-λ) is the probability of a maximal informative signal. We can generalise the entropy definition by computing the integral of λ + ln P(t), i.e. this derivative is zero for e^(-λ). We obtain a generalised entropy:

∫ -(λ + ln P(t)) d(P(t)) = P(t) · (1 - λ - ln P(t))
The generalised entropy corresponds for λ = 1 to the classical entropy. By moving from disjoint to independent documents, we have established a link between the complement of the noise probability of a term that occurs in all documents and information theory. Next, we link independent documents to probability theory.
THE LINK TO PROBABILITY THEORY
We review for independent documents three concepts of
probability theory: possible worlds, binomial distribution
and Poisson distribution.
4.1
Possible Worlds
Each conjunction of document events (for each document,
we consider two document events: the document can be
true or false) is associated with a so-called possible world.
For example, consider the eight possible worlds for three
documents (N = 3).
world w    conjunction
w_7        d_1 ∧ d_2 ∧ d_3
w_6        d_1 ∧ d_2 ∧ ¬d_3
w_5        d_1 ∧ ¬d_2 ∧ d_3
w_4        d_1 ∧ ¬d_2 ∧ ¬d_3
w_3        ¬d_1 ∧ d_2 ∧ d_3
w_2        ¬d_1 ∧ d_2 ∧ ¬d_3
w_1        ¬d_1 ∧ ¬d_2 ∧ d_3
w_0        ¬d_1 ∧ ¬d_2 ∧ ¬d_3
With each world w, we associate a probability μ(w), which is equal to the product of the single probabilities of the document events.

world w    probability μ(w)
w_7        (λ/N)^3 · (1 - λ/N)^0
w_6        (λ/N)^2 · (1 - λ/N)^1
w_5        (λ/N)^2 · (1 - λ/N)^1
w_4        (λ/N)^1 · (1 - λ/N)^2
w_3        (λ/N)^2 · (1 - λ/N)^1
w_2        (λ/N)^1 · (1 - λ/N)^2
w_1        (λ/N)^1 · (1 - λ/N)^2
w_0        (λ/N)^0 · (1 - λ/N)^3
The sum over the possible worlds in which k documents are
true and N -k documents are false is equal to the probability
function of the binomial distribution, since the binomial
coefficient yields the number of possible worlds in which k
documents are true.
4.2
Binomial distribution
The binomial probability function yields the probability that k of N events are true, where each event is true with the single event probability p:

P(k) := binom(N, k, p) := (N choose k) · p^k · (1 - p)^(N-k)
The single event probability is usually defined as p := λ/N, i.e. p is inversely proportional to N, the total number of events. With this definition of p, we obtain for an infinite number of documents the following limit for the product of the binomial coefficient and p^k:

lim_{N→∞} (N choose k) · p^k = lim_{N→∞} [N·(N-1)·...·(N-k+1) / k!] · (λ/N)^k = λ^k / k!
The limit is close to the actual value for k << N. For large k, the actual value is smaller than the limit. The limit of (1 - p)^(N-k) follows from the limit lim_{N→∞} (1 + x/N)^N = e^x:

lim_{N→∞} (1 - p)^(N-k) = lim_{N→∞} (1 - λ/N)^(N-k) = lim_{N→∞} (e^(-λ/N))^(N-k) = e^(-λ)

Again, the limit is close to the actual value for k << N. For large k, the actual value is larger than the limit.
4.3
Poisson distribution
For an infinite number of events, the Poisson probability function is the limit of the binomial probability function:

lim_{N→∞} binom(N, k, p) = (λ^k / k!) · e^(-λ)

P(k) = poisson(k, λ) := (λ^k / k!) · e^(-λ)

The probability poisson(0, 1) is equal to e^(-1), which is the probability of a maximal informative signal. This shows the relationship of the Poisson distribution and information theory.
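The convergence can be observed with a small Python sketch (our own illustration; the values of λ, k and N are arbitrary examples):

import math

def binom(N, k, p):
    return math.comb(N, k) * p**k * (1 - p)**(N - k)

def poisson(k, lam):
    return lam**k / math.factorial(k) * math.exp(-lam)

lam, k = 1.0, 0
for N in (10, 100, 10_000):
    print(N, round(binom(N, k, lam / N), 6), round(poisson(k, lam), 6))
# poisson(0, 1) = exp(-1) ~ 0.37, the probability of a maximal informative signal.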
After seeing the convergence of the binomial distribution,
we can choose the Poisson distribution as an approximation
of the independent term noise probability. First, we define
the Poisson noise probability:
Definition 4. The Poisson term noise probability:

P_poi(t is noisy|c) := e^(-λ) · Σ_{k=1..n(t)} λ^k / k!
For independent documents, the Poisson distribution approximates the probability of the disjunction for large n(t), since the independent term noise probability is equal to the sum over the binomial probabilities where at least one of the n(t) document containment events is true:

P_in(t is noisy|c) = Σ_{k=1..n(t)} (n(t) choose k) · p^k · (1 - p)^(n(t)-k)

P_in(t is noisy|c) ≈ P_poi(t is noisy|c)
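The following minimal sketch (our own illustration; N, λ and the n(t) values are arbitrary example settings, with λ = ln N as used for the plots in figure 1) evaluates the two noise definitions side by side:

import math

def p_in(n_t, N, lam):
    # independence-based noise: 1 - (1 - lam/N)**n(t)
    return 1 - (1 - lam / N) ** n_t

def p_poi(n_t, lam):
    # Poisson noise: exp(-lam) * sum_{k=1..n(t)} lam**k / k!, built up incrementally
    term, total = 1.0, 0.0
    for k in range(1, n_t + 1):
        term *= lam / k
        total += term
    return math.exp(-lam) * total

N, lam = 10000, math.log(10000)
for n_t in (1, 10, 100, 1000):
    print(n_t, round(p_in(n_t, N, lam), 4), round(p_poi(n_t, lam), 4))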
We have defined a frequency-based and a Poisson-based probability
of being noisy, where the latter is the limit of the
independence-based probability of being noisy. Before we
present in the final section the usage of the noise probability
for defining the probability of being informative, we
emphasise in the next section that the results apply to the
collection space as well as to the document space.
THE COLLECTION SPACE AND THE DOCUMENT SPACE
Consider the dual definitions of retrieval parameters in table 1. We associate a collection space D × T with a collection c, where D is the set of documents and T is the set of terms in the collection. Let N_D := |D| and N_T := |T| be the number of documents and terms, respectively. We consider a document as a subset of T and a term as a subset of D. Let n_T(d) := |{t | t ∈ d}| be the number of terms that occur in the document d, and let n_D(t) := |{d | t ∈ d}| be the number of documents that contain the term t.
In a dual way, we associate a document space L × T with a document d, where L is the set of locations (also referred to as positions; however, we use the letters L and l and not P and p for avoiding confusion with probabilities) and T is the set of terms in the document. The document dimension in a collection space corresponds to the location (position) dimension in a document space.
The definition makes explicit that the classical notion of
term frequency of a term in a document (also referred to as
the within-document term frequency) actually corresponds
to the location frequency of a term in a document.
space                        collection                                                               document
dimensions                   documents and terms                                                      locations and terms
document/location frequency  n_D(t, c): number of documents in which term t occurs in collection c    n_L(t, d): number of locations (positions) at which term t occurs in document d
                             N_D(c): number of documents in collection c                              N_L(d): number of locations (positions) in document d
term frequency               n_T(d, c): number of terms that document d contains in collection c      n_T(l, d): number of terms that location l contains in document d
                             N_T(c): number of terms in collection c                                  N_T(d): number of terms in document d
noise/occurrence             P(t|c) (term noise)                                                      P(t|d) (term occurrence)
containment                  P(d|c) (document)                                                        P(l|d) (location)
informativeness              -ln P(t|c)                                                               -ln P(t|d)
conciseness                  -ln P(d|c)                                                               -ln P(l|d)
P(informative)               ln(P(t|c)) / ln(P(t_min|c))                                              ln(P(t|d)) / ln(P(t_min|d))
P(concise)                   ln(P(d|c)) / ln(P(d_min|c))                                              ln(P(l|d)) / ln(P(l_min|d))

Table 1: Retrieval parameters
For the actual term frequency value, it is common to use the maximal occurrence (number of locations; let lf be the location frequency):

tf(t, d) := lf(t, d) := P_freq(t occurs|d) / P_freq(t_max occurs|d) = n_L(t, d) / n_L(t_max, d)
A further duality is between informativeness and conciseness
(shortness of documents or locations): informativeness
is based on occurrence (noise), conciseness is based on containment
.
We have highlighted in this section the duality between
the collection space and the document space. We concentrate
in this paper on the probability of a term to be noisy
and informative. Those probabilities are defined in the collection
space. However, the results regarding the term noise
and informativeness apply to their dual counterparts: term
occurrence and informativeness in a document. Also, the
results can be applied to the containment of documents and locations.
THE PROBABILITY OF BEING INFORMATIVE
We showed in the previous sections that the disjointness
assumption leads to frequency-based probabilities and that
the independence assumption leads to Poisson probabilities.
In this section, we formulate a frequency-based definition
and a Poisson-based definition of the probability of being
informative and then we compare the two definitions.
Definition 5. The frequency-based probability of being informative:

P_freq(t is informative|c) := -ln(n(t)/N) / -ln(1/N) = -log_N(n(t)/N) = 1 - log_N n(t) = 1 - ln n(t) / ln N
We define the Poisson-based probability of being informative
analogously to the frequency-based probability of being
informative (see definition 5).
Definition 6. The Poisson-based probability of being informative:

P_poi(t is informative|c) := ln(e^(-λ) · Σ_{k=1..n(t)} λ^k/k!) / ln(e^(-λ) · λ) = (λ - ln Σ_{k=1..n(t)} λ^k/k!) / (λ - ln λ)
For the sum expression, the following limit holds:

lim_{n(t)→∞} Σ_{k=1..n(t)} λ^k/k! = e^λ - 1
For λ >> 1, we can alter the noise and informativeness Poisson by starting the sum from 0, since e^λ >> 1. Then, the minimal Poisson informativeness is poisson(0, λ) = e^(-λ). We obtain a simplified Poisson probability of being informative:

P_poi(t is informative|c) ≈ (λ - ln Σ_{k=0..n(t)} λ^k/k!) / λ = 1 - ln(Σ_{k=0..n(t)} λ^k/k!) / λ
The computation of the Poisson sum requires an optimisation for large n(t). The implementation for this paper exploits the nature of the Poisson density: the Poisson density yields only values significantly greater than zero in an interval around λ.
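A minimal sketch of such a computation (our own illustration; the truncation window of a few standard deviations around λ and the example values are assumptions, not the paper's implementation):

import math

def poi_informative(n_t, lam, width=10.0):
    # simplified Poisson informativeness: (lam - ln sum_{k=0..n(t)} lam**k/k!) / lam,
    # with the sum truncated to the interval where the Poisson terms are non-negligible
    centre = min(n_t, int(lam))                       # lam**k / k! peaks near k = lam
    lo = max(0, centre - int(width * math.sqrt(lam)))
    hi = min(n_t, centre + int(width * math.sqrt(lam)))
    log_terms = [k * math.log(lam) - math.lgamma(k + 1) for k in range(lo, hi + 1)]
    m = max(log_terms)
    log_sum = m + math.log(sum(math.exp(x - m) for x in log_terms))
    return 1.0 - log_sum / lam

for n_t in (500, 1500):
    print(n_t, round(poi_informative(n_t, 1000.0), 3))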
Consider the illustration of the noise and informativeness definitions in figure 1. The probability functions displayed are summarised in figure 2, where the simplified Poisson is used in the noise and informativeness graphs. The frequency-based noise corresponds to the linear solid curve in the noise figure. With an independence assumption, we obtain the curve in the lower triangle of the noise figure. By changing the parameter p := λ/N of the independence probability, we can lift or lower the independence curve. The noise figure shows the lifting for the value λ := ln N ≈ 9.2. The setting λ = ln N is special in the sense that the frequency-based and the Poisson-based informativeness have the same denominator, namely ln N, and the Poisson sum converges to λ. Whether we can draw more conclusions from this setting is an open question.
We can conclude that the lifting is desirable if we know for a collection that terms that occur in relatively few
Figure 1: Noise and Informativeness. (Two plots over n(t), the number of documents with term t, ranging from 0 to 10000: the probability of being noisy and the probability of being informative, each shown for the frequency-based definition, the independence-based definition with p = 1/N and p = ln(N)/N, and Poisson definitions with λ = 1000, λ = 2000 and the two-dimensional combination 1000, 2000.)
Probability function          Noise                                        Informativeness
Frequency P_freq     Def      n(t)/N                                       ln(n(t)/N) / ln(1/N)
                     Interval 1/N ≤ P_freq ≤ 1.0                           0.0 ≤ P_freq ≤ 1.0
Independence P_in    Def      1 - (1-p)^n(t)                               ln(1 - (1-p)^n(t)) / ln(p)
                     Interval p ≤ P_in < 1 - e^(-λ)                        ln(1 - e^(-λ)) / ln(p) ≤ P_in ≤ 1.0
Poisson P_poi        Def      e^(-λ) · Σ_{k=1..n(t)} λ^k/k!                (λ - ln Σ_{k=1..n(t)} λ^k/k!) / (λ - ln λ)
                     Interval e^(-λ)·λ ≤ P_poi < 1 - e^(-λ)                (λ - ln(e^λ - 1)) / (λ - ln λ) ≤ P_poi ≤ 1.0
Poisson P_poi        Def      e^(-λ) · Σ_{k=0..n(t)} λ^k/k!                (λ - ln Σ_{k=0..n(t)} λ^k/k!) / λ
(simplified)         Interval e^(-λ) ≤ P_poi < 1.0                         0.0 < P_poi ≤ 1.0

Figure 2: Probability functions
documents are no guarantee for finding relevant documents, i.e. we assume that rare terms are still relatively noisy. Conversely, we could lower the curve when assuming that frequent terms are not too noisy, i.e. they are considered as being still significantly discriminative.
The Poisson probabilities approximate the independence probabilities for large n(t); the approximation is better for larger λ. For n(t) < λ, the noise is zero, whereas for n(t) > λ the noise is one. This radical behaviour can be smoothed by using a multi-dimensional Poisson distribution. Figure 1 shows a Poisson noise based on a two-dimensional Poisson:
poisson(k, λ_1, λ_2) := π · e^(-λ_1) · λ_1^k / k! + (1 - π) · e^(-λ_2) · λ_2^k / k!
The two-dimensional Poisson shows a plateau between λ_1 = 1000 and λ_2 = 2000; we used here π = 0.5. The idea behind this setting is that terms that occur in fewer than 1000 documents are considered to be not noisy (i.e. they are informative), that terms between 1000 and 2000 are half noisy, and that terms with more than 2000 are definitely noisy.
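A minimal sketch of this two-dimensional Poisson noise (our own illustration; the log-space summation is an implementation choice to avoid numerical underflow for large λ):

import math

def poisson_cdf(n_t, lam):
    # sum_{k=0..n(t)} exp(-lam) * lam**k / k!, computed in log space
    log_terms = [k * math.log(lam) - lam - math.lgamma(k + 1) for k in range(n_t + 1)]
    m = max(log_terms)
    return math.exp(m) * sum(math.exp(x - m) for x in log_terms)

def noise_2dim(n_t, lam1=1000.0, lam2=2000.0, pi=0.5):
    return pi * poisson_cdf(n_t, lam1) + (1 - pi) * poisson_cdf(n_t, lam2)

for n_t in (500, 1500, 2500):
    print(n_t, round(noise_2dim(n_t), 3))   # ~0.0, ~0.5 (plateau), ~1.0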
For the informativeness, we observe that the radical behaviour of Poisson is preserved. The plateau here is approximately at 1/6, and it is important to realise that this plateau is not obtained with the multi-dimensional Poisson noise using π = 0.5. The logarithm of the noise is normalised by the logarithm of a very small number, namely 0.5 · e^(-1000) + 0.5 · e^(-2000). That is why the informativeness will only be close to one for very little noise, whereas for a bit of noise, the informativeness will drop to zero. This effect can be controlled by using small values for π such that the noise in the interval [λ_1; λ_2] is still very little. The setting π = e^(-2000/6) leads to noise values of approximately e^(-2000/6) in the interval [λ_1; λ_2]; the logarithms then lead to 1/6 for the informativeness.
The independence-based and frequency-based informativeness functions do not differ as much as the noise functions do. However, for the independence-based probability of being informative, we can control the average informativeness by the definition p := λ/N, whereas the control on the frequency-based definition is limited, as we address next.
For the frequency-based idf, the gradient is monotonically decreasing and we obtain for different collections the same distances of idf-values, i.e. the parameter N does not affect the distance. For an illustration, consider the distance between the value idf(t_{n+1}) of a term t_{n+1} that occurs in n+1 documents, and the value idf(t_n) of a term t_n that occurs in n documents:

idf(t_{n+1}) - idf(t_n) = ln(n/(n+1))
The first three values of the distance function are:

|idf(t_2) - idf(t_1)| = |ln(1/(1+1))| = 0.69
|idf(t_3) - idf(t_2)| = |ln(2/(2+1))| = 0.41
|idf(t_4) - idf(t_3)| = |ln(3/(3+1))| = 0.29
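These distances are easily checked (a minimal sketch, our own illustration):

import math

# idf distance |ln(n/(n+1))| between terms occurring in n and n+1 documents;
# note that the collection size N does not appear.
for n in (1, 2, 3):
    print(n, round(abs(math.log(n / (n + 1))), 2))   # 0.69, 0.41, 0.29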
For the Poisson-based informativeness, the gradient decreases first slowly for small n(t), then rapidly near n(t) ≈ λ, and then it grows again slowly for large n(t).
In conclusion, we have seen that the Poisson-based definition
provides more control and parameter possibilities than
the frequency-based definition does. Whereas more control and parameters promise to be positive for the personalisation of retrieval systems, it bears at the same time the danger of just too many parameters. The framework presented
in this paper raises the awareness about the probabilistic
and information-theoretic meanings of the parameters. The
parallel definitions of the frequency-based probability and
the Poisson-based probability of being informative made
the underlying assumptions explicit. The frequency-based
probability can be explained by binary occurrence, constant
containment and disjointness of documents. Independence
of documents leads to Poisson, where we have to be aware
that Poisson approximates the probability of a disjunction
for a large number of events, but not for a small number.
This theoretical result explains why experimental investigations
on Poisson (see [7]) show that a Poisson estimation
does work better for frequent (bad, noisy) terms than for
rare (good, informative) terms.
In addition to the collection-wide parameter setting, the
framework presented here allows for document-dependent
settings, as explained for the independence probability. This
is in particular interesting for heterogeneous and structured
collections, since documents are different in nature (size,
quality, root document, sub document), and therefore, binary
occurrence and constant containment are less appropriate
than in relatively homogeneous collections.
SUMMARY
The definition of the probability of being informative transforms
the informative interpretation of the idf into a probabilistic
interpretation, and we can use the idf -based probability
in probabilistic retrieval approaches. We showed that
the classical definition of the noise (document frequency) in
the inverse document frequency can be explained by three
assumptions: the term within-document occurrence probability
is binary, the document containment probability is
constant, and the document containment events are disjoint.
By explicitly and mathematically formulating the assumptions
, we showed that the classical definition of idf does not
take into account parameters such as the different nature
(size, quality, structure, etc.) of documents in a collection,
or the different nature of terms (coverage, importance, position
, etc.) in a document. We discussed that the absence
of those parameters is compensated by a leverage effect of
the within-document term occurrence probability and the
document containment probability.
By applying an independence rather than a disjointness assumption for the document containment, we could establish a link between the noise probability (term occurrence in a collection), information theory and Poisson. From the frequency-based and the Poisson-based probabilities of being noisy, we derived the frequency-based and Poisson-based probabilities of being informative. The frequency-based probability is relatively smooth, whereas the Poisson probability is radical in distinguishing between noisy or not noisy, and informative or not informative, respectively. We showed how to smooth the radical behaviour of Poisson with a multi-dimensional Poisson.
The explicit and mathematical formulation of idf - and
Poisson-assumptions is the main result of this paper. Also,
the paper emphasises the duality of idf and tf , collection
space and document space, respectively. Thus, the result
applies to term occurrence and document containment in a
collection, and it applies to term occurrence and position
containment in a document. This theoretical framework is
useful for understanding and deciding the parameter estimation
and combination in probabilistic retrieval models. The
links between independence-based noise as document frequency,
probabilistic interpretation of idf , information theory and
Poisson described in this paper may lead to variable probabilistic
idf and tf definitions and combinations as required
in advanced and personalised information retrieval systems.
Acknowledgment: I would like to thank Mounia Lalmas,
Gabriella Kazai and Theodora Tsikrika for their comments
on the as they said "heavy" pieces. My thanks also go to the
meta-reviewer who advised me to improve the presentation
to make it less "formidable" and more accessible for those
"without a theoretic bent".
This work was funded by a
research fellowship from Queen Mary University of London.
REFERENCES
[1] A. Aizawa. An information-theoretic perspective of
tf-idf measures. Information Processing and
Management, 39:45–65, January 2003.
[2] G. Amati and C. J. Rijsbergen. Term frequency
normalization via Pareto distributions. In 24th
BCS-IRSG European Colloquium on IR Research,
Glasgow, Scotland, 2002.
[3] R. K. Belew. Finding out about. Cambridge University
Press, 2000.
[4] A. Bookstein and D. Swanson. Probabilistic models
for automatic indexing. Journal of the American
Society for Information Science, 25:312–318, 1974.
[5] I. N. Bronstein. Taschenbuch der Mathematik. Harri
Deutsch, Thun, Frankfurt am Main, 1987.
[6] K. Church and W. Gale. Poisson mixtures. Natural
Language Engineering, 1(2):163–190, 1995.
[7] K. W. Church and W. A. Gale. Inverse document
frequency: A measure of deviations from poisson. In
Third Workshop on Very Large Corpora, ACL
Anthology, 1995.
[8] T. Lafouge and C. Michel. Links between information
construction and information gain: Entropy and
bibliometric distribution. Journal of Information
Science, 27(1):39–49, 2001.
[9] E. Margulis. N-poisson document modelling. In
Proceedings of the 15th Annual International ACM
SIGIR Conference on Research and Development in
Information Retrieval, pages 177–189, 1992.
[10] S. E. Robertson and S. Walker. Some simple effective
approximations to the 2-poisson model for
probabilistic weighted retrieval. In Proceedings of the
17th Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval,
pages 232–241, London, et al., 1994. Springer-Verlag.
[11] S. Wong and Y. Yao. An information-theoretic measure of term specificity. Journal of the American Society for Information Science, 43(1):54–61, 1992.
[12] S. Wong and Y. Yao. On modeling information
retrieval with probabilistic inference. ACM
Transactions on Information Systems, 13(1):38–68,
1995.
| inverse document frequency (idf);independent and disjoint documents;computer science;information search;probability theories;Poisson based probability;Term frequency;probabilistic retrieval models;Probability of being informative;Independent documents;Disjoint documents;Normalisation;relevance-based ranking of retrieved objects;information theory;Noise probability;frequency-based term noise probability;Poisson-based probability of being informative;Assumptions;Collection space;Poisson distribution;Probabilistic information retrieval;Document space;document retrieval;entropy;Frequency-based probability;Document frequency;Inverse document frequency;Information theory;independence assumption;inverse document frequency;maximal informative signal |
100 | High Performance Crawling System | In the present paper, we will describe the design and implementation of a real-time distributed system of Web crawling running on a cluster of machines. The system crawls several thousands of pages every second, includes a high-performance fault manager, is platform independent and is able to adapt transparently to a wide range of configurations without incurring additional hardware expenditure. We will then provide details of the system architecture and describe the technical choices for very high performance crawling. Finally, we will discuss the experimental results obtained, comparing them with other documented systems. | INTRODUCTION
With the World Wide Web containing the vast amount of information that it does (several thousand pages in 1993, 3 billion today) and the fact that it is ever expanding, we need a way to find the right information (multimedia or textual). We need a way to access the information on specific subjects that we require. To solve the problems above, several programs and algorithms were designed that index the Web; these various designs are known as search engines, spiders, crawlers, worms or knowledge robots. In its simplest terms, the Web can be viewed as a graph: the pages are the nodes and the links are the arcs. What makes this so difficult is the vast amount of data that we have to handle, and then we must also take into account the fact that the World Wide Web is constantly growing and the fact that people are constantly updating the content of their web pages.
Any high-performance crawling system should offer at
least the following two features.
Firstly, it needs to
be equipped with an intelligent navigation strategy, i.e.
enabling it to make decisions regarding the choice of subsequent
actions to be taken (pages to be downloaded etc).
Secondly, its supporting hardware and software architecture
should be optimized to crawl large quantities of documents
per unit of time (generally per second). To this we may add
fault tolerance (machine crash, network failure etc.) and
considerations of Web server resources.
Recently, we have seen some interest in these two fields. Studies on the first point include crawling strategies
for important pages [9, 17], topic-specific document downloading
[5, 6, 18, 10], page recrawling to optimize overall
refresh frequency of a Web archive [8, 7] or scheduling the
downloading activity according to time [22]. However, little
research has been devoted to the second point, which is very difficult to implement [20, 13]. We will focus on this latter
point in the rest of this paper.
Indeed, only a few crawlers are equipped with an optimized
scalable crawling system, yet details of their internal
workings often remain obscure (the majority being proprietary
solutions).
The only system to have been given a
fairly in-depth description in existing literature is Mercator
by Heydon and Najork of DEC/Compaq [13] used in the
AltaVista search engine (some details also exist on the first
version of the Google [3] and Internet Archive [4] robots).
Most recent studies on crawling strategy fail to deal with
these features, contenting themselves with the solution of
minor issues such as the calculation of the number of pages
to be downloaded in order to maximize/minimize some
functional objective. This may be acceptable in the case
of small applications, but for real time
1
applications the
system must deal with a much larger number of constraints.
We should also point out that little academic research
is concerned with high performance search engines, as
compared with their commercial counterparts (with the
exception of the WebBase project [14] at Stanford).
In the present paper, we will describe a very high
availability, optimized and distributed crawling system.
We will use the system on what is known as breadth-first
crawling, though this may be easily adapted to other
navigation strategies. We will first focus on input/output,
on management of network traffic and robustness when
changing scale. We will also discuss download policies in
1
"Soft" real time
terms of speed regulation, fault management by supervisors
and the introduction/suppression of machine nodes without
system restart during a crawl.
Our system was designed within the experimental framework
of the Dépôt Légal du Web Français (French Web Legal Deposit). This consists of archiving only multimedia
documents in French available on line, indexing them and
providing ways for these archives to be consulted. Legal
deposit requires a real crawling strategy in order to ensure
site continuity over time.
The notion of registration is
closely linked to that of archiving, which requires a suitable
strategy to be useful. In the course of our discussion, we
will therefore analyze the implication and impact of this
experimentation for system construction.
STATE OF THE ART
In order to set our work in this field in context, listed
below are definitions of services that should be considered
the minimum requirements for any large-scale crawling
system.
Flexibility: as mentioned above, with some minor
adjustments our system should be suitable for various
scenarios. However, it is important to remember that
crawling is established within a specific framework:
namely, Web legal deposit.
High Performance: the system needs to be scalable
with a minimum of one thousand pages/second and
extending up to millions of pages for each run on
low cost hardware. Note that here, the quality and
efficiency of disk access are crucial to maintaining high
performance.
Fault Tolerance: this may cover various aspects. As
the system interacts with several servers at once,
specific problems emerge.
First, it should at least
be able to process invalid HTML code, deal with
unexpected Web server behavior, and select good
communication protocols etc. The goal here is to avoid
this type of problem and, by force of circumstance, to
be able to ignore such problems completely. Second,
crawling processes may take days or weeks, and it is
imperative that the system can handle failure, stopped
processes or interruptions in network services, keeping
data loss to a minimum. Finally, the system should
be persistent, which means periodically switching large
data structures from memory to the disk (e.g. restart
after failure).
Maintainability and Configurability: an appropriate
interface is necessary for monitoring the crawling
process, including download speed, statistics on the
pages and amounts of data stored. In online mode, the
administrator may adjust the speed of a given crawler,
add or delete processes, stop the system, add or delete
system nodes and supply the black list of domains not
to be visited, etc.
2.2
General Crawling Strategies
There are many highly accomplished techniques in terms
of Web crawling strategy. We will describe the most relevant
of these here.
Breadth-first Crawling: in order to build a wide Web
archive like that of the Internet Archive [15], a crawl
is carried out from a set of Web pages (initial URLs
or seeds).
A breadth-first exploration is launched
by following hypertext links leading to those pages
directly connected with this initial set. In fact, Web
sites are not really browsed breadth-first and various
restrictions may apply, e.g. limiting crawling processes
to within a site, or downloading the pages deemed
most interesting first
2
Repetitive Crawling: once pages have been crawled,
some systems require the process to be repeated
periodically so that indexes are kept updated. In the
most basic case, this may be achieved by launching
a second crawl in parallel.
A variety of heuristics
exist to overcome this problem:
for example, by
frequently relaunching the crawling process of pages,
sites or domains considered important to the detriment
of others.
A good crawling strategy is crucial for
maintaining a constantly updated index list. Recent
studies by Cho and Garcia-Molina [8, 7] have focused
on optimizing the update frequency of crawls by using
the history of changes recorded on each site.
Targeted Crawling: more specialized search engines
use crawling process heuristics in order to target a
certain type of page, e.g. pages on a specific topic or
in a particular language, images, mp3 files or scientific
papers. In addition to these heuristics, more generic
approaches have been suggested. They are based on
the analysis of the structures of hypertext links [6,
5] and techniques of learning [9, 18]: the objective
here being to retrieve the greatest number of pages
relating to a particular subject by using the minimum
bandwidth. Most of the studies cited in this category
do not use high performance crawlers, yet succeed in
producing acceptable results.
Random Walks and Sampling: some studies have
focused on the effect of random walks on Web graphs
or modified versions of these graphs via sampling in
order to estimate the size of documents on line [1, 12,
11].
Deep Web Crawling: a lot of data accessible via
the Web are currently contained in databases and
may only be downloaded through the medium of
appropriate requests or forms. Recently, this often-neglected
but fascinating problem has been the focus
of new interest. The Deep Web is the name given to the part of the Web containing this category of data [9].
Lastly, we should point out the acknowledged differences
that exist between these scenarios. For example,
a breadth-first search needs to keep track of all pages
already crawled.
An analysis of links should use
structures of additional data to represent the graph
of the sites in question, and a system of classifiers in
order to assess the pages' relevancy [6, 5]. However,
some tasks are common to all scenarios, such as respecting robot exclusion files (robots.txt), crawling speed, resolution of domain names . . .

2 See [9] for the heuristics that tend to find the most important pages first and [17] for experimental results proving that breadth-first crawling allows the swift retrieval of pages with a high PageRank.
In the early 1990s, several companies claimed that their
search engines were able to provide complete Web coverage.
It is now clear that only partial coverage is possible at
present.
Lawrence and Giles [16] carried out two experiments
in order to measure coverage performance of data
established by crawlers and of their updates. They adopted
an approach known as overlap analysis to estimate the size
of the Web that may be indexed (See also Bharat and Broder
1998 on the same subject). Let W be the total set of Web pages and W_a ⊆ W and W_b ⊆ W the pages downloaded by two different crawlers a and b. What is the size of W_a and W_b as compared with W? Let us assume that uniform samples of Web pages may be taken and their membership of both sets tested. Let P(W_a) and P(W_b) be the probability that a page is downloaded by a or b respectively. We know that:

P(W_a ∩ W_b | W_b) = |W_a ∩ W_b| / |W_b|     (1)
Now, if these two crawling processes are assumed to be independent, the left side of equation 1 may be reduced to P(W_a), that is data coverage by crawler a. This may be easily obtained from the intersection size of the two crawling processes. However, an exact calculation of this quantity is not possible if we do not really know the documents crawled. Lawrence and Giles used a set of controlled data of 575 requests to provide page samples and count the number of times that the two crawlers retrieved the same pages. By taking the hypothesis that the result P(W_a) is correct, we may estimate the size of the Web as |W_a| / P(W_a).
This
approach has shown that the Web contained at least 320
million pages in 1997 and that only 60% was covered by the
six major search engines of that time. It is also interesting
to note that a single search engine would have covered only
1/3 of the Web. As this approach is based on observation, it
may reflect a visible Web estimation, excluding for instance
pages behind forms, databases etc. More recent experiments
assert that the Web contains several billion pages.
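A minimal sketch of this overlap estimate (our own illustration; the crawl sizes and overlap below are made-up numbers, not Lawrence and Giles' data):

def estimate_web_size(size_a, size_b, size_intersection):
    p_a = size_intersection / size_b      # P(W_a) ~ |W_a ∩ W_b| / |W_b|, equation (1)
    return size_a / p_a                   # |W| ~ |W_a| / P(W_a)

# Example: crawler a holds 150M pages, crawler b holds 100M, 47M pages are in both.
print(int(estimate_web_size(150e6, 100e6, 47e6)))   # ~319 million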
2.2.1
Selective Crawling
As demonstrated above, a single crawler cannot archive
the whole Web. The fact is that the time required to carry
out the complete crawling process is very long, and impossible
given the technology currently available. Furthermore,
crawling and indexing very large amounts of data implies
great problems of scalability, and consequently entails not
inconsiderable costs of hardware and maintenance.
For
maximum optimization, a crawling system should be able
to recognize relevant sites and pages, and restrict itself to
downloading within a limited time.
A document or Web page's relevancy may be officially recognized in various ways. The idea of selective crawling may be introduced intuitively by associating each URL u with a score calculation function s_θ^(ξ)(u) respecting a relevancy criterion ξ and parameters θ. In the most basic case, we may assume a Boolean relevancy function, i.e. s(u) = 1 if the document designated by u is relevant and s(u) = 0 if not. More generally, we may think of s(u) as a function with real values, such as a conditional probability that a document belongs to a certain category according to its content. In all cases, we should point out that the score calculation function depends only on the URL and ξ, and not on the time or state of the crawler.
A general approach for the construction of a selective
crawler consists of changing the URL insertion and extraction
policy in the queue Q of the crawler. Let us assume
that the URLs are sorted in the order corresponding to the
value retrieved by s(u). In this case, we obtain the best-first
strategy (see [19]) which consists of downloading URLs
with the best scores first). If s(u) provides a good relevancy
model, we may hope that the search process will be guided
towards the best areas of the Web.
Various studies have been carried out in this direction: for
example, limiting the search depth in a site by specifying
that pages are no longer relevant after a certain depth. This
amounts to the following equation:
s^(depth)(u) = 1 if |root(u) → u| < δ, and 0 otherwise     (2)

where root(u) is the root of the site containing u.
The
interest of this approach lies in the fact that maximizing
the search breadth may make it easier for the end-user to
retrieve the information. Nevertheless, pages that are too
deep may be accessed by the user, even if the robot fails to
take them into account.
A second possibility is the estimation of a page's popularity. One method of calculating a document's relevancy would relate to the number of backlinks:

s^(backlinks)(u) = 1 if indegree(u) > τ, and 0 otherwise     (3)

where τ is a threshold.
It is clear that s^(backlinks)(u) may only be calculated if we have a complete site graph (site already downloaded beforehand). In practice, we may take an approximate value and update it incrementally during the crawling process. A derivative of this technique is used in Google's famous PageRank calculation.
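A minimal sketch of the resulting best-first queue policy (our own illustration; the Frontier class, the example score function and the URLs are hypothetical, not part of the Dominos system):

import heapq

class Frontier:
    # URLs are extracted in decreasing score order; score_fn stands for any s(u),
    # e.g. the depth- or backlink-based functions above.
    def __init__(self, score_fn):
        self.score_fn = score_fn
        self.heap = []

    def push(self, url):
        heapq.heappush(self.heap, (-self.score_fn(url), url))   # negate: heapq is a min-heap

    def pop(self):
        return heapq.heappop(self.heap)[1]

frontier = Frontier(score_fn=lambda u: 1.0 if u.count("/") < 4 else 0.0)  # depth-like score
for u in ("http://example.org/a/b/c/d", "http://example.org/a"):
    frontier.push(u)
print(frontier.pop())   # the shallow URL is downloaded first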
OUR APPROACH: THE DOMINOS SYSTEM
As mentioned above, we have divided the system into two
parts: workers and supervisors. All of these processes may
be run on various operating systems (Windows, MacOS X,
Linux, FreeBSD) and may be replicated if need be. The
workers are responsible for processing the URL flow coming
from their supervisors and for executing crawling process
tasks in the strict sense. They also handle the resolution of
domain names by means of their integrated DNS resolver,
and adjust download speed in accordance with node policy.
A worker is a light process in the Erlang sense, acting as
a fault tolerant and highly available HTTP client.
The
process-handling mode in Erlang makes it possible to create
several thousands of workers in parallel.
In our system, communication takes place mainly by sending
asynchronous messages as described in the specifications
for Erlang language. The type of message varies according to
need: character string for short messages and binary format
for long messages (large data structures or files). Disk access
is reduced to a minimum as far as possible and structures
are stored in the real-time Mnesia database (see http://www.erlang.org/doc/r9c/lib/mnesia-4.1.4/doc/html/) that forms a standard part of the Erlang development kit. Mnesia's
features give it a high level of homogeneity during the base's
access, replication and deployment. It is supported by two
table management modules ETS and DETS. ETS allows
tables of values to be managed by random access memory,
while DETS provides a persistent form of management on
the disk. Mnesia's distribution faculty provides an efficient
access solution for distributed data. When a worker moves
from one node to another (code migration), it no longer needs to be concerned with the location of the base or data. It simply
has to read and write the information transparently.
1   loop(InternalState) ->                     % Supervisor main
2                                              % loop
3     receive {From,{migrate,Worker,Src,Dest}} ->
4       % Migrate the Worker process from
5       % Src node to Dest node
6       spawn(supervisor,migrate,
7             [Worker,Src,Dest]),
8       % Infinite loop
9       loop(InternalState);
10
11      {From,{replace,OldPid,NewPid,State}} ->
12        % Add the new worker to
13        % the supervisor state storage
14        NewInternalState =
15          replace(OldPid,NewPid,InternalState),
16        % Infinite loop
17        loop(NewInternalState);
18      ...
19    end.
20
21  migrate(Pid,Src,Dest) ->                   % Migration
22                                             % process
23    receive
24      Pid ! {self(), stop},
25      receive
26        {Pid,{stopped,LastState}} ->
27          NewPid = spawn(Dest,worker,proc,
28                         [LastState]),
29          self() ! {self(), {replace,Pid,
30                             NewPid,LastState}};
31        {Pid,Error} -> ...
32      end.

Listing 1: Process Migration
Listing 1 describes the migration of a worker process from one node Src to another Dest (the character % indicates the beginning of a comment in Erlang).
The supervisor receives
the migration order for process Pid (line 4). The migration action is not blocking and is performed in a different Erlang process (line 7). The supervisor stops the worker with the identifier Pid (line 25) and awaits the operation result (line 26). It then creates a remote worker in the node Dest with the latest state of the stopped worker (line 28) and updates its internal state (lines 30 and 12).
3.1
Dominos Process
The Dominos system is different from all the other crawling
systems cited above. Like these, Dominos is built on a distributed architecture, but with the difference of being totally dynamic. The system's dynamic nature allows its
architecture to be changed as required. If, for instance, one
of the cluster's nodes requires particular maintenance, all of
the processes on it will migrate from this node to another.
When servicing is over, the processes revert automatically
to their original node. Crawl processes may change pool
so as to reinforce one another if necessary. The addition or
deletion of a node in the cluster is completely transparent in
its execution. Indeed, each new node is created containing a
completely blank system. The first action to be undertaken
is to search for the generic server in order to obtain the
parameters of the part of the system that it is to belong
to. These parameters correspond to a limited view of the
whole system. This enables Dominos to be deployed more
easily, the number of messages exchanged between processes
to be reduced and allows better management of exceptions.
Once the generic server has been identified, binaries are sent
to it and its identity is communicated to the other nodes
concerned.
Dominos Generic Server (GenServer): Erlang process
responsible for managing the process identifiers on the
whole cluster. To ensure easy deployment of Dominos,
it was essential to mask the denominations of the
process identifiers. Otherwise, a minor change in the
names of machines or their IP would have required
complete reorganization of the system.
GenServer
stores globally the identifiers of all processes existing
at a given time.
Dominos RPC Concurrent (cRPC): as its name suggests
, this process is responsible for delegating the
execution of certain remote functions to other processes
. Unlike conventional RPCs where it is necessary
to know the node and the object providing these
functions (services), our cRPC completely masks the
information.
One need only call the function, with
no concern for where it is located in the cluster or
for the name of the process offering this function.
Moreover, each cRPC process is concurrent, and
therefore manages all its service requests in parallel.
The results of remote functions are governed by two
modes: blocking or non-blocking. The calling process
may therefore await the reply of the remote function
or continue its execution.
In the latter case, the
reply is sent to its mailbox. For example, no worker
knows the process identifier of its own supervisor. In
order to identify it, a worker sends a message to the
process called supervisor. The cRPC deals with the
message and searches the whole cluster for a supervisor
process identifier, starting with the local node.
The address is therefore resolved without additional
network overhead, except where the supervisor does
not exist locally.
Dominos Distributed Database (DDB): Erlang process
responsible for Mnesia real-time database management
. It handles the updating of crawled information,
crawling progress and the assignment of URLs to be
downloaded to workers.
It is also responsible for
replicating the base onto the nodes concerned and for
the persistency of data on disk.
Dominos Nodes: a node is the physical representation
of a machine connected (or disconnected as the case
may be) to the cluster. This connection is considered
in the most basic sense of the term, namely a simple
plugging-in (or unplugging) of the network outlet.
Each node clearly reflects the dynamic character of
the Dominos system.
Dominos Group Manager: Erlang process responsible
for controlling the smooth running of its child processes
(supervisor and workers).
Dominos Master-Supervisor Processes: each group
manager has a single master process dealing with the
management of crawling states of progress. It therefore
controls all the slave processes (workers) contained
within it.
Dominos Slave-Worker Processes: workers are the
lowest-level elements in the crawling process.
This
is the very heart of the Web client wrapping the
libCURL.
With Dominos architecture being completely dynamic and
distributed, we may however note the hierarchical character
of processes within a Dominos node. This is the only way to
ensure very high fault tolerance. A group manager that fails
is regenerated by the node on which it depends. A master
process (supervisor) that fails is regenerated by its group
manager. Finally, a worker is regenerated by its supervisor.
As for the node itself, it is controlled by the Dominos kernel
(generally on another remote machine). The following code
describes the regeneration of a worker process in case of
failure.
1   % Activate error handling
2   process_flag(trap_exit, true),
3   ...
4   loop(InternalState) ->                     % Supervisor main loop
5     receive
6       {From,{job,Name,finish}, State} ->
7         % Inform the GenServer that the download is ok
8         ?ServerGen ! {job,Name,finish},
9
10        % Save the new worker state
11        NewInternalState = save_state(From,State,InternalState),
12
13        % Infinite loop
14        loop(NewInternalState);
15      ...
16      {From,Error} ->                        % Worker crash
17        % Get the last operational state before the crash
18        WorkerState = last_state(From,InternalState),
19
20        % Free all allocated resources
21        free_resources(From,InternalState),
22
23        % Create a new worker with the last operational
24        % state of the crashed worker
25        Pid = spawn(worker,proc,[WorkerState]),
26
27        % Add the new worker to the supervisor state
28        % storage
29        NewInternalState = replace(From,Pid,InternalState),
30
31        % Infinite loop
32        loop(NewInternalState);
33    end.

Listing 2: Regeneration of a Worker Process in Case of Failure
This represents the part of the main loop of the supervisor
process dealing with the management of the failure of a
worker.
As soon as a worker error is received (line 16), the supervisor retrieves the last operational state of the worker that has stopped (line 18), releases all of its allocated resources (line 21) and recreates a new worker process with the operational state of the stopped process (line 25). The supervisor continually loops while awaiting new messages (line 32). The loop function calls (lines 14 and 32) are tail recursive, thereby guaranteeing that the supervision process runs in a constant memory space.
3.2
DNS Resolution
Before contacting a Web server, the worker process needs to resolve the server's domain name (DNS) into a valid IP address.
Whereas other systems (Mercator,
Internet Archive) are forced to set up DNS resolvers each
time a new link is identified, this is not necessary with
Dominos.
Indeed, in the framework of French Web legal
deposit, the sites to be archived have been identified
beforehand, thus requiring only one DNS resolution
per domain name. This considerably increases crawl
speed.
The sites concerned include all online newspapers, such as LeMonde
(http://www.lemonde.fr/), LeFigaro (http://www.lefigaro.fr/) . . . , and
online television/radio such as TF1 (http://www.tf1.fr/), M6
(http://www.m6.fr/) . . .
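To make the per-domain resolution concrete, the following is a minimal sketch (module and function names are ours, not the Dominos code) in which each domain is resolved at most once and the resulting IP address is reused for every subsequent URL of that domain.

-module(dns_cache).
-export([resolve/2]).

%% Cache is a map from domain name (string) to an inet:ip_address().
%% Returns the address together with the (possibly updated) cache.
resolve(Domain, Cache) ->
    case maps:find(Domain, Cache) of
        {ok, Addr} ->
            {Addr, Cache};                        % already resolved once
        error ->
            {ok, Addr} = inet:getaddr(Domain, inet),
            {Addr, maps:put(Domain, Addr, Cache)}
    end.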
DETAILS OF IMPLEMENTATION
The workers are the medium responsible for physically crawling on-line
contents. They provide a specialized wrapper around the libCURL library
(available at http://curl.haxx.se/libcurl/), which represents the heart
of the HTTP client. Each worker is interfaced to libCURL by a C driver
(shared library). As the system seeks maximum network accessibility
(communication protocol support), libCURL appeared to be the most
judicious choice when compared with other available libraries (see
http://curl.haxx.se/libcurl/competitors.html).
The protocols supported include: FTP, FTPS, HTTP,
HTTPS, LDAP, Certifications, Proxies, Tunneling etc.
Erlang's portability was a further factor favoring the
choice of libCURL. Indeed, libCURL is available for various
architectures:
Solaris, BSD, Linux, HPUX, IRIX, AIX,
Windows, Mac OS X, OpenVMS etc. Furthermore, it is
fast, thread-safe and IPv6 compatible.
This choice also opens up a wide variety of functions.
Redirections are accounted for and powerful filtering is
possible according to the type of content downloaded,
headers, and size (partial storage on RAM or disk depending
on the document's size).
4.2 Document Fingerprint
For each download, the worker extracts the hypertext
links included in the HTML documents and initiates a fingerprint
(signature operation). A fast fingerprint (HAVAL
on 256 bits) is calculated for the document's content itself
so as to differentiate those with similar contents (e.g. mirror
sites). This technique is not new and has already been used
in Mercator [13]. It allows redundancies to be eliminated in
the archive.
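As an illustration only (OTP's crypto module does not provide HAVAL, so SHA-256 stands in here for a 256-bit digest; module and function names are ours), the duplicate-elimination logic amounts to the following:

-module(fingerprint).
-export([is_duplicate/2]).

%% Returns {true, Seen} if the content was already archived,
%% otherwise {false, Seen2} with the new digest recorded.
is_duplicate(Body, SeenDigests) ->
    Digest = crypto:hash(sha256, Body),
    case sets:is_element(Digest, SeenDigests) of
        true  -> {true, SeenDigests};
        false -> {false, sets:add_element(Digest, SeenDigests)}
    end.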
4.3 URL Extraction and Normalization
Unlike other systems that use regular expression libraries such as
PCRE (available at http://www.pcre.org/) for URL extraction, we have
opted for the Flex tool, which generates a markedly faster parser.
Flex was compiled using a 256Kb buffer, with all table compression
options activated during parsing ("-8 -f Cf -Ca -Cr -i"). Our current
parser analyzes around 3,000 pages/second for a single worker, for an
average of 49Kb per page.
According to [20], a URL extraction speed of 300 pages/second
may generate a list of more than 2,000 URLs on average.
A naive representation of structures in the memory may
soon saturate the system.
Various solutions have been proposed to alleviate this
problem.
The Internet Archive [4] crawler uses Bloom
filters in random access memory. This makes it possible
to have a compact representation of links retrieved, but also
generates errors (false-positive), i.e. certain pages are never
downloaded as they create collisions with other pages in the
Bloom filter. Compression without loss may reduce the size
of URLs to below 10Kb [2, 21], but this remains insufficient
in the case of large-scale crawls. A more ingenious approach
is to use persistent structures on disk coupled with a cache
as in Mercator [13].
4.4 URL Caching
In order to speed up processing, we have developed a scalable cache
structure for the lookup and storage of URLs already archived.
Figure 1 describes how such a cache works:
[Figure 1: Scalable Cache. A per-worker local cache: incoming links are looked up via a 256-bucket JudyL-Array keyed by the URL CRC, whose values point to JudySL-Arrays keyed by the URL itself; links already encountered are rejected.]
The cache is available at the level of each worker. It acts as a
filter on URLs found and blocks those already encountered. The cache
needs to be scalable in order to cope with increasing loads. A rapid
implementation using a non-reversible hash function such as HAVAL,
TIGER, SHA1, GOST, MD5 or RIPEMD would be fatal to the system's
scalability: although these functions ensure some degree of uniqueness
in fingerprint construction, they are too slow to be acceptable here.
We cannot afford any latency in URL lookup or insertion once the cache
exceeds a certain size (over 10^7 key-values on average). This is why
we have focused on the construction of a generic cache that allows
key-value insertion and lookup in a scalable manner.
The Judy-Array API (available at http://judy.sourceforge.net/) enabled
us to achieve this objective. Without going into detail about
Judy-Arrays (see their site for more information), our cache is a
coherent coupling between a JudyL-Array and N JudySL-Arrays. The
JudyL-Array represents a hash table of N = 2^8 or N = 2^16 buckets
able to fit into the internal cache of the CPU. It is used to store
"key-numeric value" pairs where the key is a CRC of the URL and whose
value is a pointer to a JudySL-Array. The
second, the JudySL-Array, is a "key-compressed character string value"
type of hash, in which the key is the URL identifier and the value is
the number of times that the URL has been seen. This cache
construction is completely scalable and offers sub-linear response
times, or linear in the worst case (see the Judy-Array site for an
in-depth analysis of its performance). In the section on
experimentation (section 5) we will see the results of this type of
construction.
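The following is a minimal Erlang sketch of the same two-level idea, using a single ETS table instead of Judy arrays (module and function names are ours, not the Dominos implementation): the CRC-derived bucket plays the role of the JudyL first level and the full URL that of the JudySL second level, here folded into a composite key.

-module(url_cache).
-export([new/0, seen/2]).

new() ->
    ets:new(url_cache, [set, public]).

%% Returns true if the URL was already encountered, false otherwise,
%% and updates the per-URL counter in both cases.
seen(Cache, Url) ->
    Bucket = erlang:crc32(Url) band 255,      % 256 first-level buckets
    Key = {Bucket, Url},
    case ets:lookup(Cache, Key) of
        [] ->
            ets:insert(Cache, {Key, 1}),
            false;
        [{Key, N}] ->
            ets:insert(Cache, {Key, N + 1}),
            true
    end.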
4.5 Limiting Disk Access
Our aim here is to eliminate random disk access completely. One simple
idea, used in [20], is periodically to switch structures requiring
much memory over onto disk. For example, random access memory can be
used to keep only those URLs found most recently or most frequently,
in order to speed up comparisons. This requires no additional
development and is what we have decided to use. The persistence of
data on disk depends on the size of the data in memory (DS) and on
their age (DA). The data in memory are distributed transparently via
Mnesia, which is specially designed for this kind of situation. Data
may be duplicated ({ram_copies, [Nodes]}, {disc_copies, [Nodes]}) or
fragmented ({frag_properties, .....}) on the nodes in question.
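For illustration, a minimal sketch of how such tables can be declared with Mnesia (table names, record fields and node lists are assumptions, not the actual Dominos schema; Mnesia must already be started with a disc schema, and the ram-copy and disc-copy node lists must be disjoint):

-module(ddb_schema).
-export([create_tables/2]).

-record(url_entry, {crc, url, seen_count}).

create_tables(CrawlNodes, StorageNodes) ->
    %% Duplicated table: RAM copies on the crawl nodes,
    %% disc copies on the storage nodes.
    {atomic, ok} = mnesia:create_table(url_entry,
        [{attributes, record_info(fields, url_entry)},
         {ram_copies, CrawlNodes},
         {disc_copies, StorageNodes}]),
    %% Fragmented table spread over a pool of nodes.
    {atomic, ok} = mnesia:create_table(url_queue,
        [{attributes, [id, url, state]},
         {frag_properties, [{node_pool, CrawlNodes ++ StorageNodes},
                            {n_fragments, 8},
                            {n_ram_copies, 1}]}]),
    ok.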
According to [20], there are on average 8 non-duplicated hypertext
links per page downloaded. This means that the number of pages
retrieved but not yet archived grows considerably: after archiving 20
million pages, over 100 million URLs would still be waiting. This has
various repercussions, as newly-discovered URLs will be crawled only
several days, or even weeks, later. At this speed, the freshness of
the archived data is directly affected.
4.6 High Availability
In order to understand the notion of High Availability, we first need
to distinguish between a system's reliability and its availability.
Reliability is an attribute that measures service continuity when no
failure occurs. Manufacturers generally provide a statistical estimate
of this value for their equipment, known as the MTBF (Mean Time
Between Failures). A high MTBF provides a valuable indication of a
component's ability to avoid overly frequent failure.
In the case of a complex system (one that can be broken down into
hardware or software parts), we talk about the MTTF (Mean Time To
Failure). This denotes the average time elapsed until service stops as
the result of a failure in a component or in software.
The attribute of availability is more difficult to calculate
as it includes a system's ability to react correctly in case of
failure in order to restart service as quickly as possible.
It is therefore necessary to quantify the time interval during
which service is unavailable before being re-established:
the acronym MTTR (Mean Time To Repair) is used to
represent this value.
The formula used to calculate the rate of a system's
availability is as follows:
availability = MTTF / (MTTF + MTTR)    (4)
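As a purely illustrative example (the figures are assumed, not measured values from Dominos), a service with an MTTF of one year (525,600 minutes) and an MTTR of 5 minutes would give availability = 525600 / (525600 + 5), which is approximately 0.99999, i.e. roughly "five nines".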
A system aiming for a high level of availability should therefore have
either a high MTTF or a low MTTR.
Another, more practical, approach consists in measuring the time
period during which service is down in order to evaluate the level of
availability. This is the method most frequently adopted, even if it
fails to take account of the frequency of failure, focusing rather on
its duration. Calculation is usually based on a calendar year: the
higher the percentage of service availability, the nearer it comes to
High Availability.
It is fairly easy to qualify the level of High Availability of a
service from the cumulative downtime, by using the normalized
principle of "9's" (below three nines, we are no longer talking about
High Availability, but merely availability). In order to provide an
estimation of Dominos' High Availability, we carried out performance
tests by fault injection. It is clear that a more accurate way of
measuring this criterion would be to let the system run for a whole
year, as explained above; however, time constraints led us to adopt
this solution. Our injector consists in placing pieces of faulty code
in each part of the system and then measuring the time required for
the system to make the service available again. Once again, Erlang
proved to be an excellent choice for setting up these regression
tests. The table below shows the average time required by Dominos to
respond to these cases of service unavailability.
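For illustration, a minimal sketch of one such measurement, assuming (our assumption, not the actual Dominos injector) that the worker is registered under a name and that its supervisor re-registers the replacement under the same name:

-module(mttr_probe).
-export([measure/1]).

%% Kill a registered worker and time how long it takes until the
%% supervisor has registered a live replacement under the same name.
measure(WorkerName) ->
    OldPid = whereis(WorkerName),
    T0 = erlang:monotonic_time(microsecond),
    exit(OldPid, kill),                              % inject the fault
    wait_for_restart(WorkerName, OldPid),
    erlang:monotonic_time(microsecond) - T0.         % MTTR in microseconds

wait_for_restart(Name, OldPid) ->
    case whereis(Name) of
        NewPid when is_pid(NewPid), NewPid =/= OldPid -> ok;
        _ -> timer:sleep(1), wait_for_restart(Name, OldPid)
    end.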
Table 1 clearly shows Dominos' High Availability.

Service       Error             MTTR (microsec)
GenServer     10^3 bad match    320
cRPC          10^3 bad match    70
DDB           10^7 tuples       9 x 10^6
Node          10^3 bad match    250
Supervisor    10^3 bad match    60
Worker        10^3 bad match    115

Table 1: MTTR Dominos

We see that for 10^3 error matches, the system resumes service
virtually instantaneously. The DDB was tested on 10^7 tuples in random
access memory and resumed service after approximately 9 seconds.
This corresponds to an
excellent MTTR, given that the injections were made on
a PIII-966Mhz with 512Mb of RAM. From these results, we
may label our system as being High Availability, as opposed
to other architectures that consider High Availability only
in the sense of failure not affecting other components of
the system, but in which service restart of a component
unfortunately requires manual intervention every time.
EXPERIMENTATION
This section describes Dominos' experimental results tested
on 5 DELL machines:
nico: Intel Pentium 4 - 1.6 Ghz, 256 Mb RAM. Crawl
node (supervisor, workers). Activates a local cRPC.
zico: Intel Pentium 4 - 1.6 Ghz, 256 Mb RAM. Crawl
node (supervisor, workers). Activates a local cRPC.
chopin: Intel Pentium 3 - 966 Mhz, 512 Mb RAM.
Main node loaded on ServerGen and DB. Also handles
crawling (supervisor, workers).
Activates a local
cRPC.
gao: Intel Pentium 3 - 500 Mhz, 256 Mb RAM. Node
for DB fragmentation. Activates a local cRPC.
margo: Intel Pentium 2 - 333 Mhz, 256 Mb RAM.
Node for DB fragmentation. Activates a local cRPC.
Machines chopin, gao and margo are not dedicated solely
to crawling and are used as everyday workstations. Disk
size is not taken into account as no data were actually
stored during these tests.
Everything was therefore carried
out using random access memory with a network of
100 Mb/second.
Dominos performed 25,116,487 HTTP
requests after 9 hours of crawling with an average of
816 documents/second for 49Kb per document.
Three
nodes (nico, zico and chopin) were used in crawling, each
having 400 workers.
We restricted ourselves to a total
of 1,200 workers, due to problems generated by Dominos
at intranet level.
The firewall set up to filter access
is considerably detrimental to performance because of its
inability to keep up with the load imposed by Dominos.
Third-party tests have shown that peaks of only 4,000
HTTP requests/second cause the immediate collapse of the
firewall. The firewall is not the only limiting factor, as the
same tests have shown the incapacity of Web servers such
as Apache2, Caudium or Jigsaw to withstand such loads
(see http://www.sics.se/~joe/apachevsyaws.html). Figure 2
(left part) shows the average URL extraction per document
crawled using a single worker. The abscissa (x) axis represents
the number of documents treated, and the ordinate
(y) axis gives the time in microseconds corresponding to
extraction.
In the right-hand figure, the abscissa axis
represents the same quantity, though this time in terms
of data volume (Mb). We can see a high level of parsing
reaching an average of 3,000 pages/second at a speed of
70Mb/second.
[Figure 2: Link Extraction. Left: average parsing time (microseconds) versus number of parsed documents (PD); right: average parsing time versus parsed document size in Mb (PDS).]
In Figure 3 we see that URL normalization
is as efficient as extraction in terms of speed. The abscissa
axis at the top (and respectively at the bottom) represents
the number of documents processed per normalization phase
(respectively the quantity of documents in terms of volume).
Each worker normalizes on average 1,000 documents/second
, which is equivalent to 37,000 URLs/second at a speed
of 40Mb/second. Finally, the URL cache structure ensures
a high degree of scalability (Figure 3). The abscissa axis in
this figure represents the number of key-values inserted or
retrieved. The cache behaves very much like a step function, due to
key compression in the Judy-Array: insertion/retrieval time increases
and then plateaus in bands of roughly 100,000 key-values. We should
however point out that URL extraction and normalization also make use of
this type of cache so as to avoid processing a URL already
encountered.
[Figure 3: URL Normalization and Cache Performance. Average normalization time (microseconds) versus number of normalized documents (AD), number of normalized URLs (AU) and document size in Mb (ADS), together with scalable cache insertion versus retrieval time as a function of the number of key-values.]
CONCLUSION
In the present paper, we have introduced a high availability crawling
system called Dominos.
This system
has been created in the framework of experimentation
for French Web legal deposit carried out at the Institut
National de l'Audiovisuel (INA). Dominos is a dynamic
system, whereby the processes making up its kernel are
mobile.
90% of this system was developed using the Erlang
programming language, which accounts for its highly flexible
deployment, maintainability and enhanced fault tolerance.
Despite having different objectives, we have been able to
compare it with other documented Web crawling systems
(Mercator, Internet Archive . . . ) and have shown it to be
superior in terms of crawl speed, document parsing and
process management without system restart.
Dominos is more complex than its description here. We
have not touched upon archival storage and indexation.
We have preferred to concentrate rather on the detail of
implementation of the Dominos kernel itself, a strategic
component that is often overlooked by other systems (in particular
proprietary ones) or implemented inefficiently by the others.
However, there is still room for improvement of the system. At
present, crawled archives are managed by NFS, a file system that is
moderately efficient for this type of problem. A switchover to Lustre
(http://www.lustre.org/), a distributed file system with a radically
higher level of performance, is underway.
REFERENCES
[1] Z. BarYossef, A. Berg, S. Chien, J. Fakcharoenphol,
and D. Weitz. Approximating aggregate queries about
web pages via random walks. In Proc. of 26th Int.
Conf. on Very Large Data Bases, 2000.
[2] K. Bharat, A. Broder, M. Henzinger, P. Kumar, and
S. Venkatasubramanian. The connectivity server: Fast
access to linkage information on the web, 1998.
[3] S. Brin and L. Page. The anatomy of a large-scale
hypertextual web search engine. In Proc. of the
Seventh World-Wide Web Conference, 1998.
[4] M. Burner. Crawling towards eternity: Building an
archive of the world wide web.
http://www.webtechniques.com/archives/1997/05/burner/,
1997.
[5] S. Chakrabarti, M. V. D. Berg, and B. Dom.
Distributed hypertext resource discovery through
example. In Proc. of 25th Int. Conf. on Very Large
Data Base, pages 375-386, 1997.
[6] S. Chakrabarti, M. V. D. Berg, and B. Dom. Focused
crawling: A new approach to topic-specific web
resource discovery. In Proc. of the 8th Int. World
Wide Web Conference, 1999.
[7] J. Cho and H. Garcia-Molina. The evolution of the
web and implications for an incremental crawler. In
Proc. of 26th Int. Conf. on Very Large Data Bases,
pages 117-128, 2000.
[8] J. Cho and H. Garcia-Molina. Synchronizing a
database to improve freshness. In Proc. of the ACM
SIGMOD Int. Conf. on Management of Data, 2000.
[9] J. Cho, H. Garcia-Molina, and L. Page. Efficient
crawling through url ordering. In 7th Int. World Wide
Web Conference, 1998.
[10] M. Diligenti, F. Coetzee, S. Lawrence, C. Giles, and
M. Gori. Focused crawling using context graphs. In
Proc. of 26th Int. Conf. on Very Large Data Bases,
2000.
[11] M. Henzinger, A. Heydon, M. Mitzenmacher, and
M. Najork. Measuring index quality using random
walks on the web. In Proc. of the 8th Int. World Wide
Web Conference (WWW8), pages 213-225, 1999.
[12] M. Henzinger, A. Heydon, M. Mitzenmacher, and
M. Najork. On near-uniform url sampling. In Proc. of
the 9th Int. World Wide Web Conference, 2000.
[13] A. Heydon and M. Najork. Mercator: A scalable,
extensible web crawler. World Wide Web Conference,
pages 219-229, 1999.
[14] J. Hirai, S. Raghavan, H. Garcia-Molina, and
A. Paepcke. WebBase: A repository of web pages. In
Proc. of the 9th Int. World Wide Web Conference,
2000.
[15] B. Kahle. Archiving the internet. Scientific American,
1997.
[16] S. Lawrence and C. L. Giles. Searching the world wide
web. Science 280, pages 98-100, 1998.
[17] M. Najork and J. Wiener. Breadth-first search
crawling yields high-quality pages. In 10th Int. World
Wide Web Conference, 2001.
[18] J. Rennie and A. McCallum. Using reinforcement
learning to spider the web efficiently. In Proc. of the
Int. Conf. on Machine Learning, 1999.
[19] S. Russell and P. Norvig. Artificial Intelligence: A
modern Approach. Prentice Hall, 1995.
[20] V. Shkapenyuk and T. Suel. Design and
implementation of a high-performance distributed web
crawler. Polytechnic University, Brooklyn, March 2001.
[21] T. Suel and J. Yuan. Compressing the graph structure
of the web. In Proc. of the IEEE Data Compression
Conference, 2001.
[22] J. Talim, Z. Liu, P. Nain, and E. Coffman. Controlling
robots of web search engines. In SIGMETRICS
Conference, 2001.
| Breadth first crawling;Hierarchical Cooperation;limiting disk access;fault tolerance;Dominos nodes;dominos process;Dominos distributed database;breadth-first crawling;repetitive crawling;URL caching;Dominos Generic server;Document fingerprint;Deep web crawling;Dominos RPC concurrent;Random walks and sampling;Web Crawler;maintaiability and configurability;deep web crawling;High Availability System;real-time distributed system;crawling system;high performance crawling system;high availability;Erlang development kit;targeted crawling |
101 | Hiperlan/2 Public Access Interworking with 3G Cellular Systems | This paper presents a technical overview of the Hiperlan/2 3G interworking concept. It does not attempt to provide any business justification or plan for Public Access operation. After a brief resume of public access operation below, section 2 then introduces an overview of the technologies concerned. Section 3 describes the system approach and presents the current reference architecture used within the BRAN standardisation activity. Section 4 then goes on to cover in more detail the primary functions of the system such as authentication, mobility, quality of service (QoS) and subscription. It is worth noting that since the Japanese WLAN standard HiSWANa is very similar to Hiperlan/2, much of the technical information within this paper is directly applicable to this system, albeit with some minor changes to the authentication scheme. Additionally the high level 3G and external network interworking reference architecture is also applicable to IEEE 802.11. Finally, section 5 briefly introduces the standardisation relationships between ETSI BRAN, WIG, 3GPP, IETF, IEEE 802.11 and MMAC HSWA. | 1.1. Public access operation
Recently, mobile business professionals have been looking
for a more efficient way to access corporate information systems
and databases remotely through the Internet backbone.
However, the high bandwidth demand of the typical office applications
, such as large email attachment downloading, often
calls for very fast transmission capacity. Indeed certain hot
spots, like hotels, airports and railway stations are a natural
place to use such services. However, in these places the time
available for information download typically is fairly limited.
In light of this, there is clearly a need for a public wireless access
solution that covers the demand for data-intensive applications,
enables smooth on-line access to corporate data services in hot spots,
and allows a user to roam from a private micro-cell network (e.g., a
Hiperlan/2 network) to a wide-area cellular network, or more
specifically a 3G network.
Together with high data rate cellular access, Hiperlan/2 has
the potential to fulfil end user demands in hot spot environments
. Hiperlan/2 offers a possibility for cellular operators
to offer additional capacity and higher bandwidths for end
users without sacrificing the capacity of the cellular users, as
Hiperlans operate on unlicensed or licence-exempt frequency bands.
Also, Hiperlan/2 has QoS mechanisms capable of matching those
available in 3G systems. Furthermore, interworking solutions enable operators
to utilise the existing cellular infrastructure investments
and well established roaming agreements for Hiperlan/2 network
subscriber management and billing.
Technology overview
This section briefly introduces the technologies that are addressed
within this paper.
2.1. Hiperlan/2 summary
Hiperlan/2 is intended to provide local wireless access to IP,
Ethernet, IEEE 1394, ATM and 3G infrastructure by both stationary
and moving terminals that interact with access points.
The intention is that access points are connected to an IP, Ethernet
, IEEE 1394, ATM or 3G backbone network. A number
of these access points are required to service all but the smallest
networks of this kind, and therefore the wireless network
as a whole supports handovers of connections between access
points.
2.2. Similar WLAN interworking schemes
It should be noted that the interworking model presented in
this paper is also applicable to the other WLAN systems, i.e.
IEEE 802.11a/b and MMAC HiSWANa (High Speed Wireless
Access Network), albeit with some minor modifications
to the authentication schemes. It has been the intention of
BRAN to produce a model which not only fits the requirements
of Hiperlan/2-3G interworking, but also to try and
meet those of the sister WLAN systems operating in the same
market. A working agreement has been underway between
ETSI BRAN and MMAC HSWA for over 1 year, and with
the recent creation of WIG (see section 5), IEEE 802.11 is
also working on a similar model.
2.3. 3G summary
Within the framework of International Mobile Telecommunications
2000 (IMT-2000), defined by the International
Telecommunications Union (ITU), the 3rd Generation Partnership
Project (3GPP) are developing the Universal Mobile
Telecommunications System (UMTS) which is one of the major
third generation mobile systems. Additionally the 3rd
Generation Partnership Project 2 (3GPP2) is also developing
another 3G system, Code Division Multiple Access 2000
(CDMA-2000). Most of the work within BRAN has concentrated
on UMTS, although most of the architectural aspects
are equally applicable to Hiperlan/2 interworking with
CDMA-2000 and indeed pre-3G systems such as General
Packet Radio Services (GPRS).
The current working UMTS standard, Release 4, was finalised in
December 2000, with ongoing development
work contributing to Release 5, due to be completed by the
end of 2002. A future release 6 is currently planned for the autumn
of 2003, with worldwide deployment expected by 2005.
System approach
This section describes the current interworking models being
worked upon within BRAN at the current time. The BRAN
Network Reference Architecture, shown in figure 1, identifies
the functions and interfaces that are required within a Hiperlan/2
network in order to support inter-operation with 3G systems
.
The focus of current work is the interface between the
Access Point (AP) and the Service provider network (SPN)
which is encapsulated by the Lx interface. The aim of the
Hiperlan/2-3G interworking work item is to standardise these
interfaces, initially focusing on AAA (Authentication, Authorisation
and Accounting) functionality.
A secondary aim is to create a model suitable for all the
5 GHz WLAN systems (e.g., Hiperlan/2, HiSWANa, IEEE
Figure 1. Reference architecture.
802.11a) and all 3G systems (e.g., CDMA-2000, UMTS),
thus creating a world wide standard for interworking as mentioned
in section 5.
Other interfaces between the AP and external networks and
interfaces within the AP are outside the scope of this current
work.
Figure 1 shows the reference architecture of the interworking
model. It presents logical entities within which the following
functions are supported:
Authentication: supports both SIM-based and SIM-less
authentication. The mobile terminal (MT) communicates
via the Attendant with an authentication server in the visited
network, for example a local AAA server, across the
Ls interface.
Authorisation and User Policy: the SPN retrieves authorisation
and user subscription information from the home
network when the user attaches to it. Authorisation information
is stored within a policy decision function in the
SPN. Interfaces used for this are Lp and Ls.
Accounting: the resources used by a MT and the QoS provided
to a user are both monitored by the Resource Monitor
. Accounting records are sent to accounting functions
in the visited network via the La interface.
Network Management: the Management Agent provides
basic network and performance monitoring, and allows the
configuration of the AP via the Lm interface.
Admission Control and QoS: a policy decision function in
the SPN decides whether a new session with a requested
QoS can be admitted based on network load and user subscription
information. The decision is passed to the Policy
Enforcement function via the Lp interface.
Inter-AP Context Transfer: the Handover Support function
allows the transfer of context information concerning
a user/mobile node, e.g., QoS state, across the Lh interface
from the old to the new AP between which the mobile is
handing over.
Mobility: mobility is a user plane function that performs
re-routing of data across the network. The re-routing may
simply be satisfied by layer 2 switching or may require
support for a mobility protocol such as Mobile IP depending
on the technology used within the SPN. Mobility is an
attribute of the Lr interface.
Location Services: the Location Server function provides
positioning information to support location services. Information
is passed to SPN location functions via the Ll
interface.
Primary functions
This section describes the primary functions of this model (refer
to figure 1) in further detail, specifically: authentication
and accounting, mobility and QoS.
4.1. Authentication and authorisation
A key element to the integration of disparate systems is the
ability of the SPN to extract both authentication and subscription
information from the mobile users' home networks when
an initial association is requested. Many users want to make
use of their existing data devices (e.g., Laptop, Palmtop) without
additional hardware/software requirements. Conversely
for both users and mobile operators it is beneficial to be able
to base the user authentication and accounting on existing cellular
accounts, as well as to be able to have Hiperlan/2-only
operators and users; in any case, for reasons of commonality
in MT and network (indeed SPN) development it is important
to be able to have a single set of AAA protocols which
supports all the cases.
4.1.1. Loose coupling
The rest of this paper concentrates on loose coupling solutions
. "Loose coupling", is generally defined as the utilisation
of Hiperlan/2 as a packet based access network complementary
to current 3G networks, utilising the 3G subscriber databases
but without any user plane Iu type interface, as shown
in figure 1. Within the UMTS context, this scheme avoids
any impact on the SGSN and GGSN nodes. Security, mobility
and QoS issues are addressed using Internet Engineering
Task Force (IETF) schemes.
Other schemes which essentially replace the User Terminal
Radio Access Network (UTRAN) of UMTS with a HIRAN
(Hiperlan Radio Access Network) are referred to as "Tight
Coupling", but are not currently being considered within the
work of BRAN.
4.1.2. Authentication flavours
This section describes the principle functions of the loose
coupling interworking system and explains the different authentication
flavours that are under investigation. The focus
of current work is the interface between the AP and the SPN.
Other interfaces between the AP and external networks and
interfaces within the AP are initially considered to be implementation
or profile specific.
The primary difference between these flavours is in the
authentication server itself, and these are referred to as the
"IETF flavour" and the "UMTS-HSS flavour", where the
Home Subscriber Server (HSS) is a specific UMTS term for a
combined AAA home server (AAAH)/Home Location Register
(HLR) unit. The motivation for network operators to build
up Hiperlan/2 networks based on each flavour may be different
for each operator. However, both flavours offer a maximum
of flexibility through the use of separate Interworking
Units (IWU) and allow loose coupling to existing and future
cellular mobile networks. These alternatives are presented in
figure 2.
IETF flavour.
The IETF flavour outlined in figure 2 is driven
by the requirement to add only minimal software functionality
to the terminals (e.g., by downloading java applets), so
that the use of a Hiperlan/2 mobile access network does not
require a radical change in the functionality (hardware or software
) compared to that required by broadband wireless data
access in the corporate or home scenarios. Within a multiprovider
network, the WLAN operator (who also could be a
normal ISP) does not necessarily need to be the 3G operator
as well, but there could still be an interworking between the
networks.
Within this approach Hiperlan/2 users may be either existing
3G subscribers or just Hiperlan/2 network subscribers.
These users want to make use of their existing data devices
(e.g., Laptop, Palmtop) without additional hardware/software
requirements. For both users and mobile operators it is beneficial
to be able to base the user authentication and accounting
on existing cellular accounts, as well as to be able to have
Hiperlan/2-only operators and users; in any case, for reasons
of commonality in MT and AP development it is important to
be able to have a single set of AAA protocols which supports
all the cases.
UMTS-HSS flavour.
Alternatively the UMTS flavour (also
described within figure 1) allows a mobile subscriber using
a Hiperlan/2 mobile access network for broadband wireless
data access to appear as a normal cellular user employing
standard procedures and interfaces for authentication purposes
. It is important to notice that for this scenario functionality
normally provided through a user services identity
module (USIM) is required in the user equipment. The USIM
provides new and enhanced security features in addition to
those provided by 2nd Generation (2G) SIM (e.g., mutual authentication
) as defined by 3GPP (3G Partnership Program).
The UMTS-HSS flavour definitely requires that a user is a native
cellular subscriber; in addition, and distinctly from the IETF
flavoured approach, standard cellular procedures and parameters for
authentication are used (e.g., USIM quintets).
In this way a mobile subscriber using a Hiperlan/2 mobile access
network for broadband wireless data access will appear
as a normal cellular user employing standard procedures and
interfaces for authentication purposes. It is important to notice
that for this scenario USIM functionality is required in
the user equipment.
Figure 2. Loose coupling authentication flavours.
For the IETF flavoured approach there is no need to integrate
the Hiperlan/2 security architecture with the UMTS
security architecture [2]. It might not even be necessary to
implement all of the Hiperlan/2 security features if security is
applied at a higher level, such as using IPsec at the IP level.
An additional situation that must be considered is the use of
pre-paid SIM cards. This scenario will introduce additional
requirements for hot billing and associated functions.
4.1.3. EAPOH
For either flavour authentication is carried out using a mechanism
based on EAP (Extensible Authentication Protocol) [3].
This mechanism is called EAPOH (EAP over Hiperlan/2) and
is analogous to the EAPOL (EAP over LANs) mechanism as
defined in IEEE 802.1X. On the network side, Diameter [4]
is used to relay EAP packets between the AP and AAAH.
Between the AP and MT, EAP packets and additional Hiperlan/2
specific control packets (termed pseudo-EAP packets)
are transferred over the radio interface. This scheme directly
supports IETF flavour authentication, and by use of the proposed
EAP AKA (Authentication and Key Agreement) mechanism
would also directly support the UMTS flavour authentication
.
Once an association has been established, authorisation information
(based on authentication and subscription) stored
within a Policy Decision Function within the SPN itself can
be transmitted to the AP. This unit is then able to regulate services
such as time-based billing and allocation of network and
radio resources to the required user service. Mobile users with
different levels of subscription (e.g., "bronze, silver, gold")
can be supported via this mechanism, with different services
being configured via the policy interface. A change in authentication
credentials can also be managed at this point.
4.1.4. Key exchange
Key agreement for confidentiality and integrity protection is
an integral part of the UMTS authentication procedure, and
hence the UTRAN confidentiality and integrity mechanisms
should be reused within the Hiperlan/2 when interworking
with a 3G SPN (i.e. core network). This will also increase
the applied level of security.
The Diffie-Hellman encryption key agreement procedure,
as used by the Hiperlan/2 air interface, could be used to improve
user identity confidentiality. By initiating encryption
before UMTS AKA is performed, the user identity will not
have to be transmitted in clear over the radio interface, as
is the case in UMTS when the user enters a network for the
first time. Thus, this constitutes an improvement compared to
UMTS security.
It is also important to have a secure connection between
APs within the same network if session keys or other sensitive
information are to be transferred between them. A secure connection
can mean either that the APs trust each other and that no one else can
intercept the communication between them, or that authentication is
performed and integrity and confidentiality protection are in place.
4.1.5. Subscriber data
There are three basic ways in which the subscriber management
for Hiperlan/2 and 3G users can be co-ordinated:
Have the interworking between the Hiperlan/2 subscriber
database and HLR/HSS. This is for the case where the interworking
is managed through a partnership or roaming
agreement.
The administrative domains' AAA servers share security
association or use an AAA broker.
The Hiperlan/2 authentication could be done on the basis
of a (U)SIM token. The 3G authentication and accounting
capabilities could be extended to support access authentication
based on IETF protocols. This means either integrating
HLR and AAA functions within one unit (e.g.,
a HSS unit), or by merging native HLR functions of the
3G network with AAA functions required to support IP
access.
Based on these different ways for subscriber management,
the user authentication identifier can take one of three formats (a
purely illustrative sketch follows the list):
Network Address Identifier (NAI),
International Mobile Subscriber Identity (IMSI) (requires
a (U)SIM card), and
IMSI encapsulated within a NAI (requires a (U)SIM card).
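Purely as an illustration of these three formats (the realm strings and the "imsi-" prefix used for encapsulation below are placeholders of our own, not the 3GPP-defined encoding):

-module(auth_id).
-export([nai/2, imsi/1, imsi_in_nai/2]).

nai(User, Realm)         -> User ++ "@" ++ Realm.             % plain NAI, e.g. user@operator.example
imsi(Imsi)               -> Imsi.                             % raw IMSI read from the (U)SIM
imsi_in_nai(Imsi, Realm) -> "imsi-" ++ Imsi ++ "@" ++ Realm.  % IMSI carried as the user part of a NAI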
4.1.6. Pre-paid SIM cards
As far as the HLR within the SPN is concerned, it cannot tell whether
a customer is pre-paid or not.
Hence, this prevents a non-subscriber to this specific 3G network
from using the system, if the operator wishes to impose
this restriction.
As an example, pre-paid calls within a 2G network are
handled via an Intelligent Network (IN) probably co-located
with the HLR. When a call is initiated, the switch can be programmed
with a time limit, or if credit runs out the IN can
signal termination of the call. This then requires that the SPN
knows the remaining time available for any given customer.
Currently the only signals that originate from the IN are to
terminate the call from the network side.
This may be undesirable in a Hiperlan/2-3G network, so
that a more graceful solution is required. A suitable solution
is to add pre-paid SIM operation to our system together with
hot billing (i.e. bill upon demand) or triggered session termination
. This could be achieved either by the AAAL polling
the SPN utilising RADIUS [5] to determine whether the customer
is still in credit, or by using a more feature rich protocol
such as Diameter [4] which allows network signalling directly
to the MT.
The benefit of the AAA approach is to allow the operator
to present the mobile user with a web page (for example), as
the pre-paid time period is about to expire, allowing them to
purchase more airtime.
All these solutions would require an increased integration
effort with the SPN subscriber management system. Further
additional services such as Customized Applications for
Mobile Network Enhanced Logic (CAMEL) may also allow
roaming with pre-paid SIM cards.
4.2. Accounting
In the reference architecture of figure 2, the accounting function
monitors the resource usage of a user in order to allow
cost allocation, auditing and billing. The charging/accounting
is carried out according to a series of accounting and resource
monitoring metrics, which are derived from the policy function
and network management information.
The types of information needed in order to monitor each
user's resource consumption could include parameters such
as, for example, volume of traffic, bandwidth consumption,
etc. Each of these metrics could have AP specific aspects
concerning the resources consumed over the air interface and
those consumed across the SPN, respectively. As well as providing
data for billing and auditing purposes, this information
is exchanged with the Policy Enforcement/Decision functions
in order to provide better information on which to base policy
decisions.
The accounting function processes the usage related information
including summarisation of results and the generation
of session records. This information may then be forwarded
to other accounting functions within and outside the network,
for example a billing function. This information may also be
passed to the Policy Decision function in order to improve
the quality of policy decisions; vice versa the Policy Decision
function can give information about the QoS, which may affect
the session record. There are also a number of extensions
and enhancements that can be made to the basic interworking
functionality such as those for the provision of support for
QoS and mobility.
In a multiprovider network, different sorts of inter-relationships
between the providers can be established. The inter-relationship
will depend upon commercial conditions, which
may change over time. Network Operators have exclusive
agreements with their customers, including charging and
billing, and also for services provided by other Network Operators/Service
Providers. Consequently, it must be possible
to form different charging and accounting models and this requires
correspondent capabilities from the networks.
Charging of user service access is a different issue from the
issue of accounting between Network Operators and Service
Providers. Although the issues are related, charging and accounting
should be considered separately. For the accounting
issue it is important for the individual Network Operator or
Service Provider to monitor and register access use provided
to his customers.
Network operators and service providers that regularly
provide services to the same customers could either charge
and bill them individually or arrange a common activity. For
joint provider charging/billing, the providers need revenue accounting
in accordance with the service from each provider.
For joint provider charging of users, it becomes necessary
to transfer access/session related data from the providers to
the charging entity. Mechanisms for revenue accounting are
needed, such as technical configuration for revenue accounting
. This leads to transfer of related data from the Network
Operator and/or Service Providers to the revenue accounting
entity.
The following parameters may be used for charging and
revenue accounting:
basic access/session (pay by subscription),
toll free (like a 0800 call),
premium rate access/session,
access/session duration,
credit card access/session,
pre-paid,
calendar and time related charging,
priority,
Quality of Service,
duration dependent charging,
flat rate,
volume of transferred packet traffic,
rate of transferred packet traffic (Volume/sec),
multiple rate charge.
4.3. Mobility
Mobility can be handled by a number of different approaches.
Indeed many mobility schemes have been developed in the
IETF that could well be considered along with the work of the
MIND (Mobile IP based Network Developments) project that
has considered mobility in evolved IP networks with WLAN
technologies. Mobility support is desirable as this functionality
would be able to provide support for roaming with an
active connection between the interworked networks, for example
, to support roaming from UMTS to WLAN in a hotspot
for the downloading of large amounts of data.
In the loose coupling approach, the mobility within the
Hiperlan/2 network is provided by native Hiperlan/2 (i.e.
RLC layer) facilities, possibly extended by the Convergence
Layer (CL) in use (e.g., the current Ethernet CL [6], or a future
IP CL). This functionality should be taken unchanged
in the loose coupling approach, i.e. handover between access
points of the same Hiperlan/2 network does not need
to be considered especially here as network handover capabilities
of Hiperlan/2 RLC are supported by both MTs and
APs.
Given that Hiperlan/2 network handover is supported, further
details for completing the mobility between access points
are provided by CL dependent functionality.
Completion of this functionality to cover interactions between
the APs and other parts of the network (excluding the
terminal and therefore independent of the air interface) are
currently under development outside BRAN. In the special
case where the infrastructure of a single Hiperlan/2 network
spans more than one IP sub-network, some of the above approaches
assume an additional level of mobility support that
may involve the terminal.
4.3.1. Roaming between Hiperlan/2 and 3G
For the case of mobility between Hiperlan/2 and 3G access
networks, recall that we have the following basic scenario:
A MT attaches to a Hiperlan/2 network, authenticates and acquires
an IP address. At that stage, it can access IP services
using that address while it remains within that Hiperlan/2 network
. If the MT moves to a network of a different technology
(i.e. UMTS), it can re-authenticate and acquire an IP address
in the packet domain of that network, and continue to use IP
services there.
We have referred to this basic case as AAA roaming. Note
that while it provides mobility for the user between networks,
any active sessions (e.g., multimedia calls or TCP connections
) will be dropped on the handover between the networks
because of the IP address change (e.g., when the address was obtained
via the Dynamic Host Configuration Protocol, DHCP).
It is possible to provide enhanced mobility support, including
handover between Hiperlan/2 access networks and 3G access
networks in this scenario by using servers located outside
the access network. Two such examples are:
The MT can register the locally acquired IP address with
a Mobile IP (MIP) home agent as a co-located care-of address
, in which case handover between networks is handled
by mobile IP. This applies to MIPv4 and MIPv6 (and
is the only mode of operation allowed for MIPv6).
The MT can register the locally acquired IP address with
an application layer server such as a Session Initiation Protocol
(SIP) proxy. Handover between two networks can
then be handled using SIP (re-invite message).
Note that in both these cases, the fact that upper layer mobility
is in use is visible only to the terminal and SPN server,
and in particular is invisible to the access network. Therefore,
it is automatically possible, and can be implemented according
to existing standards, without impact on the Hiperlan/2
network itself. We therefore consider this as the basic case
for the loose coupling approach.
Another alternative is the use of a Foreign Agent care-of
address (MIPv4 only). This requires the integration of Foreign
Agent functionality with the Hiperlan/2 network, but has
the advantage of decreasing the number of IPv4 addresses that
have to be allocated. On the other hand, for MTs that do not
wish to invoke global mobility support in this case, a locally
assigned IP address is still required, and the access network
therefore has to be able to operate in two modes.
Two options for further study are:
The option to integrate access authentication (the purpose
of this loose coupling standard) with Mobile IP
home agent registration (If Diameter is used, it is already
present). This would allow faster attach to the network in
the case of a MT using MIP, since it only requires one set
of authentication exchanges; however, it also requires integration
on the control plane between the AAAH and the
Mobile IP home agent itself. It is our current assumption
that this integration should be carried out in a way that is
independent of the particular access network being used,
and is therefore out of scope of this activity.
The implications of using services (e.g., SIP call control) from
the UMTS IMS (Internet Multimedia Subsystem), which would provide
some global mobility capability. This requires analysis of how the IMS would interface
to the Hiperlan/2 access network (if at all).
4.3.2. Handover
For handovers within the Hiperlan/2 network, the terminal
must have enough information to be able to make a handover
decision for itself, or be able to react to a network decision to
handover. Indeed these decision driven events are referred to
as triggers, resulting in Network centric triggers or Terminal
centric triggers.
Simple triggers include the following:
Network Centric: Poor network resources or low bandwidth
, resulting in poor or changing QoS. Change of policy
based on charging (i.e. end of pre-paid time).
Terminal Centric: Poor signal strength. Change of QoS.
4.4. QoS
QoS support is available within the Hiperlan/2 specification
but requires additional functionality in the interworking specifications
for the provision of QoS through the CN rather than
simply over the air. QoS is a key concept, within UMTS, and
together with the additional QoS functionality in Hiperlan/2,
a consistent QoS approach can therefore be provided. A number
of approaches to QoS currently exist which still need to
be considered at this stage.
QoS within the Hiperlan/2 network must be supported between
the MT and external networks, such as the Internet. In
the loose coupling scenario, the data path is not constrained
to travelling across the 3G SPN, e.g., via the SGSN/GGSNs.
Therefore no interworking is required between QoS mechanisms
used within the 3G and Hiperlan/2 network. There is a
possible interaction regarding the interpretation and mapping
of UMTS QoS parameters onto the QoS mechanisms used
in the Hiperlan/2 network. The actual provisioning of QoS
across the Hiperlan/2 network is dependent on the type of the
infrastructure technology used, and therefore the capabilities
of the CL.
4.4.1. HiperLAN2/Ethernet QoS mapping
Within the Hiperlan/2 specification, radio bearers are referred
to as DLC connections. A DLC connection is characterised
by offering a specific support for QoS, for instance in terms of
bandwidth, delay, jitter and bit error rate. The characteristics
of supported QoS classes are implementation specific. A user
might request for multiple DLC connections, each transferring
a specific traffic type, which indicates that the traffic division
is traffic type based and not application based. The DLC
connection set-up does not necessarily result in immediate assignment
of resources though. If the MT has not negotiated a
fixed capacity agreement with the AP, it must request capacity
by sending a resource request (RR) to the AP whenever it has
data to transmit. The allocation of resources may thereby be
very dynamic. The scheduling of the resources is vendor specific
and is therefore not included in the Hiperlan/2 standard,
which also means that QoS parameters from higher layers are
not either.
Hiperlan/2 specific QoS support for the DLC connection
comprises centralised resource scheduling through the
TDMA-based MAC structure, appropriate error control (acknowledged
, unacknowledged or repetition) with associated
protocol settings (ARQ window size, number of retransmissions
and discarding), and the physical layer QoS support.
Another QoS feature included in the Hiperlan/2 specification
is a polling mechanism that enables the AP to regularly poll
the MT for its traffic status, thus providing rapid access for
real-time services. The CL acts as an integrator of Hiperlan/2
into different existing service provider networks, i.e. it
connects the SPNs to the Hiperlan/2 data link control (DLC)
layer.
IEEE 802.1D specifies an architecture and protocol for
MAC bridges interconnecting IEEE 802 LANs by relaying
and filtering frames between the separate MACs of the
Bridged LAN. The priority mechanism within IEEE 802.1D
is handled by IEEE 802.1p, which is incorporated into IEEE
802.1D. All traffic types and their mappings presented in the
tables of this section only correspond to default values specified
in the IEEE 802.1p standard, since these parameters are
vendor specific.
IEEE 802.1p defines eight different priority levels and describes
the traffic expected to be carried within each priority
level. Each IEEE 802 LAN frame is marked with a user priority
(07) corresponding to the traffic type [8].
In order to support appropriate QoS in Hiperlan/2 the
queues are mapped to the different QoS specific DLC connections
(maximum of eight). The use of only one DLC connection
between the AP and the MT results in best effort traffic
only, while two to eight DLC connections indicates that the
MT wants to apply IEEE 802.1p. A DLC connection ID is
only MT unique, not cell unique.
The AP may take the QoS parameters into account in the
allocation of radio resources (which is out of the Hiperlan/2
scope). This means that each DLC connection, possibly operating
in both directions, can be assigned a specific QoS,
for instance in terms of bandwidth, delay, jitter and bit error
rate, as well as being assigned a priority level relative to
other DLC connections. In other words, parameters provided
by the application, including UMTS QoS parameters if desired
, are used to determine the most appropriate QoS level
to be provided by the network, and the traffic flow is treated
accordingly.
The support for IEEE 802.1p is optional for both the MT
and AP.
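As a minimal illustration (the numbering and the spreading rule below are our own, not part of the Hiperlan/2 or IEEE 802.1p specifications), mapping the eight user priorities onto the N DLC connections negotiated by an MT could look as follows; with N = 1 everything is best effort, and with N < 8 several priorities share a connection:

-module(prio_map).
-export([dlc_for_priority/2]).

%% Priority: IEEE 802.1p user priority, 0..7.
%% N: number of DLC connections negotiated by the MT, 1..8.
dlc_for_priority(_Priority, 1) ->
    1;                                    % single connection: best effort only
dlc_for_priority(Priority, N) when Priority >= 0, Priority =< 7, N >= 2, N =< 8 ->
    1 + (Priority * N) div 8.             % spread the 8 priorities over N connections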
4.4.2. End-to-end based QoS
Adding QoS, especially end-to-end QoS, to IP based connections
raises significant issues and concerns, since it represents
a digression from the "best-effort" model, which constitutes
the foundation of the great success of Internet. However,
the need for IP QoS is increasing and essential work is currently
in progress. End-to-end IP QoS requires substantial
consideration and further development.
Since the Hiperlan/2 network supports the IEEE 802.1p
priority mechanism and since Differentiated Services (DiffServ
) is priority based, the natural solution to the end-to-end
QoS problem would be the end-to-end implementation
of DiffServ. The QoS model would then appear as follows.
QoS from the MT to the AP is supported by the Hiperlan/2
specific QoS mechanisms, where the required QoS for each
connection is identified by a unique Data Link Control (DLC)
connection ID. In the AP the DLC connection IDs may be
mapped onto the IEEE 802.1p priority queues. Using the
IEEE 802.1p priority mechanisms in the Ethernet, the transition
to a DiffServ network is easily realised by mapping the
IEEE 802.1p user priorities into DiffServ based priorities.
Neither the DiffServ nor the IEEE 802.1p specification
elaborates how a particular packet stream will be treated
based on the Differentiated Services (DS) field and the layer
2 priority level. The mappings between the IEEE 802.1p priority
classes and the DiffServ service classes are also unspecified.
There is however an Integrated Services over Specific
Link Layers (ISSLL) draft mapping for Guaranteed and Controlled
Load services to IEEE 802.1p user priority, and a mapping
for Guaranteed and Controlled Load services, to DiffServ
which together would imply a DiffServ to IEEE 802.1p
user priority mapping.
DiffServ provides weaker QoS support than IntServ, but the mobility of
a Hiperlan/2 MT calls for keeping QoS signalling low, and IntServ, as
opposed to DiffServ, involves significant QoS signalling.
The DiffServ model provides less stringent support of QoS
than the IntServ/RSVP model but it has the advantage over
IntServ/RSVP of requiring less protocol signalling, which
might be a crucial factor since the mobility of a Hiperlan/2
MT indicates a need to keep the QoS signalling low. Furthermore
, the implementation of an end-to-end IntServ/RSVP
based QoS architecture is much more complex than the implementation
of a DiffServ based one.
Discussions around end-to-end QoS support raise some
critical questions that need to be considered and answered before
a proper solution can be developed; which performance
can we expect from the different end-to-end QoS models,
what level of QoS support do we actually need, how much
bandwidth and other resources are we willing to sacrifice on
QoS, and how much effort do we want to spend on the process
of developing well-supported QoS?
Relationships with other standardisation bodies
BRAN is continuing to have a close working relationship with
the following bodies:
WLAN Interworking Group (WIG)
This group met for the first time in September 2002. Its broad
aim is to provide a single point of contact for the three main
WLAN standardisation bodies (ETSI BRAN, IEEE 802.11
and MMAC HSWA) and to produce a generic approach to
both Cellular and external network interworking of WLAN
technology. It has also been decided to work on, complete
and then share a common standard for WLAN Public Access
and Cellular networks.
3rd Generation Partnership Project (3GPP)
The System Architecture working group 1 (SA1) is currently
developing a technical report detailing the requirements for
a UMTS-WLAN interworking system. They have defined
6 scenarios detailing aspects of differently coupled models,
ranging from no coupling, through loose coupling to tight
coupling. Group 2 (SA2) is currently investigating reference
architecture models, concentrating on the network interfaces
towards the WLAN. Group 3 (SA3) has now started work on
security and authentication issues with regard to WLAN
interworking. ETSI BRAN is currently liaising with the SA2
and SA3 groups.
Internet Engineering Task Force (IETF)
Within the recently created `eap' working group, extensions
to EAP (mentioned in section 4) are being considered which
will assist in system interworking.
Institute of Electrical and Electronics Engineers (IEEE)
USA
The 802.11 WLAN technical groups are continuing to progress
their family of standards. Many similarities exist between
the current 802.11a standard and Hiperlan2/HiSWANa
with regard to 3G interworking. ETSI BRAN is currently liaising
with the Wireless Next Generation (WNG) group of the
IEEE 802.11 project.
Multimedia Mobile Access Communication (MMAC) Japan
The High Speed Wireless Access (HSWA) group's HiSWANa
(High Speed Wireless Access Network system A) is essentially
identical to Hiperlan/2, except that it mandates the use
of an Ethernet convergence layer within the access point. An
agreement between ETSI BRAN and MMAC HSWA has now
been in place for some time to share the output of the ETSI
BRAN 3G interworking group.
Conclusions
This paper has addressed some of the current thinking within
ETSI BRAN (and indeed WIG) regarding the interworking of
the Hiperlan/2 and HiSWANa wireless LAN systems into a 3G
Cellular System. Much of this information is now appearing
in the technical specification being jointly produced by ETSI
and MMAC, expected to be published in the first half of 2003.
Of the two initial solutions investigated (tight and loose
coupling), current work has concentrated on the loose variant,
producing viable solutions for security, mobility and QoS.
The authentication schemes chosen will assume that EAP is
carried over the air interface, thus being compatible, at the
interworking level, with IEEE 802.11 and 3GPP.
This standardisation activity thus hopes to ensure that all
WLAN technologies can provide a value added service within
hotspot environments for both customers and operators of 3G
systems.
Acknowledgements
The authors wish to thank Maximilian Riegel (Siemens AG,
Germany), Dr. Robert Hancock and Eleanor Hepworth (Roke
Manor, UK) together with Åse Jevinger (Telia Research AB,
Sweden) for their invaluable help and assistance with this
work.
Stephen McCann holds a B.Sc. (Hons) degree from
the University of Birmingham, England. He is currently
editor of the ETSI BRAN "WLAN3G" interworking
specification, having been involved in
ETSI Hiperlan/2 standardisation for 3 years. He is
also involved with both 802.11 work and that of the
Japanese HiSWANa wireless LAN system. In the
autumn of 2002, Stephen co-organised and attended
the first WLAN Interworking Group (WIG) between
ETSI BRAN, MMAC HSWA and IEEE 802.11. He
is currently researching multimode WLAN/3G future terminals and WLAN
systems for trains and ships, together with various satellite communications
projects. In parallel to his Wireless LAN activities, Stephen has also been
actively involved in the `rohc' working group of the IETF, looking at various
Robust Header Compression schemes. Previously Stephen has been involved
with avionics and was chief software integrator for the new Maastricht air
traffic control system from 1995 to 1998. He is a chartered engineer and a
member of the Institute of Electrical Engineers.
E-mail: stephen.mccann@roke.co.uk
Helena Flygare holds an M.Sc. degree in electrical
engineering from Lund Institute of Technology,
Sweden, where she also served as a teacher in Automatic
Control for the Master Degree program. Before
her present job she worked in various roles in system design
for hardware and software development. In 1999 she joined
Radio System Malmö at
Telia Research AB. She works with specification, design
and integration between systems with different
access technologies, e.g. WLANs, 2.5/3G, etc., from
a technical as well as a business perspective. Since the year 2000, she
has been active with WLAN interworking with 3G and other public access
networks in HiperLAN/2 Global Forum, ETSI/BRAN, and 3GPP.
E-mail: helena.flygare@telia.se | Hiperlan/2;interworking;3G;ETSI;BRAN;WIG;public access |
102 | 2D Information Displays | "Many exploration and manipulation tasks benefit from a coherent integration of multiple views onto (...TRUNCATED) | "INTRODUCTION\nIn many areas knowledge about structures and their meaning\nas well as their spatial (...TRUNCATED) | Information visualization;Spreading activation |
103 | Impedance Coupling in Content-targeted Advertising | "The current boom of the Web is associated with the revenues originated from on-line advertising. Wh(...TRUNCATED) | "INTRODUCTION\nThe emergence of the Internet has opened up new marketing\nopportunities. In fact, a (...TRUNCATED) | ";advertisements;triggering page;Bayesian networks;Advertising;matching;kNN;Web;content-targeted adv(...TRUNCATED) |
104 | Implementing the IT Fundamentals Knowledge Area | "The recently promulgated IT model curriculum contains IT fundamentals as one of its knowledge areas(...TRUNCATED) | "INTRODUCTION\nThe recently promulgated IT Model Curriculum, available at\nhttp://sigite.acm.org/act(...TRUNCATED) | IT Fundamentals Knowledge Area;IT Model Curriculum |
105 | Implicit User Modeling for Personalized Search | "Information retrieval systems (e.g., web search engines) are critical for overcoming information ov(...TRUNCATED) | "INTRODUCTION\nAlthough many information retrieval systems (e.g., web search\nengines and digital li(...TRUNCATED) | "user model;interactive retrieval;personalized search;information retrieval systems;user modelling;i(...TRUNCATED) |
106 | Improvements of TLAESA Nearest Neighbour Search Algorithm and Extension to Approximation Search | "Nearest neighbour (NN) searches and k nearest neighbour (k-NN) searches are widely used in pattern (...TRUNCATED) | "Introduction\nNN and k-NN searches are techniques which find the\nclosest object (closest k objects(...TRUNCATED) | "Approximation Search;TLAESA;Distance Computaion;k Nearest Neighbour Search;Nearest Neighbour Search(...TRUNCATED) |
107 | Improving the Static Analysis of Embedded Languages via Partial Evaluation | "Programs in embedded languages contain invariants that are not automatically detected or enforced b(...TRUNCATED) | "1. One Language, Many Languages\nEvery practical programming language contains small programming\nl(...TRUNCATED) | "macros;interpreter;value flow analysis;flow analysis;set-based analysis;partial evaluation;embedded(...TRUNCATED) |